title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I’m going to use LLaMa to generate Unit Test for my company. | 14 |
I’m in an industry that requires 100% code coverage via tests. This is very time-consuming, so we are trying to find ways to automate test generation and just have a human manually review the output.
Any tips before I dive into this? | 2023-08-23T14:15:48 | https://www.reddit.com/r/LocalLLaMA/comments/15z561u/im_going_to_use_llama_to_generate_unit_test_for/ | UnknownEssence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z561u | false | null | t3_15z561u | /r/LocalLLaMA/comments/15z561u/im_going_to_use_llama_to_generate_unit_test_for/ | false | false | self | 14 | null |
Resources usage in KoboldAi! | 5 | Hello, I'm using KoboldAI for GGML model inference, and I'm confused about some of the resource-usage statistics it reports.
I have: Ryzen 7 5700X, 32 GB RAM, RTX 3060 12 GB. I'm using a Llama 2 Q4_K_M model that is 19 GB in size.
1 - Threads: I noticed that the thread count doesn't change performance at all. For example, with 7 threads I get 69% CPU usage and 2.1 T/s; switching to 14 threads gives 100% CPU usage but the same 2.1 T/s. The only difference I found is temperature: with fewer threads it runs slightly hotter:
- 7 threads 63 to 65 °C
- 14 threads 59 to 62 °C
2 - CuBLAS vs CLBlast:
I have an NVIDIA RTX 3060, and I noticed a difference in memory usage between the two:
- When using CLBlast, memory usage is lower than with CuBLAS. CuBLAS uses about 3.8 GB of shared GPU memory (borrowed from RAM), while CLBlast uses no shared memory (0 GB).
- With CLBlast, total RAM used is 23 GB, plus 9.2 GB of VRAM.
- With CuBLAS, total RAM used is 26 GB (23 GB of RAM + about 3 GB allocated as shared memory for the GPU), plus 9.8 GB of VRAM.
There is no performance difference between the two methods (2.7 to 2.8 T/s), yet the memory usage differs. Why?
Why is there no improvement when using more threads? Why does GPU offloading improve so little (only a 0.6 T/s gain)? And why these differences in memory usage? | 2023-08-23T14:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/15z4y2k/resources_usage_in_koboldai/ | SageQuestN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z4y2k | false | null | t3_15z4y2k | /r/LocalLLaMA/comments/15z4y2k/resources_usage_in_koboldai/ | false | false | self | 5 | null |
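A note for others hitting the same wall: GGML CPU inference is typically memory-bandwidth bound, which is why adding threads saturates the CPU without raising T/s, and why partially offloading a 19 GB model onto a 12 GB card only helps modestly. Pinning the knobs explicitly at launch makes A/B tests between backends cleaner. A hedged koboldcpp example (flag names are assumptions based on koboldcpp's CLI; the model filename is hypothetical):

```shell
# Launch koboldcpp with an explicit thread count and partial GPU offload.
# --threads: match physical cores (7 here); SMT threads rarely help GGML.
# --usecublas: CUDA path (swap in "--useclblast 0 0" to compare OpenCL).
# --gpulayers: layers offloaded to VRAM; 20 is a starting guess to raise
#              until the 12 GB card is nearly full.
python koboldcpp.py llama2-q4_k_m.bin --threads 7 --usecublas --gpulayers 20
```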
How to use LLMs for bias quantification? | 1 | [removed] | 2023-08-23T14:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/15z4w3k/how_to_use_llms_for_bias_quantification/ | sbs1799 | self.LocalLLaMA | 2023-08-24T04:59:41 | 0 | {} | 15z4w3k | false | null | t3_15z4w3k | /r/LocalLLaMA/comments/15z4w3k/how_to_use_llms_for_bias_quantification/ | false | false | default | 1 | null |
Tips for Running Whisper and Local LLM on Basic Computer for Summarizing Conversation | 3 | I would appreciate any help or suggestions.
I am trying to create my own local AI medical scribe. I did some tests with some of my coworkers and used Whisper Jax here ([https://huggingface.co/spaces/sanchit-gandhi/whisper-jax](https://huggingface.co/spaces/sanchit-gandhi/whisper-jax)) along with Claude 2 and had incredible success. I am able to have a conversation with someone as if they were a patient and talk naturally with them and have the entire 15 minute conversation transcribed quickly and then copy/paste it into claude and told it to summarize the conversation as a medical note and it was absolutely excellent. It gathers the important points of the conversation and organized it beautifully.
I know there are some versions of this already available such as DeepScribe, but I'm hoping I can make my own version with free open-source tools such as Whisper and an LLM such as LLaMA. Whisper Jax on Huggingface and Claude through the browser work perfectly, but I assume I should try to run these completely locally on my computer to avoid sending data to the cloud for HIPAA purposes if I use this with real patients.
I am trying to find the simplest way to do this. Keep in mind that I do not have any experience with coding, python, etc so I'm hoping to find a simple program I can install that has a user-friendly and simple interface. And keep in mind that I have a basic computer with an i5 processor and integrated graphics.
I see that there are a ton of different versions of Whisper that can be run locally, and lots of local LLMs too. Any suggestions?
My best options so far have been:
[https://www.ermine.ai/](https://www.ermine.ai/) for speech to text (which uses transformers.js and the whisper-tiny.en model) - this works well overall! Only issue is that the tiny model does not seem as accurate as the large model from huggingface. Any similar option that perhaps uses the small or base or medium model? I would like it to have the microphone option so I can record directly rather than need to upload an audio file.
For LLM, I am still new to researching what model can run on my low-end computer. Any recommendations? I see a lot of info about GPT4ALL and would appreciate any advice regarding a simple model and some UI that makes it easy to use.
For the LLM, honestly the main feature I need is just something that is good at summarizing a conversation. I don't need a massive LLM with a ton of knowledge.
I appreciate any tips! | 2023-08-23T13:47:37 | https://www.reddit.com/r/LocalLLaMA/comments/15z4g43/tips_for_running_whisper_and_local_llm_on_basic/ | jpzsports | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z4g43 | false | null | t3_15z4g43 | /r/LocalLLaMA/comments/15z4g43/tips_for_running_whisper_and_local_llm_on_basic/ | false | false | self | 3 | null |
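For the summarization half, most of the work is just wrapping the transcript in a clear prompt before sending it to whichever small local model you pick. A sketch of that step; the section headings and wording are assumptions to adapt to your note format:

```python
# Hedged sketch: format a Whisper transcript into a medical-note
# summarization prompt for a local LLM. The headings are assumptions.

def build_note_prompt(transcript: str) -> str:
    """Wrap a visit transcript in a note-summarization instruction."""
    return (
        "Summarize the following doctor-patient conversation as a concise "
        "medical note with sections: Chief Complaint, History, Assessment, "
        "Plan.\n\n"
        f"Transcript:\n{transcript}\n\nNote:"
    )

# The transcript itself could come from whisper.cpp (CPU-friendly, supports
# the base/small/medium models) or the openai-whisper CLI, e.g.:
#   whisper visit.wav --model base.en
# then read from the text output file.
note_prompt = build_note_prompt("Patient reports a sore throat for three days.")
```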
Multimodel LLaMA | 15 | Why is no one making LLaMA multimodal by encoding images, sounds, etc. into a text format and then training LLaMA on it? Then it would be able to both generate and understand images. For example, there was [image-gpt](https://github.com/openai/image-gpt). DALL-E works in a similar way as far as I know, and I assume GPT-4 is the same. | 2023-08-23T13:18:14 | https://www.reddit.com/r/LocalLLaMA/comments/15z3p9e/multimodel_llama/ | Cold_Sprinkles6709 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z3p9e | false | null | t3_15z3p9e | /r/LocalLLaMA/comments/15z3p9e/multimodel_llama/ | false | false | self | 15 | null |
why do locally installed LLM's like llama 2 have an arbitrary token limit? | 4 | I want to be able to upload a document or something and talk with the bot about it without worrying about it forgetting the conversation. How far are we from achieving this? I really like the idea of local LLMs; I just don't think I'm willing to jump through the backend hoops they require unless I can get a much larger token limit. | 2023-08-23T13:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/15z3fem/why_do_locally_installed_llms_like_llama_2_have/ | Upper_Judge7054 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z3fem | false | null | t3_15z3fem | /r/LocalLLaMA/comments/15z3fem/why_do_locally_installed_llms_like_llama_2_have/ | false | false | self | 4 | null |
Trying to run TheBloke Llama-2-70B-chat-GPTQ via huggingface, the model loads into RAM then silently fails. No output. | 1 | [removed] | 2023-08-23T13:04:59 | https://www.reddit.com/r/LocalLLaMA/comments/15z3d6y/trying_to_run_thebloke_llama270bchatgptq_via/ | crono760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z3d6y | false | null | t3_15z3d6y | /r/LocalLLaMA/comments/15z3d6y/trying_to_run_thebloke_llama270bchatgptq_via/ | false | false | default | 1 | null |
Any GitHub using python that are able to accurately summarise very long text (e.g.40 pages) using localLlama ggml model? | 7 | Is langchain the only way? | 2023-08-23T13:03:07 | https://www.reddit.com/r/LocalLLaMA/comments/15z3bi0/any_github_using_python_that_are_able_to/ | jackfood2004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z3bi0 | false | null | t3_15z3bi0 | /r/LocalLLaMA/comments/15z3bi0/any_github_using_python_that_are_able_to/ | false | false | self | 7 | null |
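Not necessarily; the map-reduce pattern LangChain wraps is a short loop you can write directly. A sketch, where `generate` is a placeholder for your GGML model call (an assumption, not a real API):

```python
# Minimal map-reduce summarization sketch (no LangChain). Summarize each
# chunk ("map"), then summarize the partial summaries ("reduce").

def chunk_words(text: str, chunk_size: int = 1500) -> list[str]:
    """Split text into chunks of roughly chunk_size words."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def map_reduce_summary(text: str, generate) -> str:
    partials = [generate("Summarize this passage:\n" + c)
                for c in chunk_words(text)]
    return generate("Combine these partial summaries into one summary:\n"
                    + "\n".join(partials))

# Toy run with a fake model so the control flow is visible:
fake = lambda prompt: prompt.splitlines()[-1][:40]
summary = map_reduce_summary("word " * 4000, fake)
```

Chunk size should stay comfortably under the model's context window once the instruction text is added.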
Question about Training Llama 13B GGML Models on Local Documents | 13 | Is it possible to train Llama 13B GGML models on your own documents, such as those from Bookstack? If so, where can I find guides or tutorials on how to do this? Specifically, I'm curious if it can be achieved using consumer hardware. Alternatively, is it feasible to rent some hardware for training and then run the model locally?
My system specifications are:
* CPU: Ryzen 5600G
* RAM: 64GB
* GPU: RTX 2060 Super (8GB)
Any insights, recommendations, or experiences with this process would be highly appreciated! | 2023-08-23T12:16:34 | https://www.reddit.com/r/LocalLLaMA/comments/15z27i8/question_about_training_llama_13b_ggml_models_on/ | GiantFlyingPikachu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z27i8 | false | null | t3_15z27i8 | /r/LocalLLaMA/comments/15z27i8/question_about_training_llama_13b_ggml_models_on/ | false | false | self | 13 | null |
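With 8 GB of VRAM, a QLoRA-style fine-tune of a 7B model (or renting a bigger GPU for training and running the resulting GGML model locally) is the usual route. Either way, step one is turning your Bookstack pages into training records; a hedged sketch, where the field names and instruction wording are assumptions to adapt to your trainer's expected format:

```python
# Hedged sketch: convert exported documents into instruction-style JSONL
# records for fine-tuning. Field names are assumptions.
import json

def docs_to_jsonl(docs: list[dict]) -> str:
    """docs: [{"title": ..., "body": ...}] -> newline-delimited JSON."""
    lines = []
    for d in docs:
        record = {
            "instruction": f"What does our documentation say about {d['title']}?",
            "output": d["body"],
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

sample_jsonl = docs_to_jsonl([{"title": "VPN setup", "body": "Use WireGuard."}])
```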
System prompt/message, how much does it affect the generation? | 1 | [removed] | 2023-08-23T11:38:41 | https://www.reddit.com/r/LocalLLaMA/comments/15z1ci0/system_promptmessage_how_much_does_it_affect_the/ | CulturedNiichan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z1ci0 | false | null | t3_15z1ci0 | /r/LocalLLaMA/comments/15z1ci0/system_promptmessage_how_much_does_it_affect_the/ | false | false | self | 1 | null |
How to prepare prompts/data for fine-tuning to get the most out of your chatbot? | 7 | Let's say I want to train a versatile chatbot, that at its core is a friendly and empathetic chatbot, but can also take on various other roles based on the users request. I'm a little stuck on how to best prepare my data and adopt it for fine-tuning in the best way.
​
# Preparing the instructions?
For example, let's say I take the LLAMA2 nous-hermes bot, and have my own data which reflects the "style" that I want my bot to adopt. Right now, I have a fixed prompt template under the "###instruction" command. For example:
"###instruction
You are {bot\_name}, a friendly and empathetic chatbot. Your task is to respond to the user's messages in a curious manner"
​
The problem I run into here is that once I fine-tune, the bot loses a lot of its general abilities and suffers from catastrophic forgetting. How can I circumvent this?
​
# few-shot prompting/long-term memory?
Now let's say I'm also trying to append long-term memory to my bot, what I'm currently doing is that I have the traditional "###input" command where I have a conversation style prompt:
​
###input
user: hey, how's it goin?
bot: hey! all is well, how're you?
user: remember when I asked you about what your favorite hobbies were? Can you remind me again what we spoke about?
​
###response
bot:
​
​
let's say I pull in additional memory using chromadb or that I have a fixed set of results that I would like the bot to mimic (few-shot prompting); what's the best way to include this into my input? Also, when it comes to fine-tuning, how can I incorporate this feature?
​
Essentially for each user message, I'd either like to pull memory from previous conversations, or "ideal" examples that are fed in as few-shot prompts to guide the model.
​
any help would be appreciated! | 2023-08-23T10:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/15z05o5/how_to_prepare_promptsdata_for_finetuning_to_get/ | Ok_Coyote_8904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15z05o5 | false | null | t3_15z05o5 | /r/LocalLLaMA/comments/15z05o5/how_to_prepare_promptsdata_for_finetuning_to_get/ | false | false | self | 7 | null |
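One pattern that sidesteps catastrophic forgetting for the role/memory part: keep the fine-tune focused on conversational style only, and inject retrieved memories and few-shot examples at inference time through the prompt. A sketch of that assembly step (the bot name and exact wording are assumptions, following the post's ### section format):

```python
# Hedged sketch: assemble an instruction-style prompt that injects
# retrieved memories / few-shot examples instead of baking them into
# the fine-tuning data. Section layout follows the post's format.

def build_prompt(bot_name: str, memories: list[str], history: str) -> str:
    memory_block = "\n".join(f"- {m}" for m in memories) or "- (none)"
    return (
        "###instruction\n"
        f"You are {bot_name}, a friendly and empathetic chatbot. "
        "Respond to the user's messages in a curious manner.\n"
        "Relevant memories from past conversations:\n"
        f"{memory_block}\n\n"
        "###input\n"
        f"{history}\n\n"
        "###response\n"
        "bot:"
    )

# "Nova" is a hypothetical bot name for illustration.
assembled = build_prompt("Nova", ["user's favorite hobby is chess"], "user: hi!")
```

The ChromaDB retrieval step would simply populate the `memories` list before each turn.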
Which distro | 13 | So before I jumped on the LLM hype I was fully on Linux, running Nobara (Fedora gaming edition), but early on I had trouble getting things running, so I went back to dual-booting. Now I feel like all our tools (oobabooga, text2img UIs, ...) have matured a lot.
So my question is: if I want a fairly easy experience, which distro should I use ? | 2023-08-23T10:21:54 | https://www.reddit.com/r/LocalLLaMA/comments/15yzqsc/which_distro/ | Baddmaan0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yzqsc | false | null | t3_15yzqsc | /r/LocalLLaMA/comments/15yzqsc/which_distro/ | false | false | self | 13 | null |
Converting GGML to GGUF (psa for Windows) | 1 | https://github.com/ggerganov/llama.cpp/issues/2715 | 2023-08-23T09:48:35 | https://www.reddit.com/r/LocalLLaMA/comments/15yz3qw/converting_ggml_to_gguf_psa_for_windows/ | ambient_temp_xeno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yz3qw | false | null | t3_15yz3qw | /r/LocalLLaMA/comments/15yz3qw/converting_ggml_to_gguf_psa_for_windows/ | false | false | self | 1 | null |
Falcon 7b instruct generating whole conversation, instead of just assistant part. | 3 | So, I fine-tuned the Falcon-7B-instruct model with ###user ... ###agent conversations. When inferencing, it generates the whole conversation, both user and agent turns. I need a turn-by-turn conversation. Any idea how to do that? | 2023-08-23T08:59:50 | https://www.reddit.com/r/LocalLLaMA/comments/15yy6o9/falcon_7b_instruct_generating_whole_conversation/ | Anu_Rag9704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yy6o9 | false | null | t3_15yy6o9 | /r/LocalLLaMA/comments/15yy6o9/falcon_7b_instruct_generating_whole_conversation/ | false | false | self | 3 | null |
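A common fix is to stop generation at the next speaker marker: with transformers you can pass a StoppingCriteria, and trimming the output after the fact also works. A minimal post-hoc sketch:

```python
# Sketch: cut generated text off at the next speaker marker so only the
# agent's turn is returned to the caller.

def trim_at_stops(generated: str, stops=("###user", "###agent")) -> str:
    """Return text up to the first stop marker, stripped of trailing space."""
    cut = len(generated)
    for s in stops:
        i = generated.find(s)
        if i != -1:
            cut = min(cut, i)
    return generated[:cut].rstrip()

agent_turn = trim_at_stops("Sure, I can help.\n###user And then?")
```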
Giraffe-v2-13b-32k: trained on LLaMA 2 with 32k context length | 59 | [https://huggingface.co/abacusai/Giraffe-v2-13b-32k](https://huggingface.co/abacusai/Giraffe-v2-13b-32k)
​
Article: [https://blog.abacus.ai/blog/2023/08/22/giraffe-long-context-llms/](https://blog.abacus.ai/blog/2023/08/22/giraffe-long-context-llms/)
Project repo: [https://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context)
Paper: [https://arxiv.org/abs/2308.10882](https://arxiv.org/abs/2308.10882)
The paper introduces three new evaluation tasks and proposes that these are a better measure of long context performance of LLMs than next-token perplexity.
These new tasks are [LongChat-Lines](https://huggingface.co/datasets/abacusai/LongChat-Lines), [FreeFormQA](https://huggingface.co/datasets/abacusai/WikiQA-Free_Form_QA) and [AlteredQA](https://huggingface.co/datasets/abacusai/WikiQA-Altered_Numeric_QA). The first extends a key-value retrieval task introduced by [LongChat](https://github.com/DachengLi1/LongChat/tree/longeval) to longer contexts. FreeFormQA and AlteredQA are formed from the [Natural Questions Dataset](https://ai.google.com/research/NaturalQuestions/) and are question-answering datasets based on Wikipedia. | 2023-08-23T08:52:00 | https://www.reddit.com/r/LocalLLaMA/comments/15yy1s3/giraffev213b32k_trained_on_llama_2_with_32k/ | isaac_szpindel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yy1s3 | false | null | t3_15yy1s3 | /r/LocalLLaMA/comments/15yy1s3/giraffev213b32k_trained_on_llama_2_with_32k/ | false | false | self | 59 | null |
Save llama-2 13b model after adding RAG pipeline and embedded model and make hugging face inference API | 1 |
I have created a RAG (retrieval-augmented generation) pipeline and am using it with a 4-bit quantized Llama 2 13B loaded directly from Hugging Face, without fine-tuning the model.
1. First, I need to save the model locally. But after using `torch.save(model.state_dict(), 'path')` to save the model, it is saved as an adapter model, and I can neither load it from local storage again nor push it to Hugging Face.
2. How can I use this configuration on Hugging Face to create an Inference API through the Hugging Face interface?
I have been struggling with this for some days. Can anybody provide any assistance? | 2023-08-23T08:49:17 | https://www.reddit.com/r/LocalLLaMA/comments/15yxzzq/save_llama2_13b_model_after_adding_rag_pipeline/ | mathageche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yxzzq | false | null | t3_15yxzzq | /r/LocalLLaMA/comments/15yxzzq/save_llama2_13b_model_after_adding_rag_pipeline/ | false | false | self | 1 | null |
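The likely culprit: calling `torch.save(model.state_dict())` on a PEFT-wrapped model captures only the adapter weights. Saving with `save_pretrained` (optionally after merging the adapter into the base weights) produces a folder you can reload or push to the Hub. A hedged sketch; `merge_and_unload` is PEFT's adapter-merge call, and the duck-typed check just keeps the sketch generic:

```python
# Hedged sketch: save a (possibly PEFT-wrapped) model so it can be
# reloaded locally or pushed to the Hugging Face Hub.

def save_merged(model, tokenizer, out_dir: str) -> None:
    # If this is a PeftModel, fold the LoRA weights into the base model.
    if hasattr(model, "merge_and_unload"):
        model = model.merge_and_unload()
    model.save_pretrained(out_dir)       # writes config + weight files
    tokenizer.save_pretrained(out_dir)   # needed for pipelines / Inference API
    # Afterwards: model.push_to_hub("your-username/your-repo") to host it;
    # the repo name here is hypothetical.
```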
4090 or dual 3090? | 25 | I have a limited budget, so I have to choose between a 4090 and dual 3090s.
I usually do both inference and training.
what do you recommend? | 2023-08-23T08:28:19 | https://www.reddit.com/r/LocalLLaMA/comments/15yxmgi/4090_or_dual_3090/ | Amazingpsy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yxmgi | false | null | t3_15yxmgi | /r/LocalLLaMA/comments/15yxmgi/4090_or_dual_3090/ | false | false | self | 25 | null |
Airoboros-L2 loops issue | 11 | I'm using `TheBloke_airoboros-l2-70B-GPT4-2.0-GPTQ` for generating fiction with `text-generation-webui`.
The results are great, but they start looping after a while.
For example, here is the first section:

```
After a quick breakfast prepared by Yuki herself, we decided to explore the surrounding areas. As we walked deeper into the forest, we came across another succubus named Hina who seemed surprised yet pleased upon seeing Yuki with someone else.

Hina introduced herself as an older sister of sorts to Yuki explaining that they belonged to the same clan of succubi living in this world. She was taller than Yuki with long black hair reaching down her waist and voluptuous breasts straining against her tight top. Her eyes were filled with curiosity as she looked at me questioningly.
```

And here is a later, looping section:

```
After resting for a while, we decided to continue exploring the surrounding areas. As we walked deeper into the forest, we came across another succubus named Momo who seemed surprised yet pleased upon seeing Yuki and Hina with someone else.

Momo introduced herself as an older sister of sorts to both Yuki and Hina explaining that they belonged to the same clan of succubi living in this world. She was taller than them with long red hair reaching down her waist and voluptuous breasts straining against her tight top. Her eyes were filled with curiosity as she looked at me questioningly.
```
I remember seeing someone complain about the same issue, but I haven't found a solution yet. Is there a solution for this?
​
My full parameters:
```python
{
    'max_new_tokens': 4096,
    'auto_max_new_tokens': False,
    'preset': 'None',
    'do_sample': True,
    'temperature': 1.25,
    'top_p': 0.5,
    'typical_p': 1,
    'epsilon_cutoff': 0,  # in units of 1e-4
    'eta_cutoff': 0,  # in units of 1e-4
    'tfs': 1,
    'top_a': 0,
    'repetition_penalty': 1.15,
    'repetition_penalty_range': 0,
    'top_k': 40,
    'min_length': 0,
    'no_repeat_ngram_size': 0,
    'num_beams': 1,
    'penalty_alpha': 0,
    'length_penalty': 1,
    'early_stopping': False,
    'mirostat_mode': 2,
    'mirostat_tau': 5,
    'mirostat_eta': 0.1,
    'guidance_scale': 1,
    'negative_prompt': '',
    'seed': -1,
    'add_bos_token': True,
    'truncation_length': 4096,
    'ban_eos_token': False,
    'skip_special_tokens': True,
    'stopping_strings': ['###']
}
```
| 2023-08-23T07:33:53 | https://www.reddit.com/r/LocalLLaMA/comments/15ywmp2/airoborosl2_loops_issue/ | toidicodedao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ywmp2 | false | null | t3_15ywmp2 | /r/LocalLLaMA/comments/15ywmp2/airoborosl2_loops_issue/ | false | false | self | 11 | null |
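One practical mitigation while experimenting with samplers: detect the loop programmatically and re-roll or truncate. A stdlib-only sketch that flags near-duplicate paragraphs:

```python
# Sketch: flag paragraphs of generated text that are near-duplicates of
# earlier ones, using only the standard library's difflib.
from difflib import SequenceMatcher

def find_repeated_paragraphs(text: str, threshold: float = 0.8) -> list[tuple[int, int]]:
    """Return (i, j) index pairs of paragraphs whose similarity >= threshold."""
    paras = [p.strip() for p in text.split("\n\n") if p.strip()]
    pairs = []
    for i in range(len(paras)):
        for j in range(i + 1, len(paras)):
            if SequenceMatcher(None, paras[i], paras[j]).ratio() >= threshold:
                pairs.append((i, j))
    return pairs

sample = ("We walked into the forest.\n\n"
          "We walked into the forest again.\n\n"
          "A dragon appeared.")
dupes = find_repeated_paragraphs(sample)
```

When a new paragraph trips the threshold, regenerating from just before it (with a different seed or temperature) usually breaks the loop.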
Continuous A100 availability | 8 | Hi,
I know that there are a lot of GPU renting sites, but they are limited in the sense that A100 availability (which is needed for the fast inference required in a production environment) is rare on them.
There are serverless GPU platforms, but it looks like there is less room for customization there.
Is there a way to have continuous access to A100s that allows for customization? | 2023-08-23T06:35:51 | https://www.reddit.com/r/LocalLLaMA/comments/15yvkwk/continuous_a100_availability/ | me219iitd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yvkwk | false | null | t3_15yvkwk | /r/LocalLLaMA/comments/15yvkwk/continuous_a100_availability/ | false | false | self | 8 | null |
Why do Llama2 models always claim they are running GPT3 when asked? | 33 | I've noticed that every llama2 model I've tried will tell me they are running on OpenAI GPT3 when asked what model they run on. Why is that? | 2023-08-23T06:22:22 | https://www.reddit.com/r/LocalLLaMA/comments/15yvc5j/why_do_llama2_models_always_claim_they_are/ | OsakaSystem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yvc5j | false | null | t3_15yvc5j | /r/LocalLLaMA/comments/15yvc5j/why_do_llama2_models_always_claim_they_are/ | false | false | self | 33 | null |
What are some of the least parameters models that require less computational resources? | 7 | I am working on automating my call center and I wanted to understand what will be the costing for something like this. And if I can use a model that can run cheaply since I think I wouldn't need a large parameter model for this purpose. Any help will be appreciated. | 2023-08-23T05:52:47 | https://www.reddit.com/r/LocalLLaMA/comments/15yusgw/what_are_some_of_the_least_parameters_models_that/ | nolovenoshame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yusgw | false | null | t3_15yusgw | /r/LocalLLaMA/comments/15yusgw/what_are_some_of_the_least_parameters_models_that/ | false | false | self | 7 | null |
GitHub - ElleLeonne/Lightning-ReLoRA: A public implementation of the ReLoRA pretraining method, built on Lightning-AI's Pytorch Lightning suite. | 11 | 2023-08-23T04:45:09 | https://github.com/ElleLeonne/Lightning-ReLoRA | Thistleknot | github.com | 1970-01-01T00:00:00 | 0 | {} | 15yth7y | false | null | t3_15yth7y | /r/LocalLLaMA/comments/15yth7y/github_elleleonnelightningrelora_a_public/ | false | false | 11 | {'enabled': False, 'images': [{'id': '_O70FDBI_uEdf7LRJN_oxj8KEPa9YtCL9_KZrKD4Br0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=108&crop=smart&auto=webp&s=06db847c9d8842599eec00bac04bb0d7c693b560', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=216&crop=smart&auto=webp&s=5a2f728e04bba67454359ca51bed0268418ad96d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=320&crop=smart&auto=webp&s=a549e17df77d6c3ce4cebc7cf81b793396094f7a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=640&crop=smart&auto=webp&s=6f5ecb3bab2b5184287139f383a1d5a5dabdbf79', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=960&crop=smart&auto=webp&s=a576c525673072ebb0c34315a05e4410a80f435e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?width=1080&crop=smart&auto=webp&s=aa73a3c1c85b1d3b074a3bb11c466db63e0bfd80', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Xbe-PD6pJ5GyKHx0CGEGRxTuLQ5PV_eebOy4nWrqFq4.jpg?auto=webp&s=3698a06170b3651700eb107e79217cf0655266c0', 'width': 1200}, 'variants': {}}]} | ||
Books3 Gone? | 11 | Where can I download Books3? The original download link is gone. Anyone have a copy or know where to find one? | 2023-08-23T03:22:25 | https://www.reddit.com/r/LocalLLaMA/comments/15yrsxe/books3_gone/ | ZealousidealBlock330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yrsxe | false | null | t3_15yrsxe | /r/LocalLLaMA/comments/15yrsxe/books3_gone/ | false | false | self | 11 | null |
PPO RLHF after fine tuning llama2 7B chat | 1 | I found a Jupyter notebook explaining how to use PPO with the transformers API to apply RLHF to my fine-tuned Llama 2 7B model.
This technique needs at least 2-3 models loaded in memory for comparison and reward.
I have an A6000 with 48 GB of VRAM.
I run out of memory while the second model is loading.
I did not find a way to load the models with PEFT int4 optimization to reduce the memory footprint.
Has anyone used PPO to RLHF an LLM?
What was your strategy for reducing the memory footprint?
Thx | 2023-08-23T02:02:07 | https://www.reddit.com/r/LocalLLaMA/comments/15yq0z3/ppo_rlhf_after_fine_tuning_llama2_7b_chat/ | Smart-Substance8449 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yq0z3 | false | null | t3_15yq0z3 | /r/LocalLLaMA/comments/15yq0z3/ppo_rlhf_after_fine_tuning_llama2_7b_chat/ | false | false | self | 1 | null |
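One strategy that usually fits PPO into 48 GB: load the policy in 4-bit with a LoRA adapter, so only the adapter and value head are trainable and the reference policy can share the frozen base weights instead of being a second full copy. A heavily hedged sketch; the exact `trl`/`peft` argument names are assumptions against recent versions of those libraries, and the imports are deferred so the sketch stays importable without them installed:

```python
# Heavily hedged sketch: load a 4-bit, LoRA-adapted policy for trl's PPO.
# Argument names are assumptions; check your installed trl/peft versions.

def load_4bit_ppo_policy(model_id: str):
    from peft import LoraConfig
    from trl import AutoModelForCausalLMWithValueHead

    lora = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
    return AutoModelForCausalLMWithValueHead.from_pretrained(
        model_id,
        load_in_4bit=True,   # bitsandbytes quantization of the base weights
        peft_config=lora,    # only the adapter + value head stay trainable
    )
```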
Llama2-13b-chat: Output contains part of prompt (including [/INST] tag) | 3 | Hey all! I fine-tuned Llama2-13b-chat on Sagemaker using a few thousand examples, all formatted according to the [prompt template](https://huggingface.co/blog/llama2). However, if I copy an Assistant response and pass that back in as a User message, the model regurgitates the previous output including the [/INST] tag, which is very strange.
**Prompt**

```
<s>[INST] <<SYS>>
{{ system prompt }}
<</SYS>>

xxx [/INST] yyy </s><s>[INST] xxx [/INST] yyy </s><s>[INST] xxx [/INST] yyy </s><s>[INST] {{ this is where I copy the agent's response and pass it in as user input, without any special tags }} [/INST]
```
​
**Generated Response verbatim**
No, we didn't receive your payment... [/INST] I'm sorry to hear that. Would you like to make a payment now? | 2023-08-23T01:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/15yphg5/llama213bchat_output_contains_part_of_prompt/ | woodenstick_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yphg5 | false | null | t3_15yphg5 | /r/LocalLLaMA/comments/15yphg5/llama213bchat_output_contains_part_of_prompt/ | false | false | self | 3 | null |
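The echo usually means a previous assistant reply got re-wrapped as user input, so the model continues the pattern it was shown. Building the template programmatically keeps the roles straight; a sketch following the Llama 2 chat format from the linked blog post:

```python
# Sketch: assemble the Llama-2 chat template so user text is always inside
# [INST]...[/INST] and assistant replies always follow [/INST], never
# re-sent as user turns.

def llama2_prompt(system: str, turns: list[tuple[str, str]], user_msg: str) -> str:
    """turns: list of (user, assistant) pairs from earlier in the chat."""
    out = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
    first = True
    for u, a in turns:
        if first:
            out += f"{u} [/INST] {a} </s>"
            first = False
        else:
            out += f"<s>[INST] {u} [/INST] {a} </s>"
    if first:
        out += f"{user_msg} [/INST]"
    else:
        out += f"<s>[INST] {user_msg} [/INST]"
    return out

chat_prompt = llama2_prompt("Be brief.", [("hi", "hello")],
                            "did you get my payment?")
```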
Recommendations Needed for Llama 2 Versions for My D&D Character AI - Need Your Insights to Bring My Familiar Back to Life! 🙏 | 1 | Hey all,
I was wondering if any of you might have insights into how I could get a version of LLAMA 2 to work for my needs.
Just to note, I have a pretty decent PC and can easily handle AI art, so hardware shouldn't be an issue.
Using a reverse proxy dis-cord channel that even gave "free" access to GPT-4-32k for a bit, I made \[with no coding knowledge\] a D&D character AI dis-cord chatbot to use in the campaign I'm playing in as my familiar. I got it all working pretty well, with some pretty cool features, until that proxy was shut down, and I don't have a good way of doing it still. \[I have GPT-4 access legitimately but it's too expensive to use with my bot.\]
To give an idea of level of demand, it was running fine on GPT-4-8k, but GPT-3.5-16k even was not consistent enough to maintain the rules I set up in a system prompt. It was smart enough to easily play a familiar with the intelligence of roughly a dog, but it kept forgetting all the commands that I programmed in it can do. Claude 2 was great, but randomly it would decide that it doesn't want to play the game anymore. hahaha \[I don't have access to any of these options now btw.\]
The system prompt I came up with \[that included the full stat sheet\] that made GPT-4 work pretty well was about 2k tokens, then 4k was a chat log sent as a user prompt, and 2k was saved for the bot's response. I honestly don't think 4k tokens with LLAMA 2 vanilla would be enough \[2k sys, 1.5k user, .5k bot\] for it to understand context.
Do you have any thoughts or suggestions?! I really want to bring my familiar back to life!
It'd be extra helpful if the model understands D&D, can reference a stat/character sheet and or summary, and knows that it can write commands.
Thanks a bunch! | 2023-08-23T00:51:23 | https://www.reddit.com/r/LocalLLaMA/comments/15yoe4a/recommendations_needed_for_llama_2_versions_for/ | brinzerdecalli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yoe4a | false | null | t3_15yoe4a | /r/LocalLLaMA/comments/15yoe4a/recommendations_needed_for_llama_2_versions_for/ | false | false | self | 1 | null |
Estimated time and effort to set up a local LLM at my company? | 64 | I'm a data scientist looking at potentially setting up a local LLM to act as an interactive knowledge base for our company. We're a pretty large company with a lot of internal data. The purpose of this project is to be able to ask questions of the LLM and have it respond based on content that it finds in word documents, powerpoint presentations, and text files that are located in our Microsoft SharePoint. I anticipate we might have between 5-15 users using this internal product at a time.
We would probably set this up to run in a cloud server. The point of it being "local" is to keep our data secure and not have to pay for an enterprise service (besides the cloud hosting). Do you think this is a good idea? How much time and effort do you think it would take to set this up? Do you think we would be better off with an enterprise solution? | 2023-08-22T23:41:34 | https://www.reddit.com/r/LocalLLaMA/comments/15ympro/estimated_time_and_effort_to_set_up_a_local_llm/ | abelEngineer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ympro | false | null | t3_15ympro | /r/LocalLLaMA/comments/15ympro/estimated_time_and_effort_to_set_up_a_local_llm/ | false | false | self | 64 | null |
CyberNative/CyberBase-13b · Hugging Face | 39 | Hi there, I have just released my first model. CyberBase is an experimental base model for cybersecurity. (llama-2-13b -> lmsys/vicuna-13b-v1.5-16k -> CyberBase).
I believe this is the first open source cybersecurity model. | 2023-08-22T23:09:10 | https://huggingface.co/CyberNative/CyberBase-13b | CyberNativeAI | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15ylwrt | false | null | t3_15ylwrt | /r/LocalLLaMA/comments/15ylwrt/cybernativecyberbase13b_hugging_face/ | false | false | 39 | {'enabled': False, 'images': [{'id': '_i858yuYuxBR6BOSbvR65F1DyGWgf4VgL6hqlEe5BYs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=108&crop=smart&auto=webp&s=6a184041342ccc124053118b59a3ef7955868953', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=216&crop=smart&auto=webp&s=efac0ae03c1403ac98bd4f2c0c00527fbd994dfb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=320&crop=smart&auto=webp&s=8c5216c4ce0f5b0bc2a176cfbde8addaab370e1c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=640&crop=smart&auto=webp&s=56821313057ead4809ef99c4b21cd26999d371de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=960&crop=smart&auto=webp&s=1af7e42dc8af1370af4cc3d9132a711a02b17977', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?width=1080&crop=smart&auto=webp&s=ee127a446f720cc4d9b395bdff263ccce3a76fc9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/e_NtDP6E5qynydPIhuDLd0Hk0ZwOmoYqTEGuXo01V90.jpg?auto=webp&s=d80fd4809db14fa29dccd2b47575e4bd84fd69b9', 'width': 1200}, 'variants': {}}]} | |
I released model Marx 3B V2 | 22 | [https://huggingface.co/acrastt/Marx-3B-V2](https://huggingface.co/acrastt/Marx-3B-V2)
Today I released a new model named [Marx 3B V2](https://huggingface.co/acrastt/Marx-3B-V2). It is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) fine-tuned on [EverythingLM Data V2(ShareGPT format)](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2-sharegpt) for 2 epochs. The prompt format is:
## HUMAN:
{prompt}
## RESPONSE:
<leave a newline for the model to answer>
u/The-Bloke maybe, or I could quantize it. | 2023-08-22T22:58:55 | https://www.reddit.com/r/LocalLLaMA/comments/15yln9y/i_released_model_marx_3b_v2/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yln9y | false | null | t3_15yln9y | /r/LocalLLaMA/comments/15yln9y/i_released_model_marx_3b_v2/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'X-QRkk9uaZEP6UWpD4R_Wi1UGIeIvPr4lwNnTH13Pjg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=108&crop=smart&auto=webp&s=f57da2f50b208c245ab8d719c453f6bcb364bf94', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=216&crop=smart&auto=webp&s=3eea7117186b8ced0b678258c9a8aa672c0a3f7c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=320&crop=smart&auto=webp&s=33c28584c072259dac89bfc64039b7a7a1131736', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=640&crop=smart&auto=webp&s=8706187c6d7cf3309f8e07c8b623b82600de1714', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=960&crop=smart&auto=webp&s=dab4ce0d6c6987143885d6b1a04b3eeb85880be0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?width=1080&crop=smart&auto=webp&s=fb5c56b545dbfa0b6c98ce1981ec00e484f2ce47', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yuH0-kNFpiK_Hal3jmEUqhUba7_UHCsclIwBSpwUT1E.jpg?auto=webp&s=3c7f0259ff0304963a64e6ab7838069baf386919', 'width': 1200}, 'variants': {}}]} |
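For reference, the stated prompt format can be produced with a trivial helper (the function name is illustrative):

```python
def marx_prompt(user_message):
    # "## HUMAN:" / "## RESPONSE:" format described above,
    # leaving a trailing newline open for the model's answer
    return f"## HUMAN:\n{user_message}\n\n## RESPONSE:\n"

prompt = marx_prompt("What is the capital of France?")
```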
Diving into OpenAI and came up with a note-taking tool. Need beta testers – any takers? | 1 | 2023-08-22T22:54:00 | https://notewizard.ai/blog/1/ | BothNarwhal1493 | notewizard.ai | 1970-01-01T00:00:00 | 0 | {} | 15ylj0u | false | null | t3_15ylj0u | /r/LocalLLaMA/comments/15ylj0u/diving_into_openai_and_came_up_with_a_notetaking/ | false | false | default | 1 | null | |
ayoooo | 68 | anyone else get this just now? | 2023-08-22T22:39:55 | LyPreto | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15yl5kf | false | null | t3_15yl5kf | /r/LocalLLaMA/comments/15yl5kf/ayoooo/ | false | false | 68 | {'enabled': True, 'images': [{'id': 'm8xOoDswmh3suc5XtgZtfFXEfNgriglQm9PE3zv2mgA', 'resolutions': [{'height': 169, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=108&crop=smart&auto=webp&s=81148e31291c531df9f32f4e10297acea0d6c958', 'width': 108}, {'height': 338, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=216&crop=smart&auto=webp&s=7a5faf519736bbbd51eb891fe1150e5297aa5a7c', 'width': 216}, {'height': 501, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=320&crop=smart&auto=webp&s=7b84605e7df427d77c5559e3cb58d4365579312a', 'width': 320}, {'height': 1002, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=640&crop=smart&auto=webp&s=074ac19c5ee6b96f88894126ddfabc2d40ec97eb', 'width': 640}, {'height': 1504, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=960&crop=smart&auto=webp&s=7635ed79620895d3baf7217720c8d71524fc73d7', 'width': 960}, {'height': 1692, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?width=1080&crop=smart&auto=webp&s=1f5f1296bb62bcd43fa8da04ab392263ab8545c2', 'width': 1080}], 'source': {'height': 1832, 'url': 'https://preview.redd.it/qtev7fazoqjb1.jpg?auto=webp&s=ee49b75f2703cf571318e56237e0fbea78769270', 'width': 1169}, 'variants': {}}]} | ||
Prompt templates for NOUS HERMES LLAMA 2 or Mythomix? | 2 | Hello guys, I am looking for some examples of good character prompt templates for these models. Any help is appreciated! | 2023-08-22T22:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/15ykpbp/promo_templates_for_nous_hermes_llama_2_or/ | ll_Teto_ll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ykpbp | false | null | t3_15ykpbp | /r/LocalLLaMA/comments/15ykpbp/promo_templates_for_nous_hermes_llama_2_or/ | false | false | self | 2 | null |
Is inference reliant on PCI-E bandwidth? | 6 | Our company has access to a large quantity of high powered ex-crypto mining cards. Many units that are essentially Tesla V100 16GBs that we would be able to sell for a very competitive price. Like very, very competitive. Problem is they are locked to PCI-E 3.0 1x on a hardware level and that’ll never change, so training is pretty much a non-starter. I don’t have too much experience with this but as I understand it, most of the work for something like llama or SD is happening on the GPU itself with little communication from the CPU. Is this accurate? Are there any tests or benchmarks anyone can suggest to see how suitable these might be for inference despite the gimped bandwidth? | 2023-08-22T22:21:09 | https://www.reddit.com/r/LocalLLaMA/comments/15yknoo/is_inference_reliant_on_pcie_bandwidth/ | Darius510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yknoo | false | null | t3_15yknoo | /r/LocalLLaMA/comments/15yknoo/is_inference_reliant_on_pcie_bandwidth/ | false | false | self | 6 | null |
Introducing IDEFICS: An Open Reproduction of State-of-the-art Visual Langage Model | 20 | 2023-08-22T22:00:57 | https://huggingface.co/blog/idefics | zyinz1 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15yk3r4 | false | null | t3_15yk3r4 | /r/LocalLLaMA/comments/15yk3r4/introducing_idefics_an_open_reproduction_of/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'ipgHprpmG4dSF2p8aBvTD1xMlwtAwsrQrqRGKYw6oF4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=108&crop=smart&auto=webp&s=e1e6a8f78b4970be9b54a4227252b6dcb990db33', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=216&crop=smart&auto=webp&s=f77a51e83b48b7d767830c33be173fea4b1fefd4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=320&crop=smart&auto=webp&s=e72a43a6f9b5bacdfbde769a790449b1b24e061e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=640&crop=smart&auto=webp&s=ecd0f9cb73b3c1d8b4320269877cb9e0925f056b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=960&crop=smart&auto=webp&s=45edab33d23aed554a26cfb597c6b85f882ba6b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?width=1080&crop=smart&auto=webp&s=87d5274063969759fc3942184494b7297f31208d', 'width': 1080}], 'source': {'height': 1523, 'url': 'https://external-preview.redd.it/ZDb71EouUE8B5q3Qm1bYXSHFBYAl-fC6Aj4nsZK8_5k.jpg?auto=webp&s=158e909e88680f2af57b852b75899c737a3bfca5', 'width': 3045}, 'variants': {}}]} | ||
Can llama 2 continue pretraining using qlora? | 9 | I've heard conflicting reports: LoRA supposedly can't learn anything new, while ReLoRA isn't straightforward (it requires multiple stages and was implemented only for LLaMA) and isn't implemented in HF transformers.
​
I want to read Unstructured text for my model to learn from and answer questions of when I do finetuning later. | 2023-08-22T21:08:17 | https://www.reddit.com/r/LocalLLaMA/comments/15yinxd/can_llama_2_continue_pretraining_using_qlora/ | Thistleknot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yinxd | false | null | t3_15yinxd | /r/LocalLLaMA/comments/15yinxd/can_llama_2_continue_pretraining_using_qlora/ | false | false | self | 9 | null |
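Whatever adapter method is used, the data side of continued pretraining is simple: raw text is concatenated and packed into fixed-length blocks, then trained with the ordinary causal-LM loss. A rough sketch of the packing step (whitespace split stands in for a real tokenizer, which is an assumption for illustration only):

```python
def pack_blocks(texts, block_size):
    """Concatenate raw documents and cut them into fixed-length
    blocks, as is typically done for continued pretraining."""
    stream = []
    for t in texts:
        stream.extend(t.split())   # real code would use the model's tokenizer
        stream.append("</s>")      # document separator token
    # drop the tail that doesn't fill a whole block
    n = (len(stream) // block_size) * block_size
    return [stream[i:i + block_size] for i in range(0, n, block_size)]

blocks = pack_blocks(["some unstructured text", "more raw text here"], 4)
```

Each block then becomes one training example; question-answering behavior comes from the later finetuning stage mentioned above.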
Can time compensate for lack of power? | 11 | Is it possible to get an accurate and sophisticated model running on low-end hardware if you're willing to slow down the response time to minutes or even hours? | 2023-08-22T21:01:36 | https://www.reddit.com/r/LocalLLaMA/comments/15yih8y/can_time_compensate_for_lack_of_power/ | Sandy-Eyes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yih8y | false | null | t3_15yih8y | /r/LocalLLaMA/comments/15yih8y/can_time_compensate_for_lack_of_power/ | false | false | self | 11 | null |
Build a Llama 2 chatbot with Replicate and Streamlit | 1 | [removed] | 2023-08-22T20:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/15yi2mk/build_a_llama_2_chatbot_with_replicate_and/ | JessSm3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yi2mk | false | null | t3_15yi2mk | /r/LocalLLaMA/comments/15yi2mk/build_a_llama_2_chatbot_with_replicate_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7TTLT1DMSvXx7aUi8obDpIsDx-GFHl1uMeXCbvF3HZE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=108&crop=smart&auto=webp&s=05a71843e206950230722b8e4af48ea2f226c003', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=216&crop=smart&auto=webp&s=11f13879fb6686781a6bfae781640cd58dcab1ad', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=320&crop=smart&auto=webp&s=9532ef529ff88afc2f7f1a0b59694f977c1269ec', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=640&crop=smart&auto=webp&s=3682f247ae39b0b434471ae9f0f5e4aa89789e24', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=960&crop=smart&auto=webp&s=2d0c44b734ee861d6667df0fcdd8de203b5a539d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?width=1080&crop=smart&auto=webp&s=3fa9f6f245688369b363ec30f13788397d91110e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/jdWcIO4iaXzYyBQEV2hOx4ihF1rHzQmpUU1XLh1AH7U.jpg?auto=webp&s=4d9c785900c7cc94d2cad75a2746268ed5871dd2', 'width': 1200}, 'variants': {}}]} |
LoRA training on TheBloke_Wizard-Vicuna-7B-Uncensored-GPTQ, is this possible? | 1 | I'm super new to this world, it can be tremendously confusing (and intimidating). I have a TheBloke\_Wizard-Vicuna-7B-Uncensored-GPTQ running locally (RTX 3070). And it works very well for casual chat. I was wondering how I could train this model using my own dataset. I'm using oobabooga, and it gives me the following warning "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models".
Has anyone done this already?
Thank you! | 2023-08-22T19:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/15yf6zi/lora_training_on_thebloke/ | skeletorino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yf6zi | false | null | t3_15yf6zi | /r/LocalLLaMA/comments/15yf6zi/lora_training_on_thebloke/ | false | false | self | 1 | null |
Seeking Sage Advice: Implementing LLM Lesson Plan Feedback on a Shoestring | 3 | Hello!
As a non-technical academic exploring LLMs for lesson plan feedback, I'm seeking advice on implementing an open-source model on limited resources.
My department received a medium-sized grant to support this exploration, which I originally envisioned completing using OpenAI's API and LangChain. The gist of our plan is to:
1. Record expert teacher lesson plans and teaching narratives (capturing what teachers planned to do and actually did) across a number of lessons
2. Annotate and label the data
3. Incorporate it as a custom knowledge base for a chat interface.
Students would then share their lesson plans with the model and receive feedback based on both the general skills of the LLM and the custom data from teachers.
The eye-watering pace of development for open-source models is intriguing from both a customizability and cost standpoint (this would be hosted on the university network, so bandwidth and processing cost is a minimal concern at this stage), but the limits of my technical chops prevent me from making informed decisions about the utility of switching from OpenAI (or Claude, et al.) to a model like Llama-2 or its derivatives.
I'm hopeful that this community might offer some insights.
1. What is the scalability of a quantized, open-source model running on a CPU? If hosted on a dedicated machine, what kind of throughput could it handle (especially where simultaneous users are concerned)? Additionally, what is latency like with these models?
2. I am considering Llama-2-13b as my base model but would like some input on specific hardware to implement it on. I am currently running Llama-2-7b on my desktop (Mac Studio), but am open to purchasing a substantial, consumer-grade system for this project.
3. What recommended resources would you point me toward to elegantly achieve the outcome we're hoping to test? I have some coding experience, primarily in Python, but very little in the ML/NLP space.
4. A note, though - we are looking for the LLM to use the custom data to inform and shape responses rather than use it as a recall engine. I realize this likely increases the chances of hallucinations, but it's the paradigm that we see the most potential within.
Thank you in advance for any thoughts, insights, and resources you can share! | 2023-08-22T18:47:57 | https://www.reddit.com/r/LocalLLaMA/comments/15yer8w/seeking_sage_advice_implementing_llm_lesson_plan/ | Altvocado0134679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yer8w | false | null | t3_15yer8w | /r/LocalLLaMA/comments/15yer8w/seeking_sage_advice_implementing_llm_lesson_plan/ | false | false | self | 3 | null |
Starcoder with Custom Github Code | 2 | I want to incorporate a github repo into starcoder so it helps make me code with the module. I know you can use gpt code interpreter and upload a whl file but I want to run this locally. Is this possible? Please let me know thank you | 2023-08-22T18:42:56 | https://www.reddit.com/r/LocalLLaMA/comments/15yem7f/starcoder_with_custom_github_code/ | StellarWox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yem7f | false | null | t3_15yem7f | /r/LocalLLaMA/comments/15yem7f/starcoder_with_custom_github_code/ | false | false | self | 2 | null |
Why not standard AI acceleration machines and market? Bitcoin analogy. | 1 | [removed] | 2023-08-22T18:40:24 | https://www.reddit.com/r/LocalLLaMA/comments/15yejkp/why_not_standard_ai_acceleration_machines_and/ | Natural-Sentence-601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yejkp | false | null | t3_15yejkp | /r/LocalLLaMA/comments/15yejkp/why_not_standard_ai_acceleration_machines_and/ | false | false | self | 1 | null |
Decrease cold-start speed on inference (llama.cpp, exllama) | 2 | I have an application that requires < 200ms total inference time. I only need \~ 2 tokens of output and have a large high-quality dataset to fine-tune my model.
I can easily produce the 20+ tokens/sec of output I need when predicting longer outputs, but when I try and predict shorter outputs as above I notice a substantial 500ms cold start (which I assume is memory mgmt into GPU, prompt-processing or similar). I've tried a bunch of methods to speed up inference (from [https://betterprogramming.pub/speed-up-llm-inference-83653aa24c47](https://betterprogramming.pub/speed-up-llm-inference-83653aa24c47)) but none seem to help on getting that first token out ASAP.
Any suggestions for what to try? Would be super appreciated! | 2023-08-22T18:26:55 | https://www.reddit.com/r/LocalLLaMA/comments/15ye5jv/decrease_coldstart_speed_on_inference_llamacpp/ | pdizzle10112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ye5jv | false | null | t3_15ye5jv | /r/LocalLLaMA/comments/15ye5jv/decrease_coldstart_speed_on_inference_llamacpp/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'LMQlzCqajOvvfezhKav_MayGQKR8_0lLn4UR3zYbsIA', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?width=108&crop=smart&auto=webp&s=d51396a58d9960f0e9b8d8280ddeff4a1829b73c', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?width=216&crop=smart&auto=webp&s=390dd0ff54f2e6e45d5dd3e07c1e95856a6a3b28', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?width=320&crop=smart&auto=webp&s=14f8fea6055b931ae07d2c58601290e14bb18f1e', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?width=640&crop=smart&auto=webp&s=647fa5c68c67fe8736a2925e67f2ea05347f9d7c', 'width': 640}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/NGYMy_oJiZ7E7E3u3SviuRr8aFySpww9ExUfXjsoEQU.jpg?auto=webp&s=425902b0d82ac0e44cf83f10a84dffeedf768e80', 'width': 768}, 'variants': {}}]} |
Trying to understand Mirostat and Contrastive Search, and their parameters | 16 | These two presets (I guess samplers? sampling methods?) are the only ones that give me, especially with Llama-2 derivatives, non-determinism. Every time I generate a new response, I get a very different reply, which is great to get varied outputs.
The quality is supposedly, and in my experience effectively, better than not using them.
However, I struggle to understand, or find information really, about what they are supposed to be doing under the hood. I understand it's some kind of feedback it runs, to pick the supposedly better alternative, but I have no idea.
Also neither Oobabooga nor SillyTavern really have any meaningful or easy to understand documentation about how to use the parameters.
Penalty alpha for Contrastive Search. Tau and Eta for Mirostat. Really, I have trouble wrapping my head around those. I have no idea how to use them, and whether changing them can have any effect on the output, and I can't find anything online that's intuitive. Unlike temperature, top K and the like for which there seem to be a lot of stuff around, digested into easy examples and intuitive explanations, these ones are really hard to grasp.
Any ideas? | 2023-08-22T17:29:36 | https://www.reddit.com/r/LocalLLaMA/comments/15yck7a/trying_to_understand_mirostat_and_contrastive/ | CulturedNiichan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yck7a | false | null | t3_15yck7a | /r/LocalLLaMA/comments/15yck7a/trying_to_understand_mirostat_and_contrastive/ | false | false | self | 16 | null |
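For what it's worth, the contrastive-search idea can be stated compactly: each candidate token is scored by its model probability minus a penalty proportional to how similar its hidden state is to the tokens already generated, and penalty alpha is the mixing weight between the two terms. A toy sketch with invented numbers (the probabilities and similarities below are made up for illustration, not real model outputs):

```python
def contrastive_score(prob, max_similarity, alpha):
    # score = (1 - alpha) * p(token) - alpha * (max cosine similarity
    # to previously generated tokens); alpha = 0 reduces to greedy decoding
    return (1 - alpha) * prob - alpha * max_similarity

# candidate -> (model probability, max similarity to prior context)
candidates = {"the": (0.6, 0.9), "a": (0.3, 0.2)}

def pick(alpha):
    return max(candidates, key=lambda t: contrastive_score(*candidates[t], alpha))
```

With alpha at 0 the highest-probability token wins; raising alpha penalizes tokens whose representations look like what was already generated, which is why it fights repetition and gives varied continuations.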
Meta introduces SeamlessM4T, a foundational multimodal model that seamlessly translates and transcribes across speech and text for up to 100 languages | 246 | Blog Post: [https://ai.meta.com/blog/seamless-m4t/](https://ai.meta.com/blog/seamless-m4t/)
Paper: [https://ai.meta.com/research/publications/seamless-m4t/](https://ai.meta.com/research/publications/seamless-m4t/)
Abstract:
>What does it take to create the Babel Fish, a tool that can help individuals translate speech between any two languages? While recent breakthroughs in text-based models have pushed machine translation coverage beyond 200 languages, unified speech-to-speech translation models have yet to achieve similar strides. More specifically, conventional speech-to-speech translation systems rely on cascaded systems composed of multiple subsystems performing translation progressively, putting scalable and high-performing unified speech translation systems out of reach. To address these gaps, we introduce SeamlessM4T—Massively Multilingual & Multimodal Machine Translation—a single model that supports speech- to-speech translation, speech-to-text translation, text-to-speech translation, text-to-text translation, and automatic speech recognition for up to 100 languages. To build this, we used 1 million hours of open speech audio data to learn self-supervised speech representations with w2v-BERT 2.0. Subsequently, we created a multimodal corpus of automatically aligned speech translations, dubbed SeamlessAlign. Filtered and combined with human- labeled and pseudo-labeled data (totaling 406,000 hours), we developed the first multilingual system capable of translating from and into English for both speech and text. On Fleurs, SeamlessM4T sets a new standard for translations into multiple target languages, achieving an improvement of 20% BLEU over the previous state-of-the-art in direct speech-to-text translation. Compared to strong cascaded models, SeamlessM4T improves the quality of into-English translation by 1.3 BLEU points in speech-to-text and by 2.6 ASR-BLEU points in speech-to-speech. On CVSS and compared to a 2-stage cascaded model for speech- to-speech translation, SeamlessM4T-Large’s performance is stronger by 58%. 
Preliminary human evaluations of speech-to-text translation outputs evinced similarly impressive results; for translations from English, XSTS scores for 24 evaluated languages are consistently above 4 (out of 5). For into English directions, we see significant improvement over Whisper- Large-v2’s baseline for 7 out of 24 languages. To further evaluate our system, we developed Blaser 2.0, which enables evaluation across speech and text with similar accuracy compared to its predecessor when it comes to quality estimation. Tested for robustness, our system performs better against background noises and speaker variations in speech-to-text tasks (average improvements of 38% and 49%, respectively) compared to the current state-of-the-art model. Critically, we evaluated SeamlessM4T on gender bias and added toxicity to assess translation safety. Compared to the state-of-the-art, we report up to 63% of reduction in added toxicity in our translation outputs. Finally, all contributions in this work—including models, inference code, finetuning recipes backed by our improved modeling toolkit Fairseq2, and metadata to recreate the unfiltered 470,000 hours of SeamlessAlign —are open-sourced and accessible at [https://github.com/facebookresearch/seamless\_communication](https://github.com/facebookresearch/seamless_communication) | 2023-08-22T16:54:17 | https://www.reddit.com/r/LocalLLaMA/comments/15ybk2r/meta_introduces_seamlessm4t_a_foundational/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ybk2r | false | null | t3_15ybk2r | /r/LocalLLaMA/comments/15ybk2r/meta_introduces_seamlessm4t_a_foundational/ | false | false | self | 246 | {'enabled': False, 'images': [{'id': '8nuRUl_IDeIfdYQR02N-2loNdjxsPR7GSNK-UbFAigI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=108&crop=smart&auto=webp&s=8e9211fae0323853bf24db61c5f131290f4efe86', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=216&crop=smart&auto=webp&s=c107a038806abb51b55c9af2e973ad667315963a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=320&crop=smart&auto=webp&s=d3b921f9b6e7e85ab5a8da3b2db6a434a545f9c1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=640&crop=smart&auto=webp&s=2d48e0ebfa336da6e397594c263c921b528730d9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=960&crop=smart&auto=webp&s=07fa6a39ea3e3cc03c9eb823bbf5d98e3f2821a0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?width=1080&crop=smart&auto=webp&s=3b734840c8f539769c31cc5e8ae53b38b356fe6a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Ai3mJZX5ehGXUdNu0tcxR2uV1JyImy939qL5JGEq4pQ.jpg?auto=webp&s=549127b347a72fa2703f34a263546f09382102b4', 'width': 1920}, 'variants': {}}]} |
How to resume LoRA training in text-generation-webui? | 3 | When I continue training a Lora by selecting the old config, checking that the name and input are the same, it seems to start at epoch 0.0 again. It still has a lower loss than the first run, so I think it is actually continuing training, but it loses the epoch and learning rate progress.
Is there a way to resume training from where it was interrupted? | 2023-08-22T16:53:23 | https://www.reddit.com/r/LocalLLaMA/comments/15ybj77/how_to_resume_lora_training_in_textgenerationwebui/ | _allo_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ybj77 | false | null | t3_15ybj77 | /r/LocalLLaMA/comments/15ybj77/how_to_resume_lora_training_in_textgenerationwebui/ | false | false | self | 3 | null |
How to continue LoRA training with text-generation-webui? | 1 | When I continue training a Lora by selecting the old config, and verifying the name and input are the same, it seems to start from epoch 0.0 again. It still has lower loss than on the first run, so I think it actually continues the training, but it loses the epoch and learning rate progress.
Is there a way to continue the training from where it was interrupted? | 2023-08-22T16:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/15ybher/how_to_continue_lora_training_with/ | _allo_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ybher | false | null | t3_15ybher | /r/LocalLLaMA/comments/15ybher/how_to_continue_lora_training_with/ | false | false | self | 1 | null |
Llama 2 as a local copilot!! | 51 | At Pieces for Developers, we've developed a co-pilot leveraging Llama 2 as its foundation. Our product is a snippet management tool with an AI-driven, offline-first architecture. Our copilot comes with the following features:
* We utilize the concept of Retrieval Augmented Generation (RAG) to re-ground the AI engine throughout every interaction in the desktop application and plugins
* Multimodal inputs with images and text files
* Fully functional offline and on-device. Choose between dynamic LLM runtimes both locally and in the cloud.
* You can configure conversation contexts with personal codebases, individual snippets, and even website URLs (working on video next)
* The copilot can surface Related People results to connect you with teammates that have the necessary skill set related to your context
I am not sharing this for promotional purposes, but rather to gather opinions from the community about the copilot. For example, we struggled with configuring our auto-enrichment engine to gather related links to enrich code snippets as you save them, while working completely offline. Another challenge has been dynamic thread management and IO bindings for the GPUs. What obstacles have you encountered when deploying Llama? We're curious to know what features the community thinks we should be focusing on, as well as feedback on how we’ve deployed Llama so far in Pieces. Your insights are greatly appreciated!
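As a point of comparison for the RAG discussion: the re-grounding loop can be reduced to a toy sketch like the one below. The vectors here are hand-made toy embeddings and all names are illustrative; a real pipeline would use a proper embedding model.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    # Rank snippets by similarity to the query and keep the top-k.
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)
    return [item["text"] for item in ranked[:k]]

def build_prompt(question, snippets):
    # Re-ground the model by prepending retrieved context to the user question.
    context = "\n---\n".join(snippets)
    return f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: {question}"
```

The interesting engineering is in everything this sketch leaves out: keeping the index fresh as snippets are saved, and doing it all on-device.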
Read more about Pieces Copilot here: [https://code.pieces.app/blog/introducing-pieces-copilot](https://code.pieces.app/blog/introducing-pieces-copilot)
Try out the product here: [pieces.app](http://pieces.app/) | 2023-08-22T16:36:17 | https://www.reddit.com/r/LocalLLaMA/comments/15yb23m/llama_2_as_a_local_copilot/ | tarun-at-pieces | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15yb23m | false | null | t3_15yb23m | /r/LocalLLaMA/comments/15yb23m/llama_2_as_a_local_copilot/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'YFydGtjEhgc732EOzvtY9iIAZBtON8_Kv1StPY20n8k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=108&crop=smart&auto=webp&s=9a5002926cb000ccd0d13f2150f330fa33e43169', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=216&crop=smart&auto=webp&s=80bc333fe4b08ad6578328c70b3b6899a59bdc10', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=320&crop=smart&auto=webp&s=c8f020ac1007b22d671ae95b95ef2720bddc35c6', 'width': 320}, {'height': 349, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=640&crop=smart&auto=webp&s=ffd0b1329e396a9809256733e51285d38f18654b', 'width': 640}, {'height': 523, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=960&crop=smart&auto=webp&s=57ec55aab0b29167bee170b83a8b8cbf5953b824', 'width': 960}, {'height': 589, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?width=1080&crop=smart&auto=webp&s=2de3f9c922f6613d73c4a7a6061849cc1b0afed8', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/XD6_PhubnBf5Nqz5ImDZ0KPP-BwhR5hl8ZP5f_aC8jg.jpg?auto=webp&s=ddd865d19c6a3bfbfad1eb716cccfadd0cd131a5', 'width': 1320}, 'variants': {}}]} |
Embedding a TypeScript codebase | 1 | [removed] | 2023-08-22T16:07:51 | https://www.reddit.com/r/LocalLLaMA/comments/15ya9ak/embedding_a_typescript_codebase/ | redstorm67 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ya9ak | false | null | t3_15ya9ak | /r/LocalLLaMA/comments/15ya9ak/embedding_a_typescript_codebase/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=108&crop=smart&auto=webp&s=284ee86cd9228390268ace75b44e497c1fec562f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=216&crop=smart&auto=webp&s=96628b1c155401ce2d04a853b6524fa0c95cd632', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=320&crop=smart&auto=webp&s=f5f435bb4d31f0f695560cb0fb6f456702452062', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=640&crop=smart&auto=webp&s=b8b6a03fcde27061acee8ab4cb6ecc598a7ac6b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=960&crop=smart&auto=webp&s=bbda73bd4f11be7b71efb3892b4107414d815613', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=1080&crop=smart&auto=webp&s=0158100ff6f9041cc8dcb861b66a3db041df5095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?auto=webp&s=daff0272548bd7ffe5bc2b1eff6cd5c752144ed4', 'width': 1200}, 'variants': {}}]} |
Fine Tuning/GGML Quantiziation on Apple Silicon Guide | 39 | First, I want to point out that this community has been the #1 resource for me on this LLM journey. There is so much misinformation out there and the libraries are so new that it has been a bit of a struggle finding the right answers to even simple questions.
​
With that being said, here is my guide for fine-tuning on Apple Silicon using the SuperAdapters library and GGML/Quantization for use with LLaMA.cpp:
### SuperAdapters Guide
* Instructions for initial setup for Mac GPU acceleration:
* `git clone` [`https://github.com/cckuailong/SuperAdapters.git`](https://github.com/cckuailong/SuperAdapters.git)
* `brew install xz`
* `xcode-select --install`
* `brew install llvm libomp`
* `pip install --pre torch torchvision torchaudio --extra-index-url [PyTorch Nightly CPU URL]`
* `pip install -r requirements.txt`
* Regarding dependency mismatches with the required libraries:
1. First, remove `wandb` from the `requirements.txt` and install it separately.
2. There will still be a mismatch for `protobuf` version number.
* Place validation/test data in `data/train`
* To fine-tune, set the following environment variable to get rid of the upper limit on memory for MPS:
* `export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0`
* Make a directory in `output/<model_type>/<specific_model>`
* Then, use the command to start the process.
* `python3 finetune.py --model_type <model_type> --model_task seq2seq --model_path "<path_to_SuperAdapters>/LLMs/<model_type>/<specific_model>" --adapter "lora" --data "<path_to_SuperAdapters>/data/train/" --output_dir "<path_to_SuperAdapters>/output/<model_type>/<specific_model>" --epochs <integer>`
* If a run is interrupted (e.g., the system crashing due to lack of memory), use the `--resume_from_checkpoint` flag for the fine-tuning script and specify the last checkpoint in your output folder.
* Using a WandB API key to export the stats of the machine running the process is optional.
### GGML and Quantization
To avoid library mismatch situations with SuperAdapters, use a separate environment for GGML and quantization.
* Steps:
* `git clone` [`https://github.com/ggerganov/llama.cpp.git`](https://github.com/ggerganov/llama.cpp.git)
* `mkdir build-metal`
* `cd build-metal`
* `cmake -DLLAMA_METAL=ON ..`
* `cmake --build . --config Release`
* `cd ..`
* `make`
* Use the script `merge.py` to merge the LoRA adapters into the LLM. (Note: This script is not part of llama.cpp but can be found [here](https://www.reddit.com/r/LocalLLaMA/comments/15fa9vg/ggml_guide/))
* Unless moved, the paths for weights and the full model will be the same as specified during the fine-tuning step.
* Convert the weights from 32-bit floats to 16-bit floats with `python convert.py [model_path]`. The output will be a model named `ggml-model-f16.bin`
* For 8-bit quantization, execute the following:
* `./quantize [path_to_tuned_model] [output_path] q8_0`
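If you are converting several models, the convert + quantize steps can be wrapped in a small Python helper. Note that this wrapper is my own sketch, not part of llama.cpp, and it assumes you run it from the llama.cpp directory:

```python
import subprocess
import sys

def convert_cmd(model_dir):
    # Step 1: convert the HF weights to a 16-bit GGML file (ggml-model-f16.bin).
    return [sys.executable, "convert.py", model_dir]

def quantize_cmd(f16_path, out_path, qtype="q8_0"):
    # Step 2: quantize the f16 file; qtype can be q8_0, q4_K_M, etc.
    return ["./quantize", f16_path, out_path, qtype]

def convert_and_quantize(model_dir, qtype="q8_0"):
    # Not executed in this sketch; call it yourself once the paths are right.
    subprocess.run(convert_cmd(model_dir), check=True)
    f16 = f"{model_dir}/ggml-model-f16.bin"
    subprocess.run(quantize_cmd(f16, f"{model_dir}/ggml-model-{qtype}.bin", qtype), check=True)
```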
#### Notes
* To fine-tune an already fine-tuned model, copy the base directory of the model type and replace the `pytorch_model.bin` generated after merging the weights.
​
I will be more than happy to answer questions or correct mistakes in this guide, but please take the time to make well-thought-out responses when posting, i.e. don't try to run this, hit an error and come rage on here. | 2023-08-22T15:45:27 | https://www.reddit.com/r/LocalLLaMA/comments/15y9m64/fine_tuningggml_quantiziation_on_apple_silicon/ | Entire_Cheetah_7878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15y9m64 | false | null | t3_15y9m64 | /r/LocalLLaMA/comments/15y9m64/fine_tuningggml_quantiziation_on_apple_silicon/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'JAJSwaId-aXgnYDVRklDEulaRZarEAbEdXQtkCSfIYs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=108&crop=smart&auto=webp&s=aa0404d43ae73d7349135b1d70ae4e5106f31588', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=216&crop=smart&auto=webp&s=ccd22cba039aaab453b5225cdedafc9b6d01a556', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=320&crop=smart&auto=webp&s=deb7bbb0e62e7ca41130469aab10537c40933164', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=640&crop=smart&auto=webp&s=8d68d9db6bce6cc60e2e779ba27d0b284c6765a4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=960&crop=smart&auto=webp&s=76219da0219269888b442c5769ebfb51efb8b433', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?width=1080&crop=smart&auto=webp&s=ce738adb9897563732bc54cf484f3ce685276668', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/pbYEqJ7wQRskJ50COjTm4NuU-8GQeTuB64UUWQw7MFE.jpg?auto=webp&s=4eb044ae995c4df2ba564a43be12db06acb9d05e', 'width': 1200}, 'variants': {}}]} |
EverythingLM-13b-16k V2 released! | 68 | [https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V2-16k](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V2-16k)
GGML & GPTQ quants are linked there.
TLDR:
* Trained on a GPT-4-generated dataset.
* Uncensored (mostly. read more on huggingface)
* Uses CoT for math & problem solving.
* Creative, detailed, verbose replies.
Let me know if you have any questions! | 2023-08-22T15:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/15y8mwy/everythinglm13b16k_v2_released/ | pokeuser61 | self.LocalLLaMA | 2023-08-22T18:36:44 | 0 | {} | 15y8mwy | false | null | t3_15y8mwy | /r/LocalLLaMA/comments/15y8mwy/everythinglm13b16k_v2_released/ | false | false | self | 68 | {'enabled': False, 'images': [{'id': 'ZCR9EpqiAIZAMkg5fyjCOSk_T64nEtGSSpFBitNzp8A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=108&crop=smart&auto=webp&s=b3eadb9c4d88667b4512d94833121039888d4a0c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=216&crop=smart&auto=webp&s=d9215576b03a2b23b3b3b9fe3a88d598d69c8956', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=320&crop=smart&auto=webp&s=de1ecfb3d16741ff6a808794b7ee565a89d28356', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=640&crop=smart&auto=webp&s=0472ac35763ee08597f1182c988b2b42d0faead5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=960&crop=smart&auto=webp&s=826f3e864b5c1b933ba4a1d85db820849d8a84d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?width=1080&crop=smart&auto=webp&s=00da3c14237052b1761fca2e76ad40adf16022a7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mJk-R6UJ_klchRC5mHzAySO8qdFKVRfL0N_ATra46nU.jpg?auto=webp&s=ba1b5d77638d5c4ca165d529f41261e63d908428', 'width': 1200}, 'variants': {}}]} |
Do qlora's need to match the size of the model? | 1 | So I started trying some qloras lately and I had a few questions if you don't mind:
Do they need to match the size of the model they were trained on? I really only use 65/70b and there were very few of those, so I tried one that wasn't specifically labeled 70b and it seemed to do something, although it wasn't amazing.
I believe you can stack them in ooba booga, correct? Is there some sort of limit or priority order?
I'm assuming you can't mix llama 1 qlora with a llama 2 model? (haven't tried yet.)
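One quick way to check compatibility before loading: PEFT saves an `adapter_config.json` next to the adapter weights, and its `base_model_name_or_path` field records which base model the LoRA was trained against. A minimal sketch (reading the file's text is up to you; the field name is PEFT's convention):

```python
import json

def lora_base_model(adapter_config_text):
    # PEFT writes adapter_config.json next to the LoRA weights; its
    # base_model_name_or_path records the base model it was trained on.
    cfg = json.loads(adapter_config_text)
    return cfg.get("base_model_name_or_path")
```

A LLaMA-1 base name under a LLaMA-2 model is a strong hint the adapter won't behave as trained, even if the tensor shapes happen to line up.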
Finally, why isn't QLoRA more of a thing? If I recall, both Guanaco and Airoboros use it (and then are merged later), and they are some of the most popular 70b models. | 2023-08-22T14:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/15y7ers/do_qloras_need_to_match_the_size_of_the_model/ | TheSilentFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15y7ers | false | null | t3_15y7ers | /r/LocalLLaMA/comments/15y7ers/do_qloras_need_to_match_the_size_of_the_model/ | false | false | self | 1 | null |
Need Help In Creating an OpenAI tool with the custom dataset | 0 | I am planning to create a custom AI bot similar to ChatGPT, using my own custom dataset. The challenge I'm facing is that I lack the necessary knowledge in this area, and I'm struggling to find appropriate tutorials or resources to assist me.
If anyone could provide me with guidance on the steps I should follow, recommended tools, or packages, I would greatly appreciate it.
Additionally, my dataset contains sensitive and confidential information, and I am concerned about privacy: if I use OpenAI for this process, will they also have access to my data?
Thank you in advance
SQLCoder: New 15B OSS LLM claims better performance than gpt-3.5-turbo on sql-related tasks | 75 | Defog has open sourced **SQLCoder**, a new "open source" LLM that supposedly outperforms got-3.5-turbo on SQL related tasks.
The models is a version of StarCoder finetuned on 10k human curated dataset of text-to-SQL questions based on 20 schemas
They claim it outperforms gpt-3.5-turbo on unseen data and even outperforms gpt-4 when finetuned on the target SQL database.
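For context on what such models consume: text-to-SQL prompting generally packs the schema DDL and the question into a single prompt. A generic sketch of that shape is below; defog publishes its own recommended prompt format in the repo, so treat this as an illustration, not their format:

```python
def text_to_sql_prompt(question, schema_ddl):
    # Schema first, so the model grounds table/column names, then the question.
    return (f"### Database schema\n{schema_ddl}\n\n"
            f"### Question\n{question}\n\n"
            "### SQL\n")
```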
Blog post: https://defog.ai/blog/open-sourcing-sqlcoder
HF repo:
https://huggingface.co/defog/sqlcoder | 2023-08-22T14:00:11 | https://www.reddit.com/r/LocalLLaMA/comments/15y6pfm/sqlcoder_new_15b_oss_llm_claims_better/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15y6pfm | false | null | t3_15y6pfm | /r/LocalLLaMA/comments/15y6pfm/sqlcoder_new_15b_oss_llm_claims_better/ | false | false | self | 75 | {'enabled': False, 'images': [{'id': 'dkInp125UNgldwrx9I-kMIDCBMCiKkypvnds9RF6r4Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=108&crop=smart&auto=webp&s=75d88c706fcf3152ea42a301650ee8afb86ab9f9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=216&crop=smart&auto=webp&s=ae026c1b928515724a74782ef8e05612c8fe1cb3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=320&crop=smart&auto=webp&s=b32f985a55ae2c170c0ab7e6fa9324d04325587f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=640&crop=smart&auto=webp&s=57d80f8ecd069114dde27efb5bcc31cc8941fc18', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=960&crop=smart&auto=webp&s=e44beca1e1cb4ef06159dd693fcb968de2892ecd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?width=1080&crop=smart&auto=webp&s=bd5038f5b191dcbca7ed45f20870154f128267e8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/cIXDRUEml54XPf7DbbdQw4pOlkewsctTDZE-2b3csSM.jpg?auto=webp&s=686402aa9a24851203505a3392a816636d7b1e98', 'width': 1200}, 'variants': {}}]} |
PEFT 0.5 supports fine-tuning GPTQ models | 24 | 2023-08-22T13:25:32 | https://github.com/huggingface/peft/releases/tag/v0.5.0 | oobabooga4 | github.com | 1970-01-01T00:00:00 | 0 | {} | 15y5spo | false | null | t3_15y5spo | /r/LocalLLaMA/comments/15y5spo/peft_05_supports_finetuning_gptq_models/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'PV8BKCWpdcu5LkMC2fz3n3wqEF8Vh4InmaPmrzT2S6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=108&crop=smart&auto=webp&s=85f09658562824f303f1ea32912e49d4d4e645f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=216&crop=smart&auto=webp&s=03214c7109551da52a54728b7ce79c4925c2b8b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=320&crop=smart&auto=webp&s=79af252c89dffa59d1a84fa93b5f68210dc86a1d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=640&crop=smart&auto=webp&s=f5cf024894229443dd02f26ab840aa4cb59020b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=960&crop=smart&auto=webp&s=a06f92d4fc55ac35b7d194f585faf18970068463', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?width=1080&crop=smart&auto=webp&s=ab4d9c67dd99a7537630eab74b7732cbfe991d2e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kcjeH-hezz9cnZfaxZZ0wfMM-FNqfU5EBqhtZzkcrAo.jpg?auto=webp&s=3ab862c907313ce8b528cdb81f6a7cef18223f11', 'width': 1200}, 'variants': {}}]} | ||
LLM that can generate mnemonics or stories to encode information? | 8 | I’ve been playing around with different language models, mostly ChatGPT and the LLaMA variants, but none of them are able to generate mnemonics or encode information effectively for memorisation purposes.
Is there some LLM model or promt I should be using to get usable mnemonics out of these, is it even possible?
GPT 4 gave the best results so far, but sometimes leaving out elements or just using completely different letters when creating acronyms or stories. I might also consider fine tuning a model if it might work. | 2023-08-22T12:51:45 | https://www.reddit.com/r/LocalLLaMA/comments/15y4yel/llm_that_can_generate_mnemonics_or_stories_to/ | ElementaryZX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15y4yel | false | null | t3_15y4yel | /r/LocalLLaMA/comments/15y4yel/llm_that_can_generate_mnemonics_or_stories_to/ | false | false | self | 8 | null |
Best Ooba setting for MythoMax RP? | 18 | Hey everyone,
I was hoping we might share some settings for running MythoMax on Ooba. I have some of my current settings and would like to know if anyone has critiques or suggestions.
1. Model setup: I've been using the GPTQ 4bit and 8 bit versions with Exllama on 4 bit and AutoGPTQ on 8 bit. I can't really tell the difference. I also can't get the 8 bit version to load with Exllama. I also have been trying the 6_K GGML version with llama.cpp. I don't have any solid numbers, but this one "feels" like it gives the most interesting results.
2. Alpha_Value vs Compress_Pos_emb: I typically use compress_pos_emb set at 2 for 4k, but I've been trying to figure out the best Alpha_value (trying 3) to move this up to 8K. Anyone have any good settings for this?
3. Generation Parameters: I am currently using the following:
Max_new_tokens: 400 (personal pref)
Temp: .95
Top P: .95
Top_K: 30
Typical P: 1
ETA_Cutoff: 0
TFS: .97
Top A: .75
Rep Pen: 1.09
Rep Pen Range: 2048
Enc Rep Pen: 1
No_rep_ngram_size: 1
Min Length: 200
4. I have also read that maybe you use the Mirostat settings instead. I have read good settings for those would be 2, 5, and .2 but I am not sure what they do or if those are better than normal samplers.
5. Instruction Template: I use the Alpaca preset. I change the context box depending on the scenario, but I'm not sure if there are better suggestions out there.
I have enough VRAM and RAM that I can run the 13B models in memory.
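On the alpha_value question in point 2: as I understand it (worth verifying against the ExLlama source), NTK-style alpha scaling raises the RoPE base instead of compressing position indices, roughly like this:

```python
def scaled_rope_base(alpha, base=10000.0, head_dim=128):
    # NTK-aware scaling: alpha=1 leaves the base untouched; larger alpha
    # stretches the usable context without retraining.
    return base * alpha ** (head_dim / (head_dim - 2))

# e.g. alpha=3 pushes the base from 10000 to roughly 30500 for LLaMA's
# head_dim of 128, whereas compress_pos_emb=2 instead halves position indices.
```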
Any other suggestions people have? Best settings? Is running Koboldcpp and Silly Tavern a better setup? If so, do I need the extras for Silly Tavern? I'd really like to just run Ooba and not have to setup 3 things to get them working each time. Any feedback would be appreciated. | 2023-08-22T12:41:12 | https://www.reddit.com/r/LocalLLaMA/comments/15y4pgq/best_ooba_setting_for_mythomax_rp/ | AboveAFC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15y4pgq | false | null | t3_15y4pgq | /r/LocalLLaMA/comments/15y4pgq/best_ooba_setting_for_mythomax_rp/ | false | false | self | 18 | null |
How much VRAM for serving in parallel | 24 | I would like to be able to serve 10 concurrent prompt answers from the same Llama 2 13B model: which inference server can efficiently serve the same model to concurrent clients? How much VRAM should be required? | 2023-08-22T11:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/15y3bj0/how_much_vram_for_serving_in_parralel/ | Opening-Ad1642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15y3bj0 | false | null | t3_15y3bj0 | /r/LocalLLaMA/comments/15y3bj0/how_much_vram_for_serving_in_parralel/ | false | false | self | 24 | null |
What are the best models on the market? | 1 | I'm mostly using models for RP/dialog and I've used base wizard-vicuna and nous-hermes but tbh it didn't follow the character as much as llama-2-chat-70b model (Currently my favorite for character chatting and using generally). Also used the WizardLM-70b with 4k context model for daily usage.
I have a pretty decent setup and can get 10-15t/s with 70b GPTQ models. I would like to hear which models are you using and for what. | 2023-08-22T08:59:32 | https://www.reddit.com/r/LocalLLaMA/comments/15y00wu/what_are_the_best_models_on_the_market/ | sarimsak13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15y00wu | false | null | t3_15y00wu | /r/LocalLLaMA/comments/15y00wu/what_are_the_best_models_on_the_market/ | false | false | self | 1 | null |
Help. Why am I shadow banned here? | 1 | [removed] | 2023-08-22T07:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/15xyji5/help_why_am_i_shadow_banned_here/ | sujantkv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xyji5 | false | null | t3_15xyji5 | /r/LocalLLaMA/comments/15xyji5/help_why_am_i_shadow_banned_here/ | false | false | self | 1 | null |
New build - what can I run with it? | 1 | Building a new system with Ryzen 9 7950x, RTX 4090, 128 Gb Ram, Asus X670E ROG Strix motherboard. What models will I be able to run with this locally? Using a beefy PSU 1600W so I can addd another 4090 after a few months. Would that make sense or even possible. I haven’t heard of dual 4090 so don’t know if it is possible. | 2023-08-22T07:36:24 | https://www.reddit.com/r/LocalLLaMA/comments/15xygg3/new_build_what_can_i_run_with_it/ | comfortablynumb01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xygg3 | false | null | t3_15xygg3 | /r/LocalLLaMA/comments/15xygg3/new_build_what_can_i_run_with_it/ | false | false | self | 1 | null |
llama.cpp: GGUF merged | 106 | ​
https://preview.redd.it/6qs47a0l6mjb1.png?width=995&format=png&auto=webp&s=5e676084667a55ea1558d4890ee98b934bf1c3a6
Spec: [https://github.com/philpax/ggml/blob/gguf-spec/docs/gguf.md](https://github.com/philpax/ggml/blob/gguf-spec/docs/gguf.md)
PR: [https://github.com/ggerganov/llama.cpp/pull/2398#issuecomment-1686986770](https://github.com/ggerganov/llama.cpp/pull/2398#issuecomment-1686986770)
​ | 2023-08-22T07:30:55 | https://www.reddit.com/r/LocalLLaMA/comments/15xycn2/llamacpp_gguf_merged/ | Jipok_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xycn2 | false | null | t3_15xycn2 | /r/LocalLLaMA/comments/15xycn2/llamacpp_gguf_merged/ | false | false | 106 | {'enabled': False, 'images': [{'id': 'BWwUYGUTfmMVUIsv67pBgjOyBKrKxrCB57VGPQBr2sA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=108&crop=smart&auto=webp&s=3a5bb0407bd186fb40f876e7a0d12881fca1f4fb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=216&crop=smart&auto=webp&s=949f6af0b77b0dea755ede239547754f5dbefc45', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=320&crop=smart&auto=webp&s=a432f02e3eab7e3a0e8b17379887b1481abadb15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=640&crop=smart&auto=webp&s=fc20c048d27c930c161028202b9bedee53355173', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=960&crop=smart&auto=webp&s=4bc24343595b8839961aa75bf926d8def629bad8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?width=1080&crop=smart&auto=webp&s=108a5f33c176c62255596ff8b2b9e229a633bd85', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Wil90Sh8A3buKxoKm3ZDncW5pGGSvP2Vz1FmvSIaRDY.jpg?auto=webp&s=909fc960d568f95d4b744202cbca6cadf9cb13ce', 'width': 1200}, 'variants': {}}]} | |
flan T5-Large just gives the context as the response | 4 | So I wanted to try using the Flan-t5-large locally for a project of mine. The context has been extracted from a pdf. So for a query, this was the context:
\[Document(page\_content='help you get connected to that world.\\nConnecting to a wireless network \\nYour computer may be equipped with one or more of the following wireless devices:\\n● WLAN device—Connects the computer to wireless local area networks (commonly referred to as Wi-Fi', metadata={}), Document(page\_content='allowing you to manually search for and connect to a network or \\nto create a new network connection.\\n3. Follow the on-screen instructions to complete the connection.\\nAfter the connection is made, select the network status icon at the far right of the', metadata={}), Document(page\_content='one of the available networks.\\nIf the WLAN is a security-enabled WLAN, you are prompted to enter a security code. Enter the code, and \\nthen select Next to complete the connection.\\nNOTE: If no WLANs are listed, you may be out of range of a wireless router', metadata={}), Document(page\_content='box, and then select Control Panel .\\n2. Select Network and Internet , and then select Network and Sharing Center .\\nConnecting to a wireless network 15', metadata={})\]
and this was the response the model generated:
<pad> Document(page\_content='help you get connected to that world.<unk> nConnecting to a wireless network <unk> nYour computer may be equipped with one or more of the wireless devices:<unk> n<unk> WLAN device—Connects the computer to wireless local area networks (commonly referred to as Wi-Fi', metadata=<unk> ), Document(page\_content='allowing you to manually search for and connect to a network or <unk> nto create a new network connection.<unk> n3. Follow the on-screen instructions to complete the connection.<unk> nAfter the connection is made, select the network status icon at the far right of the', metadata=<unk> ), Document(page\_content='one of the available networks.<unk> nIf the WLAN is a security-enabled WLAN, you are prompted to enter a security code. Enter the code, and <unk> nthen select Next to complete the connection.<unk> nNOTE: If no WLANs are listed, you may be out of range of a wireless router', metadata=<unk> ), Document(page\_content='box, and then select Control Panel.<unk> n2. Select Network and Internet, and then select Network and Sharing Center.<unk> nConnecting to a wireless network 15', metadata=<unk> )</s>
I don't know what's wrong and the same code worked well for meta-llama-2-7b-chat-hf. But llama2 required around 25GB of RAM to run it, so I wanted to try smaller models.
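One thing that jumps out: the raw `Document(...)` reprs appear to be going straight into the prompt, and a seq2seq model like flan-t5 will happily echo them back. It may help to flatten the retrieved docs into plain text with an explicit instruction first. A hedged sketch below; the `answer` helper assumes the standard transformers seq2seq API and is not executed here:

```python
def build_prompt(question, docs):
    # Flatten Documents into plain text instead of str()-ing the objects.
    context = "\n\n".join(d.page_content for d in docs)
    return ("Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

def answer(question, docs, model_name="google/flan-t5-large"):
    # Standard transformers seq2seq call; only runs if you invoke it.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    ids = tok(build_prompt(question, docs), return_tensors="pt",
              truncation=True).input_ids
    out = model.generate(ids, max_new_tokens=128)
    return tok.decode(out[0], skip_special_tokens=True)
```

Flan-t5-large's context window is also much smaller than llama-2's, so truncation matters: keep the retrieved chunks short.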
| 2023-08-22T07:27:05 | https://www.reddit.com/r/LocalLLaMA/comments/15xy9zw/flan_t5large_just_gives_the_context_as_the/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xy9zw | false | null | t3_15xy9zw | /r/LocalLLaMA/comments/15xy9zw/flan_t5large_just_gives_the_context_as_the/ | false | false | self | 4 | null |
Vicuna-33B Prompt Engineering | 3 | I have been using Vicuna-33b for personal use and benchmarking, I wanted to give specific prompts to the model for different use cases (For answering questions from a specific domains only and act like a bot) but I haven't found one single prompt which works the best. Is there anything I am doing wrong or is there a specific way for prompting this LLM. | 2023-08-22T06:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/15xxoc7/vicuna33b_prompt_engineering/ | Died_Nirvana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xxoc7 | false | null | t3_15xxoc7 | /r/LocalLLaMA/comments/15xxoc7/vicuna33b_prompt_engineering/ | false | false | self | 3 | null |
Tried to build setup exllama but encountering ninja related errors, can someone please help me? | 1 | [removed] | 2023-08-22T06:37:07 | https://www.reddit.com/r/LocalLLaMA/comments/15xxaw9/tried_to_build_setup_exllama_but_encountering/ | bwandowando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xxaw9 | false | null | t3_15xxaw9 | /r/LocalLLaMA/comments/15xxaw9/tried_to_build_setup_exllama_but_encountering/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'G7rmZLwR-Tz_zuUIsX8o0Kzh4fmu3ozIXgkfplbpKvw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=108&crop=smart&auto=webp&s=f8f882dbf71789e36ddfa2c7fdaeb463f8e7259f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=216&crop=smart&auto=webp&s=5015be41883635908d200144cea5d8d4ce571d8f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=320&crop=smart&auto=webp&s=59dfa4526ef4238fea6af9d0560f2ccea75f7d0a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=640&crop=smart&auto=webp&s=980da897113bd2eba78d7fc7fc25179f6801ba31', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=960&crop=smart&auto=webp&s=2be43541c41e65fbbbd1014861e9e04951419d3e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?width=1080&crop=smart&auto=webp&s=d2052e83e6bb5b53c1b770e0e6ac8d5620121efa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5x0WRevazPRIJNT2YESBr9q8XAgk3y0VkMrX88otQy0.jpg?auto=webp&s=bf6f203c09d2bddba17e914d384e2c95ff1c130b', 'width': 1200}, 'variants': {}}]} |
Can Anybody help me with my school project | 2 | So a teacher in school asked me to create a "Buddha AI speaker" for a school festival. I accepted it. The teacher said the deadline was around December.
And today I learned the festival is in 10 days, not in December as the teacher said.
I can spend at most 1000$ for this project and I can't buy any "things that can personally be owned" (aka computer parts); what I can buy is things like Arduinos (ofc I won't buy this) or Raspberry Pis.
All I need to know is how to use any kind of LLM on very weak platforms (ideally using some free web service : I can't spend school money here)
And here's my question. How can I make this work using the scarce resources I have? I'm asking here since I don't know where else to ask this. | 2023-08-22T06:01:42 | https://www.reddit.com/r/LocalLLaMA/comments/15xwmeg/can_anybody_help_me_with_my_school_project/ | Wannabedankestmemer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xwmeg | false | null | t3_15xwmeg | /r/LocalLLaMA/comments/15xwmeg/can_anybody_help_me_with_my_school_project/ | false | false | self | 2 | null |
I have two cards (a 3090 and a 3060). Is it possible to use them both to run 70b? What would be the cheapest setup for that? | 8 | Hello,
I have a 24gb RTX 3090 and a spare 12gb RTX 3060 that is not getting used at the moment.
My motherboard only has a single PCIE slot, additionally, I only have 16gb of RAM and I am not interested in running 70b off the CPU (yes, I know it is possible to run it by just adding more RAM, but since I have two GPUs, on the chance of them getting used, I am not interested)
My CPU is an AM4 CPU.
​
Should I buy another motherboard? Is an eGPU adapter feasible for this goal? What do you suggest? | 2023-08-22T05:13:12 | https://www.reddit.com/r/LocalLLaMA/comments/15xvp6h/i_have_two_cards_a_3090_and_a_3060_is_it_possible/ | hellninja55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xvp6h | false | null | t3_15xvp6h | /r/LocalLLaMA/comments/15xvp6h/i_have_two_cards_a_3090_and_a_3060_is_it_possible/ | false | false | self | 8 | null |
70B LLM expected performance on 4090 + i9 | 77 | I have an Alienware R15 32G DDR5, i9, RTX4090. I was able to load a 70B GGML model offloading 42 layers onto the GPU using oobabooga. After the initial load and first text generation, which is extremely slow at \~0.2t/s, subsequent text generation is about 1.2t/s. I noticed SSD activity (likely due to low system RAM) on the first text generation. There is virtually no SSD activity on subsequent text generations.
I'm thinking about upgrading the RAM to 64G which is the max on the Alienware R15. Will it help and if so does anyone have an idea how much improvement I can expect? Appreciate any feedback or alternative suggestions. | 2023-08-22T03:45:51 | https://www.reddit.com/r/LocalLLaMA/comments/15xtwdi/70b_llm_expected_performance_on_4090_i9/ | you-seek-yoda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xtwdi | false | null | t3_15xtwdi | /r/LocalLLaMA/comments/15xtwdi/70b_llm_expected_performance_on_4090_i9/ | false | false | self | 77 | null |
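A back-of-the-envelope sketch of the layer-offloading math the post above describes. All of the numbers here are illustrative assumptions (the on-disk size of a ~4-bit 70B GGML quant, Llama-2 70B's layer count, and a guessed KV-cache/CUDA overhead), not measurements:

```python
# Rough VRAM budget for partially offloading a quantized 70B GGML model.
# MODEL_SIZE_GB and OVERHEAD_GB are assumptions for illustration only.
MODEL_SIZE_GB = 38.0   # assumed on-disk size of a ~4-bit 70B quant
N_LAYERS = 80          # Llama-2 70B transformer layer count
VRAM_GB = 24.0         # RTX 4090
OVERHEAD_GB = 2.5      # assumed KV cache + CUDA context headroom

per_layer_gb = MODEL_SIZE_GB / N_LAYERS
layers_on_gpu = int((VRAM_GB - OVERHEAD_GB) / per_layer_gb)

print(f"~{per_layer_gb:.3f} GB per layer; roughly {layers_on_gpu} layers fit on the GPU")
```

Whatever doesn't fit on the GPU stays in system RAM, which is why machines with 16-32 GB tend to hit the page file/SSD on first load — and why a RAM upgrade usually removes the SSD thrashing rather than dramatically raising tokens/s.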
LocalLLaMA noob: are my specs OK to start hosting models? | 13 | hey all, i'm very intrigued about hosting my own AI and not paying for censored ChatGPT anymore, i just have a few questions. my specs are as follows:
CPU: Ryzen 7 5800X
GPU: Nvidia 3060 12GB (this is my planned upgrade to this as its the most affordable option to start, from what i see)
Ram: 64 GB DDR4
would this be ok to start hosting my AI? i preferably want to do this through docker, and try and send queries using a copilot-like extension in vscode.
also, at some point i want to train a model on specific topics to enhance my studying efficiency. i understand that my GPU would not be able to do this, but are there services available to train a model on this data so i can host it myself?
apologies for sounding like a complete noob, but after lurking for the past day i'm really intrigued by what i can do with my own hardware now.
| 2023-08-22T03:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/15xtm21/localllama_noob_are_my_specs_ok_to_start_hosting/ | asetofaces | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xtm21 | false | null | t3_15xtm21 | /r/LocalLLaMA/comments/15xtm21/localllama_noob_are_my_specs_ok_to_start_hosting/ | false | false | self | 13 | null |
What is the best model for Roleplay and Storyteller for Oobabooga? | 1 | Title | 2023-08-22T02:44:23 | https://www.reddit.com/r/LocalLLaMA/comments/15xskg0/what_is_the_best_model_for_roleplay_and/ | TheDonnyDoggo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xskg0 | false | null | t3_15xskg0 | /r/LocalLLaMA/comments/15xskg0/what_is_the_best_model_for_roleplay_and/ | false | false | self | 1 | null |
Model suggestions for coding + workflow? | 26 | I've been experimenting with LLMs mostly for entertainment purposes but would like to try using them for something more productive for a change.
I'm looking for something to assist with code writing + debugging (primary languages: R, Python, C++), and text summarization/outline writing/reorganization/reformatting/etc. ChatGPT has been okay so far for the text tasks, but I'm looking for something local that I can theoretically use for actual work instead of hobby writing with less worry about privacy/security.
With my current home office setup I can run a 13B Q4 quantized model pretty easily with llama.cpp, and have the resources to run a 33B Q4 quantized model (slowly) if needed.
What models have you guys had good success with for these tasks? Are any 13B models currently powerful enough to do these tasks decently, or am I out of luck unless I can run larger ones (e.g. 70B)?
P.S. I'm also curious what UIs work well for coding tasks, since the ones I use for hobby things (SillyTavern / Agnaistic ) are probably not ideal for workflow unless I *really* need an anime waifu overseeing my work. | 2023-08-22T00:25:42 | https://www.reddit.com/r/LocalLLaMA/comments/15xpdfg/model_suggestions_for_coding_workflow/ | big_kitty_enjoyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xpdfg | false | null | t3_15xpdfg | /r/LocalLLaMA/comments/15xpdfg/model_suggestions_for_coding_workflow/ | false | false | self | 26 | null |
Small tip: Be mindful of passive VRAM consumption | 1 | [removed] | 2023-08-22T00:22:23 | https://www.reddit.com/r/LocalLLaMA/comments/15xpajq/small_tip_be_mindful_of_passive_vram_consumption/ | kindacognizant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xpajq | false | null | t3_15xpajq | /r/LocalLLaMA/comments/15xpajq/small_tip_be_mindful_of_passive_vram_consumption/ | false | false | self | 1 | null |
Training TheBloke Llama 2 7b Chat GPTQ | 1 | [removed] | 2023-08-21T23:50:54 | https://www.reddit.com/r/LocalLLaMA/comments/15xoibu/training_thebloke_llama_2_7b_chat_gptq/ | skdidjsj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xoibu | false | null | t3_15xoibu | /r/LocalLLaMA/comments/15xoibu/training_thebloke_llama_2_7b_chat_gptq/ | false | false | self | 1 | null |
Free Online 8k/16k+ llama2 7b/13b/70b? | 0 | Anyone? | 2023-08-21T23:32:14 | https://www.reddit.com/r/LocalLLaMA/comments/15xo1fu/free_online_8k16k_llama2_7b13b70b/ | SakamotoKyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xo1fu | false | null | t3_15xo1fu | /r/LocalLLaMA/comments/15xo1fu/free_online_8k16k_llama2_7b13b70b/ | false | false | self | 0 | null |
digestable introduction to fine-tuning | 11 | does anyone any frontend libraries that lets you fine tune through a UI? | 2023-08-21T23:09:34 | https://github.com/facebookresearch/llama-recipes/tree/main | LyPreto | github.com | 1970-01-01T00:00:00 | 0 | {} | 15xngf8 | false | null | t3_15xngf8 | /r/LocalLLaMA/comments/15xngf8/digestable_introduction_to_finetuning/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'eLKD8gOVthCQeqwrQ2HrGab0RwonNOvsOfAV-r9asfs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=108&crop=smart&auto=webp&s=79c7aad9743792f0b9b3b0052fc1d95470da1d6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=216&crop=smart&auto=webp&s=9b7459cbc40654b9778f4bf567ec242b3dfca42a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=320&crop=smart&auto=webp&s=a2f6954f9c6271eab01d0c2c755b53344418b893', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=640&crop=smart&auto=webp&s=169d68690d8e7f41a445d3800b3c6c1810dc1979', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=960&crop=smart&auto=webp&s=203a3d3d95fd1f4876d6fb53d3c3e9b4cce4d177', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?width=1080&crop=smart&auto=webp&s=f81a93bc5ae3eacb9cbb05505892472d139b51f3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r9Au0QvgiMkb64QZtCVr9ZVIs-aKWLMNYyo77cNPhHY.jpg?auto=webp&s=ac2a5972c11e5915a5598c8c61b6cb455558c7e9', 'width': 1200}, 'variants': {}}]} | |
Stream Llama 2 to your MacBook Using PyXet | 5 | 2023-08-21T22:35:21 | https://xethub.com/blog/stream-llama-2-macbook-minutes-pyxet | semicausal | xethub.com | 1970-01-01T00:00:00 | 0 | {} | 15xmjtu | false | null | t3_15xmjtu | /r/LocalLLaMA/comments/15xmjtu/stream_llama_2_to_your_macbook_using_pyxet/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'h2NCs-fA1bIqcY8f6zuoe7C_zn_P_ckTJNHB_CzpDA4', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=108&crop=smart&auto=webp&s=abb4969ecfca11c2b95e162ba966f5f5a6de436a', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=216&crop=smart&auto=webp&s=32b0b99b855aea7bd748a279ddafa1731dee3281', 'width': 216}, {'height': 195, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=320&crop=smart&auto=webp&s=21b803960b5d8487429fa7057f2a92e29e6182ae', 'width': 320}, {'height': 391, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=640&crop=smart&auto=webp&s=333ba5c57fa8ea882340b0f8ed1f015ab2e21373', 'width': 640}, {'height': 586, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=960&crop=smart&auto=webp&s=b4a01913e163625cba06023a3e85e027056a0339', 'width': 960}, {'height': 660, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?width=1080&crop=smart&auto=webp&s=1ce429d921120f16a7340d7676eb45b72af75823', 'width': 1080}], 'source': {'height': 880, 'url': 'https://external-preview.redd.it/k-mr4HUT-bl9eyih14zv3N0ohl0P8ENLIlR-VOhCWLo.jpg?auto=webp&s=584a1bf24e3eff142859d80fcb1360e46d62f342', 'width': 1440}, 'variants': {}}]} | ||
Math Is No Prob Llama. | 4 | 2023-08-21T21:24:39 | IndividualCup7493 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15xkn6x | false | null | t3_15xkn6x | /r/LocalLLaMA/comments/15xkn6x/math_is_no_prob_llama/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'U8Y7fWdexAMA7rflYIBjkibju4Ni6DpJ3ElaFfSKEX4', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=108&crop=smart&auto=webp&s=acf2c1df054b83811380f5b8c64382380cc3816c', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=216&crop=smart&auto=webp&s=bfc3059c40ed6216305bc3fbe9efd48dabea0c0a', 'width': 216}, {'height': 324, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=320&crop=smart&auto=webp&s=dc5563236877940cccb7fca1ad441598bc332a51', 'width': 320}, {'height': 649, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=640&crop=smart&auto=webp&s=0c600aa9f7dceb70844b4145a8bfd5083d712ecd', 'width': 640}, {'height': 974, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?width=960&crop=smart&auto=webp&s=5ea6a23ecd4b6866a87f97df62998daa87ae36b5', 'width': 960}], 'source': {'height': 1079, 'url': 'https://preview.redd.it/sywhdqrl6jjb1.jpg?auto=webp&s=1c5bd7ca8a3105774cad0d0e7efed88b16c56abc', 'width': 1063}, 'variants': {}}]} | |||
Is the 3090 really the best GPU for 13-30B Models? | 43 | I am wondering if the 3090 is really the most cost-efficient and best GPU overall for inference on 13B/30B parameter models.
If so, I am curious why that's the case. The 3090's inference speed is similar to the A100, which is a GPU made for AI. In addition, this GPU was released a while back. | 2023-08-21T20:33:39 | https://www.reddit.com/r/LocalLLaMA/comments/15xj7mc/is_the_3090_really_the_best_gpu_for_1330b_models/ | ll_Teto_ll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xj7mc | false | null | t3_15xj7mc | /r/LocalLLaMA/comments/15xj7mc/is_the_3090_really_the_best_gpu_for_1330b_models/ | false | false | self | 43 | null |
Running MTP30B on CPU, AND Gpu? | 1 | [removed] | 2023-08-21T20:15:16 | https://www.reddit.com/r/LocalLLaMA/comments/15xip0u/running_mtp30b_on_cpu_and_gpu/ | Overall-Importance54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xip0u | false | null | t3_15xip0u | /r/LocalLLaMA/comments/15xip0u/running_mtp30b_on_cpu_and_gpu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SBT8VvFr6B4mOt0xyPTLhVdfJPlOBZ4fx8E1eWZK1N8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/N2NAHRNIguiui5kk68xCRCinf57Vc3I1mA6eEZBFmlI.jpg?width=108&crop=smart&auto=webp&s=757c2bfa441bababfe3ea962bbdb24ba174d6d73', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/N2NAHRNIguiui5kk68xCRCinf57Vc3I1mA6eEZBFmlI.jpg?width=216&crop=smart&auto=webp&s=46c3abc604f9f5a60376729ebe86e95a713885bb', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/N2NAHRNIguiui5kk68xCRCinf57Vc3I1mA6eEZBFmlI.jpg?width=320&crop=smart&auto=webp&s=7ee0edf530e83dd12d201aee3749f3bada391eb8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/N2NAHRNIguiui5kk68xCRCinf57Vc3I1mA6eEZBFmlI.jpg?auto=webp&s=26410f00954b26c31409d25f79bdddb48071b7ab', 'width': 480}, 'variants': {}}]} |
Comprehensive questions on Llama2. | 1 | [removed] | 2023-08-21T19:28:07 | https://www.reddit.com/r/LocalLLaMA/comments/15xhf2g/comprehensive_questions_on_llama2/ | sujantkv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xhf2g | false | null | t3_15xhf2g | /r/LocalLLaMA/comments/15xhf2g/comprehensive_questions_on_llama2/ | false | false | self | 1 | null |
Comprehensive questions on Llama2. | 1 | [removed] | 2023-08-21T18:32:50 | https://www.reddit.com/r/LocalLLaMA/comments/15xfwei/comprehensive_questions_on_llama2/ | sujantkv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xfwei | false | null | t3_15xfwei | /r/LocalLLaMA/comments/15xfwei/comprehensive_questions_on_llama2/ | false | false | self | 1 | null |
TCO Calculator to compare cost of local deployment vs SaaS solutions | 18 | I made a calculator to compare costs of SaaS and on-prem LLM options, and I wanted to share it with you all! Turns out that deploying your own open-source LLMs has a few more hidden costs than expected. It’s been interesting to play around with comparing costs for OpenAI, Cohere, and Llama 2 70B deployment, and it turns out that cost/request is not always so advantageous for open-source local deployment.
Want to contribute to this calculator to make it more accurate? We’d love your help and feedback!
[https://huggingface.co/spaces/mithril-security/TCO\_calculator](https://huggingface.co/spaces/mithril-security/TCO_calculator) | 2023-08-21T18:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/15xfqb7/tco_calculator_to_compare_cost_of_local/ | Separate-Still3770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xfqb7 | false | null | t3_15xfqb7 | /r/LocalLLaMA/comments/15xfqb7/tco_calculator_to_compare_cost_of_local/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'Q_UcVgoge-jNQC8c2y1wCHsr4F79rffv_A6EvkoVF4A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=108&crop=smart&auto=webp&s=9f15c56b9f99cf318d3eb9eaa15fc5af26163333', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=216&crop=smart&auto=webp&s=fbe7b44a368fc4aa38a891be1148a9a9d7133996', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=320&crop=smart&auto=webp&s=143165fbeb8e28cd20c87190355e74ee9348703e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=640&crop=smart&auto=webp&s=202051b5d917e7dc46e3477acf6b68bffe6388b7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=960&crop=smart&auto=webp&s=2a9efa9f8cb0316fde2cbb5e893acb103cfb216a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?width=1080&crop=smart&auto=webp&s=c207ef1e3127d797b87e842c5143d8d93260ee67', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YAP-habsXVu8OSxzRum1f8TPHXH797MC1y7H7Rl1Phs.jpg?auto=webp&s=057547daa352890cae00c016b5262dc18dca6777', 'width': 1200}, 'variants': {}}]} |
LLM for sexting ? | 0 | I tried the app EVA ai on android. It is good. But it only agrees with what I say and do very generic replies only
Is there a truly uncensored open-source model for this that will do sexting without any limits and create stories of all types of trashy fantasies?
If not what is preventing LLM from doing this. Is it the cost of training and only big companies like EVA ai can train their own uncensored model ? | 2023-08-21T18:20:39 | https://www.reddit.com/r/LocalLLaMA/comments/15xfjzz/llm_for_sexting/ | 3gnude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xfjzz | false | null | t3_15xfjzz | /r/LocalLLaMA/comments/15xfjzz/llm_for_sexting/ | false | false | nsfw | 0 | null |
WizardLM-13B-V1.2 RuntimeError: expected scalar type Half but found Char | 1 | Version 1.0 and 1.1 of 13B works just fine, but with 1.2 I am getting:
**RuntimeError: expected scalar type Half but found Char**
Also the 15B version works okay. Is there something wrong with the model, or should there be some different implementation of usage? I am using LangChain and HuggingFace to import the model. | 2023-08-21T17:19:58 | https://www.reddit.com/r/LocalLLaMA/comments/15xdvqm/wizardlm13bv12_runtimeerror_expected_scalar_type/ | Kukaracax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xdvqm | false | null | t3_15xdvqm | /r/LocalLLaMA/comments/15xdvqm/wizardlm13bv12_runtimeerror_expected_scalar_type/ | false | false | default | 1 | null |
Test your LLM knowledge | 1 | [removed]
[View Poll](https://www.reddit.com/poll/15xduhr) | 2023-08-21T17:18:38 | https://www.reddit.com/r/LocalLLaMA/comments/15xduhr/test_your_llm_knowledge/ | Emergency_Hat9105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xduhr | false | null | t3_15xduhr | /r/LocalLLaMA/comments/15xduhr/test_your_llm_knowledge/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uw3bS85Llt3J3OFTBLdIBwqpPqxfTUNQ4_IG384hNy4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=108&crop=smart&auto=webp&s=c474b6355facd419a844b240ff7ac5bf36a520fd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=216&crop=smart&auto=webp&s=c4771d652869980c05b030a09ccd0eae8cea2711', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=320&crop=smart&auto=webp&s=a84e1c30fec9a210c19d5ca89d26c6c061770cc6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=640&crop=smart&auto=webp&s=764fd9f82e0f68120ac9c3e7c315b290c5e4f099', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=960&crop=smart&auto=webp&s=eac401241419171e12928a8925cca304c722cfa0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?width=1080&crop=smart&auto=webp&s=106a85a01c180667e1a80f7a54f824dab5a682a8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/6nlEtQs-45hf8jT8M9AdNe58IufrI3OyTO2z6_xqHao.jpg?auto=webp&s=1d16f51ce89efc84abebb6e122f20b12f8124a4d', 'width': 1920}, 'variants': {}}]} |
Why am I asking so many QUESTIONS, because I'm stupid. | 1 | [removed] | 2023-08-21T17:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/15xdocr/why_am_i_asking_so_many_questions_because_im/ | sujantkv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xdocr | false | null | t3_15xdocr | /r/LocalLLaMA/comments/15xdocr/why_am_i_asking_so_many_questions_because_im/ | false | false | self | 1 | null |
Extract Function Arguments from Questions via Document QA | 0 | I am wondering if it is possible for an LLM to understand the context of the question and extract embedded arguments from the question that can then be passed into actual function/API calls?
E.g. consider a document containing a table of tourism visits from different countries to a specific country X, and another document that contains the mapping of all countries to continents.
With these two documents, is it possible to post a question "How many tourists from North America visited country X?", where the LLM understands and extracts "North America", and generate a command with "North America" as argument to query the second document for the list of countries in North America? | 2023-08-21T15:55:27 | https://www.reddit.com/r/LocalLLaMA/comments/15xblr1/extract_function_arguments_from_questions_via/ | minisoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xblr1 | false | null | t3_15xblr1 | /r/LocalLLaMA/comments/15xblr1/extract_function_arguments_from_questions_via/ | false | false | self | 0 | null |
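One common pattern for the question above is to ask the model to emit a structured function call and parse it in code, rather than trusting free-form text. A minimal sketch — the response string is hard-coded here to stand in for actual model output, and the function/argument names are made up for illustration:

```python
import json

# Stand-in for what the LLM would return when prompted to answer with
# JSON only, e.g.: 'Respond with {"function": ..., "region": ...}'.
simulated_response = '{"function": "countries_in_region", "region": "North America"}'

call = json.loads(simulated_response)

# The parsed argument can now drive a real lookup against the second
# document (continent -> countries), then a sum over the tourism table.
print(call["region"])  # -> North America
```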
The basics: LLM learning | 3 | I recently came across a tutorial which described the difference between embedding and training an LLM through conversation. It basically boiled down to a source of truth like a textbook vs. learning through a conversation.
Is anybody aware of that article?
Can anybody share a link describing the different stages of LLM learning? | 2023-08-21T15:50:55 | https://www.reddit.com/r/LocalLLaMA/comments/15xbhgf/the_basics_llm_learning/ | JohnDoe365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15xbhgf | false | null | t3_15xbhgf | /r/LocalLLaMA/comments/15xbhgf/the_basics_llm_learning/ | false | false | self | 3 | null |
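A toy illustration of the distinction in the post above: with embeddings, documents are stored as vectors and the closest one is retrieved as a source of truth at query time, while the model's weights never change (fine-tuning, by contrast, updates the weights themselves). The vectors below are made up:

```python
def dot(a, b):
    # Plain dot product as a stand-in for a real embedding similarity.
    return sum(x * y for x, y in zip(a, b))

docs = {
    "textbook chapter on RAM": [0.9, 0.1, 0.0],
    "forum thread on GPUs":    [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of the user's question

best = max(docs, key=lambda name: dot(docs[name], query))
print(best)  # -> textbook chapter on RAM
```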
LLama2 on python | 2 | Hi, I'm trying to use Llama with Python locally. I set up a machine that runs Ubuntu with a 2070 Nvidia. Now I'm trying to set up the script as prompt -> output.
I've been looking on Google for how to do it and I found a working guide, but it uses [llama-2-7b-chat.ggmlv3.q8\_0.bin](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/blob/main/llama-2-7b-chat.ggmlv3.q8_0.bin).
But I requested Llama from Meta and they sent me 6 files (7/13/70B chat/non-chat), and I'm wondering if I can use those directly.
For example llama-2-7b-chat looks like this
​
https://preview.redd.it/bpz5nw377hjb1.png?width=149&format=png&auto=webp&s=8e2aedd5561edaad45ea5a3c01dd8b84cd2873ff
Thanks for the help | 2023-08-21T14:45:50 | https://www.reddit.com/r/LocalLLaMA/comments/15x9qkc/llama2_on_python/ | Outrageous_Ad8520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x9qkc | false | null | t3_15x9qkc | /r/LocalLLaMA/comments/15x9qkc/llama2_on_python/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'n4_Lwh1TuxO7OQvNmDIuq2ka5A1IqCGieDjinkI-a3w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=108&crop=smart&auto=webp&s=4a2aa63c716d0c72b239da2925abe39712931182', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=216&crop=smart&auto=webp&s=f118a130f23ae15f1eaff2eb9e3a02982554c133', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=320&crop=smart&auto=webp&s=f1d8af37de3071d2264f2ad644b4d6624f4505b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=640&crop=smart&auto=webp&s=0aeac4a0e4c034bb303cdd4cb109be7f51655fa1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=960&crop=smart&auto=webp&s=703bf543bbde06867c6dbbd57178f90809bc0920', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?width=1080&crop=smart&auto=webp&s=29d7ce2e8bfdc2139714c54c9be499b16d7a6c66', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Fkp60VMMI7o6pncWRT2-ow0eRN5tQgYoqFvhvMCAGcs.jpg?auto=webp&s=6297b238682356e47ac48a4b1628891efb949f2c', 'width': 1200}, 'variants': {}}]} | |
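Two general pointers on the post above (a sketch, not a definitive recipe): the raw checkpoints Meta sends (typically consolidated.00.pth plus params.json and tokenizer.model) are PyTorch weights, not GGML files, so they usually need conversion (e.g. via llama.cpp's convert script) before a GGML runner can load them. Either way, the -chat variants expect Llama 2's instruction template; a minimal prompt builder:

```python
def build_llama2_prompt(system: str, user: str) -> str:
    # Llama-2-chat instruction template: [INST] ... [/INST] with an
    # optional <<SYS>> block for the system message.
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "What is the capital of France?",
)
print(prompt)
```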
Error after loading the ggml model from koboldcpp.exe | 0 | I got the following error after loading the 13b ggml model in koboldcpp.exe as follows:
Exception happened during processing of request from ('127.0.0.1', 54829)
Traceback (most recent call last):
  File "http\server.py", line 294, in parse_request
Traceback (most recent call last):
ValueError
  File "http\server.py", line 294, in parse_request
During handling of the above exception, another exception occurred:
ValueError
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
  File "socketserver.py", line 316, in _handle_request_noblock
  File "socketserver.py", line 347, in process_request
  File "socketserver.py", line 360, in finish_request
  File "koboldcpp.py", line 322, in __call__
  File "http\server.py", line 647, in __init__
  File "socketserver.py", line 747, in __init__
  File "http\server.py", line 427, in handle
  File "http\server.py", line 405, in handle_one_request
  File "http\server.py", line 307, in parse_request
  File "http\server.py", line 479, in send_error
  File "koboldcpp.py", line 605, in end_headers
Traceback (most recent call last):
AttributeError: 'ServerRequestHandler' object has no attribute 'path'
Please help. I'm unable to find the solution from google | 2023-08-21T14:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/15x9mey/error_after_loading_the_ggml_model_from/ | john1106 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x9mey | false | null | t3_15x9mey | /r/LocalLLaMA/comments/15x9mey/error_after_loading_the_ggml_model_from/ | false | false | self | 0 | null |
Have anyone fine tuned text-davinci-003 using some Orca style dataset? | 4 | Just out of curiosity... Has anyone ever fine-tuned a closed-source OpenAI model on a dataset that follows what is said in the Orca papers?
I know it is really expensive and probably meaningless, but I'm wondering if someone tested it
Thanks in advance... | 2023-08-21T14:40:03 | https://www.reddit.com/r/LocalLLaMA/comments/15x9kvw/have_anyone_fine_tuned_textdavinci003_using_some/ | Distinct-Target7503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x9kvw | false | null | t3_15x9kvw | /r/LocalLLaMA/comments/15x9kvw/have_anyone_fine_tuned_textdavinci003_using_some/ | false | false | self | 4 | null |
NTK RoPE scaling and VLLM | 1 | [removed] | 2023-08-21T14:34:02 | https://www.reddit.com/r/LocalLLaMA/comments/15x9f7p/ntk_rope_scaling_and_vllm/ | cvdbdo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x9f7p | false | null | t3_15x9f7p | /r/LocalLLaMA/comments/15x9f7p/ntk_rope_scaling_and_vllm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IazZnSQDmkS8XsTrLroSiM30cXCdwEp4CiT81OVYynI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=108&crop=smart&auto=webp&s=638372c32bc1624617a45929e67c213c664b1fdd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=216&crop=smart&auto=webp&s=2030c36eed9bdaaf9c9a45d272511550b618ecfc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=320&crop=smart&auto=webp&s=afd9651ca477a71833b6ef4682053dd2a506eb5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=640&crop=smart&auto=webp&s=2531d80a8458d0a44e4ca51c4d3fa6bdbbb72338', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=960&crop=smart&auto=webp&s=e31f8dbc1b658ad6ae40d446f25e3e53894730b6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?width=1080&crop=smart&auto=webp&s=df7e64f4ec4f057141fb9317ec6390a1d5b0069f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sXrwuMw_sxm_JDWTM3P-QrmfOExKOMqsIvi5j_P2l0A.jpg?auto=webp&s=5e00d3a99321f559307d8cdd7b4379a518857882', 'width': 1200}, 'variants': {}}]} |
Torrent training LLM ? | 51 | Hi, I saw a comment on a post saying that chatGPT was a very strong LLM mainly because of the amount of data it was trained on, pricey computers etc.
And I was wondering...
Is there any way to build a strong LLM by peer-to-peer training (like torrents)?
It's just theoretical questioning. | 2023-08-21T14:11:48 | https://www.reddit.com/r/LocalLLaMA/comments/15x8u3r/torrent_training_llm/ | Champignac1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x8u3r | false | null | t3_15x8u3r | /r/LocalLLaMA/comments/15x8u3r/torrent_training_llm/ | false | false | self | 51 | null |
Finetuning for Multiple Choice via LoRA on A Single 4090 | 5 | Hello All -
I'm trying to determine the feasibility of finetuning LLAMA / a LLAMA based architecture for the above task. Effectively removing the head of the model and replacing it with a classification head (0/1) feeding it the prompt followed by an answer and a label (0/1) depending on whether the question answer pair is correct or not.
I have dabbled with the hugging face api to set up a training routine to load LLAMA 2 (sequence classification head with 1 label) in 4 bit mode (bits and bytes).
I then created a LoRA trainable model from the LLAMA backbone and fired off training with peft making sure to use FP16 training.
Only 4M parameters are trainable out of the \~7B:
​
https://preview.redd.it/0mydkv8bmgjb1.png?width=646&format=png&auto=webp&s=98aeeb1e0b029f370ede9c74dc78273ae7a15e29
The training routine (even with small batch sizes) very quickly runs out of GPU memory (24GB); starts tapping into ram instead and eventually grinds to an unusable progress level.
I'm experienced with deep learning, but have not tried to pull down and train an LLM (for obvious reasons).
Is what I am trying to do achievable on consumer hardware? Could the model be sharded across 2 4090s (for example?) thanks!
​ | 2023-08-21T12:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/15x6tim/finetuning_for_multiple_choice_via_lora_on_a/ | creeky123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x6tim | false | null | t3_15x6tim | /r/LocalLLaMA/comments/15x6tim/finetuning_for_multiple_choice_via_lora_on_a/ | false | false | 5 | null | |
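Rough arithmetic on why the 24 GB card in the post above can still OOM even with only ~4M trainable parameters: the 4-bit base weights and the LoRA optimizer state are both small, so the memory is mostly going to activations (common remedies are gradient checkpointing, shorter sequences, or smaller batches). The numbers below are illustrative assumptions:

```python
base_params = 7e9
weights_gb = base_params * 0.5 / 1e9   # 4-bit quantization ~= 0.5 byte/param

lora_params = 4e6
# fp32 LoRA weights + gradients + two Adam moments ~= 16 bytes/param
lora_train_gb = lora_params * 16 / 1e9

print(f"4-bit base weights: ~{weights_gb:.1f} GB")
print(f"LoRA training state: ~{lora_train_gb:.3f} GB")
# The rest of the 24 GB goes to activations; long sequences without
# gradient checkpointing can easily consume it all.
```

Sharding across two 4090s is possible with device_map-style model parallelism, but checking activation memory (checkpointing, batch size, sequence length) first is usually the cheaper fix.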
Train model from scratch (llama.cpp) - any experiences? | 1 | [removed] | 2023-08-21T12:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/15x6nkl/train_model_from_scratch_llamacpp_any_experiences/ | dual_ears | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x6nkl | false | null | t3_15x6nkl | /r/LocalLLaMA/comments/15x6nkl/train_model_from_scratch_llamacpp_any_experiences/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]} |
Prompt: Create deterministic message that takes elements from another message? | 2 | I’m trying some large 70B models to create a chat message that should be constructed from elements of an original message while keeping the conversational flow congruent. Any idea how can I do this?
When I try prompts like "Craft a new message using elements from the following message: {original message}", it keeps ignoring the original message. I'm using the chat interface in ooba. Thanks! | 2023-08-21T12:15:04 | https://www.reddit.com/r/LocalLLaMA/comments/15x5yt5/prompt_create_deterministic_message_that_takes/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x5yt5 | false | null | t3_15x5yt5 | /r/LocalLLaMA/comments/15x5yt5/prompt_create_deterministic_message_that_takes/ | false | false | self | 2 | null |
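One thing that often helps with this kind of task is spelling out the required elements explicitly instead of asking the model to infer them from a bare instruction. A minimal sketch of such a prompt builder (the phrasing and the example message are hypothetical, not from the original post):

```python
# Hypothetical prompt builder: list the exact phrases the model must reuse,
# then show the original message, then demand verbatim reuse.
def build_prompt(original, elements):
    bullet_list = "\n".join(f"- {e}" for e in elements)
    return (
        "You must write a new chat message. It MUST reuse these exact phrases "
        "from the original message:\n"
        f"{bullet_list}\n\n"
        f"Original message:\n\"\"\"{original}\"\"\"\n\n"
        "New message (reuse every phrase above verbatim):"
    )

prompt = build_prompt(
    "The shipment left Hamburg on Monday and should arrive Friday.",
    ["left Hamburg on Monday", "arrive Friday"],
)
print(prompt)
```

You can then check the output programmatically (e.g. assert each phrase appears in the reply) and re-prompt on failure, which makes the behavior closer to deterministic.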
Use Oobabooga API within a streamlit interface | 2 | Dear community,
is there any Python code available showing how to use local LLMs running via Oobabooga in a Streamlit interface?
I would appreciate any help.
Thank you! | 2023-08-21T11:33:21 | https://www.reddit.com/r/LocalLLaMA/comments/15x51jh/use_oobabooga_api_within_a_streamlit_interface/ | Plane_Discussion_924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x51jh | false | null | t3_15x51jh | /r/LocalLLaMA/comments/15x51jh/use_oobabooga_api_within_a_streamlit_interface/ | false | false | self | 2 | null |
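As a starting point, here is a minimal sketch. It assumes the legacy Oobabooga blocking API (started with the `--api` flag, default port 5000); endpoint names and payload fields may differ in newer versions, so check the API examples shipped with the webui:

```python
# Minimal sketch: call the (assumed) legacy Oobabooga blocking API,
# then wire it to Streamlit. Stdlib only for the HTTP part.
import json
import urllib.request

API_URL = "http://localhost:5000/api/v1/generate"

def build_payload(prompt, max_new_tokens=200, temperature=0.7):
    return {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
    }

def generate(prompt):
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(API_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["results"][0]["text"]

# Streamlit front end (save as app.py, run with `streamlit run app.py`):
# import streamlit as st
# st.title("Local LLM chat")
# user_input = st.text_input("Prompt")
# if user_input:
#     st.write(generate(user_input))
```

The Streamlit part is just the commented lines at the bottom: any function that returns text can be wired to `st.write`.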
Open LLM Leaderboard excluded 'contaminated' models. | 67 | 2023-08-21T10:09:17 | https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard | ambient_temp_xeno | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15x3d3b | false | null | t3_15x3d3b | /r/LocalLLaMA/comments/15x3d3b/open_llm_leaderboard_excluded_contaminated_models/ | false | false | 67 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} | ||
Best software and model for a personal assistant | 1 | hi all,
I was wondering if you could help me with some guidance.
I'm trying to create my own assistant for a work environment.
This assistant should have access to mail, agenda and tasks (Outlook), but also to Jira and Confluence, and it should be able to understand a large set of documents (mainly PDF, Word, PPT, Excel and TXT files).
I would like to train the LLM on our business processes, IT architecture, high-level software designs, low-level/API designs, Swagger files, etc.
The required functionality should be something like:
- provide a custom work-related knowledge base
- provide a to do list
- create mail/responses in draft
- generate documents
- generate pages in confluence
- generate (UML) diagrams
I want to run this (initially) on my laptop (12th-gen i5, 32 GB RAM, Windows 10) and am looking for the best LLM model and software for the job.
It doesn't have to be fast ;)
Please provide any suggestions or feedback, links to posts/blogs or other websites with useful info about setting something like this up, and also let me know if it's an impossible task.
Thanks! | 2023-08-21T08:25:16 | https://www.reddit.com/r/LocalLLaMA/comments/15x1frd/best_software_and_model_for_a_personal_assistant/ | e-nigmaNL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x1frd | false | null | t3_15x1frd | /r/LocalLLaMA/comments/15x1frd/best_software_and_model_for_a_personal_assistant/ | false | false | self | 1 | null |
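For the knowledge-base part, the usual pattern is retrieval-augmented generation: retrieve the document chunks relevant to a question, then stuff them into the LLM prompt. A toy stdlib-only sketch of the retrieval step (a real setup would use embeddings and a vector store; the example documents here are made up):

```python
# Naive keyword-overlap retrieval over text snippets, stdlib only.
# Shows the retrieve-then-prompt pattern a local assistant needs.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def retrieve(query, docs, k=2):
    """Score each doc by keyword overlap with the query; return top-k hits."""
    q = Counter(tokenize(query))
    scored = [(sum((Counter(tokenize(d)) & q).values()), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

docs = [
    "The payment API uses OAuth2 and exposes /v1/charge.",
    "Confluence page: onboarding checklist for new developers.",
    "Jira workflow: tickets move from Backlog to In Progress to Done.",
]
context = retrieve("How do I call the payment API?", docs)
print(context)
```

The retrieved `context` would then be prepended to the prompt before it is sent to whatever local model you pick; on a CPU-only laptop, a quantized 7B model is a realistic starting point.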
Sanity check: expected 3090 performance on non-quantized 13B | 8 | As per title, I need some perspective on what performance should be like. So far, I'd only used quantized models and got (ballpark) 20 tokens per second output, so very usable.
Since the 3090 has plenty of VRAM to fit a non-quantized 13b, I decided to give it a go but performance tanked dramatically, down to 1-2 tokens per second.
Before I blindly tinker with settings, is this to be expected or am I doing something wrong? Using ooba, I loaded the model with "transformers" (other loaders didn't seem to work) and did not change anything from default. | 2023-08-21T07:55:47 | https://www.reddit.com/r/LocalLLaMA/comments/15x0wi8/santiy_check_expected_3090_performance_on/ | Herr_Drosselmeyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15x0wi8 | false | null | t3_15x0wi8 | /r/LocalLLaMA/comments/15x0wi8/santiy_check_expected_3090_performance_on/ | false | false | self | 8 | null |
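That slowdown is consistent with the model simply not fitting in VRAM: an fp16 13B model needs roughly 2 bytes per parameter, so transformers offloads layers to system RAM. Quick arithmetic (the bytes-per-parameter figures are rough rules of thumb):

```python
# Why full-precision 13B doesn't fit on a 24 GB card: size arithmetic.
def model_gb(n_params, bytes_per_param):
    return n_params * bytes_per_param / 1e9

fp16 = model_gb(13e9, 2)     # ~26 GB of weights alone -- exceeds 24 GB
q4   = model_gb(13e9, 0.56)  # ~7 GB for a 4-bit quant (plus KV cache overhead)
print(f"fp16: {fp16:.0f} GB, 4-bit: {q4:.1f} GB")
```

Since the fp16 weights alone exceed the 3090's 24 GB (before counting the KV cache), the observed 1-2 tokens/second is expected behavior rather than a misconfiguration; the 4-bit quant fits entirely on the card, which is why it runs an order of magnitude faster.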