title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Whats the best way to run LLaMA on a M1 Max? Best Model? | 6 | Ive been using a couple but ive been kind of randomly using them. And im not sure im using the best method to run them. Anyone want to share their experience on m1 ? | 2023-06-22T04:04:38 | https://www.reddit.com/r/LocalLLaMA/comments/14fsu1c/whats_the_best_way_to_run_llama_on_a_m1_max_best/ | MumeiNoName | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fsu1c | false | null | t3_14fsu1c | /r/LocalLLaMA/comments/14fsu1c/whats_the_best_way_to_run_llama_on_a_m1_max_best/ | false | false | self | 6 | null |
New subreddit for AI prompt requests | 2 | [removed] | 2023-06-22T03:37:09 | https://www.reddit.com/r/LocalLLaMA/comments/14fsaaq/new_subreddit_for_ai_prompt_requests/ | No-Transition3372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fsaaq | false | null | t3_14fsaaq | /r/LocalLLaMA/comments/14fsaaq/new_subreddit_for_ai_prompt_requests/ | false | false | default | 2 | null |
What models are GPT4All-J compatible? | 7 | [removed] | 2023-06-22T03:02:25 | https://www.reddit.com/r/LocalLLaMA/comments/14frl6s/what_models_are_gpt4allj_compatible/ | card_chase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14frl6s | false | null | t3_14frl6s | /r/LocalLLaMA/comments/14frl6s/what_models_are_gpt4allj_compatible/ | false | false | default | 7 | null |
How to Use Langchain with Exllama? | 9 | Is that possible to use Langchain with Exllama?
I'd appreciate any code snippet. | 2023-06-22T02:06:45 | https://www.reddit.com/r/LocalLLaMA/comments/14fqfdu/how_to_use_langchain_with_exllama/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fqfdu | false | null | t3_14fqfdu | /r/LocalLLaMA/comments/14fqfdu/how_to_use_langchain_with_exllama/ | false | false | self | 9 | null |
Automatic Stock Analysis - Scraping Twitter + GPT4 (or llama) | 1 | 2023-06-22T01:51:40 | https://www.linkedin.com/posts/kristintynski_gpt4-gpt4-gpt35turbo16k-activity-7077372458848145408-xcx7?utm_source=share&utm_medium=member_desktop | transtwin | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 14fq3w0 | false | null | t3_14fq3w0 | /r/LocalLLaMA/comments/14fq3w0/automatic_stock_analysis_scraping_twitter_gpt4_or/ | false | false | default | 1 | null | |
Yet another fun test (astronomy) | 6 | All models were run with ooba/llama-precise + default model-specific instruct prompt that ooba chose. The data part is from Wolfram.
Prompt:
>Which planets can I see tonight from Honolulu on 2023-03-25 at precisely 9:00pm UTC-10? Use the data below to answer.
>
>planet | altitude | azimuth
>
>Mercury | -23° 5' 20" (below horizon) | 286° 8' 28" (WNW)
>
>Venus | 3° 51' 17.3" | 285° 28' 2" (WNW)
>
>Mars | 53° 1' 20" | 284° 34' 50" (WNW)
>
>Jupiter | -19° 40' 21" (below horizon) | 284° 57' 50" (WNW)
>
>Saturn | -65° 14' 13" (below horizon) | 288° 24' 1" (WNW)
>
>Uranus | 9° 5' 41.9" | 284° 13' 20" (WNW)
>
>Neptune | -41° 53' 35" (below horizon) | 286° 9' 14" (WNW)
Wizard-Vicuna-7B-Uncensored-GPTQ
>Based on the given information, you can see Mercury, Venus, Mars, Jupiter, Saturn, Uranus and Neptune from Honolulu on March 25th at precisely 9:00pm UTC-10. However, due to the Earth's rotation, these planets will appear in different positions relative to each other throughout the night.
airoboros 7B
>To find out which planets you can see tonight from Honolulu on March 25th, 2023 at exactly 9:00 pm UTC-10, we need to use the following information:
>
>Reference time: 2023-03-25T21:00:00Z (UTC+10)
>
>Planet altitudes and azimuths are given in decimal degrees. You can convert them into degrees using this formula:
>
>Decimal degree \* π / 180 ≈ Radian
>
>For example, if a planet is at an altitude of 3.51, then its angle would be approximately equal to 3.51 × π/180 = 2.07 radians or about 118.5 degrees.
>
>Now let's calculate the local sidereal times for each planet:
>
>Mercury: LST = 19h + 2m;
>
>Venus: LST = 19h + 3m;
>
>Mars: LST = 19h + 4m;
>
>Jupiter: LST = 19h + 5m;
>
>Saturn: LST = 19h + 6m;
>
>Uranus: LST = 19h + 7m;
>
>Neptune: LST = 19h + 8m;
>
>Finally, we can determine whether they will be visible during that specific hour by comparing their altitudes with the horizon height:
>
>Mercury: -23° 5' 20" < 0°
>
>Venus: 3° 51' 17.3" > 0°
>
>Mars: 53° 1' 20" > 0°
>
>Jupiter: -19° 40' 21" < 0°
>
>Saturn: -65° 14' 13" < 0°
>
>Uranus: 9° 5' 41.9" > 0°
>
>Neptune: -41° 53' 35" < 0°
>
>So, all planets except Mercury and Venus are above the horizon at this time in Honolulu (which is located on a latitude of approximately 21°N).
WizardLM-7B-V1.0-Uncensored-GPTQ
> You should be able to see Mercury, Venus and Mars from Honolulu on March 25th at 9 pm UTC-10. However, Jupiter will not be visible due to its low altitude of -19 degrees below the horizon. Saturn is also too close to the horizon for visibility. Uranus and Neptune are both too far away in the sky to be seen with the naked eye.
wizardlm-13b-v1.0-uncensored.ggmlv3.q5\_K\_M
>From Honolulu on March 25th at precisely 9pm UTC-10, you can see Mercury below the horizon, Venus in the western sky with an altitude of 3 degrees and 51 minutes, Mars in the southwestern sky with an altitude of 53 degrees and 1 minute, Jupiter also below the horizon, Saturn below the horizon as well, Uranus in the west-northwest sky with an altitude of 9 degrees and 5 minutes, and Neptune is not visible because it's below the horizon.
ChatGPT-4
>On 2023-03-25 at precisely 9:00pm UTC-10 in Honolulu, you will be able to see the following planets:
>
>Venus: Altitude 3° 51' 17.3" | Azimuth 285° 28' 2" (WNW)
>
>Mars: Altitude 53° 1' 20" | Azimuth 284° 34' 50" (WNW)
>
>Uranus: Altitude 9° 5' 41.9" | Azimuth 284° 13' 20" (WNW)
>
>The other planets (Mercury, Jupiter, Saturn, and Neptune) are below the horizon and will not be visible.
I think there is usefulness in using an automated script to query some kind of trustworthy source (like Wolfram, or even Wikipedia search) and then append that data (which is not necessarily easily understood) right into the prompt going into an LLM in order to produce a human-friendly result.
Sadly, only ChatGPT-4 got it right today. | 2023-06-22T00:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/14foqxx/yet_another_fun_test_astronomy/ | Mizstik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14foqxx | false | null | t3_14foqxx | /r/LocalLLaMA/comments/14foqxx/yet_another_fun_test_astronomy/ | false | false | self | 6 | null |
Prompt engineering: What are some good prompts you have found to make the chatbot emulate personalities? | 16 | I’m trying to test different LLMs on who can emulate personalities most accurately and was wondering if anyone has good ideas of how to compose the prompt so that the LLMs behaves the way we expect. The personality will be given as a description within the prompt context/token itself so we’re not trying to emulate any famous person that LLM already knows.
I think there was a post somewhere that gave some guidelines on how to create a character. Does anyone know where we can find it? | 2023-06-21T22:43:37 | https://www.reddit.com/r/LocalLLaMA/comments/14flwpw/prompt_engineering_what_are_some_good_prompts_you/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14flwpw | false | null | t3_14flwpw | /r/LocalLLaMA/comments/14flwpw/prompt_engineering_what_are_some_good_prompts_you/ | false | false | self | 16 | null |
Bright Eye: free mobile AI app to generate art and text.(GPT-4 powered) | 0 | [removed] | 2023-06-21T22:21:32 | https://www.reddit.com/r/LocalLLaMA/comments/14flddm/bright_eye_free_mobile_ai_app_to_generate_art_and/ | Psychological_Ad4766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14flddm | false | null | t3_14flddm | /r/LocalLLaMA/comments/14flddm/bright_eye_free_mobile_ai_app_to_generate_art_and/ | false | false | default | 0 | null |
Finetune bug with Llama 13, 30 and 65B | 6 | Hey all!
I am trying to finetune the Llama models using the stanford alpaca repo. I can successfully finetune a 7B model, but all of the larger models have a strange issue where the initial loss is massive (1e5) and immediately drops to 0.0 for the rest of the finetune process. The model that I get at the end of the finetune run is completely broken and doesn't know how to answer questions.
I am running the finetune script on 8 A100 80GB GPUs.
If anyone has any ideas as to why I would only get this problem with models larger than 7B and how I could solve it, I would greatly appreciate it. | 2023-06-21T22:19:39 | https://www.reddit.com/r/LocalLLaMA/comments/14flbky/finetune_bug_with_llama_13_30_and_65b/ | Svvance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14flbky | false | null | t3_14flbky | /r/LocalLLaMA/comments/14flbky/finetune_bug_with_llama_13_30_and_65b/ | false | false | self | 6 | null |
Hearing Scheduled for Tomorrow with CEO of HuggingFace - Artificial Intelligence: Advancing Innovation Towards the National Interest | 1 | [deleted] | 2023-06-21T22:19:01 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14flazk | false | null | t3_14flazk | /r/LocalLLaMA/comments/14flazk/hearing_scheduled_for_tomorrow_with_ceo_of/ | false | false | default | 1 | null | ||
Hearing Scheduled for Tomorrow with Hugging Face CEO - Artificial Intelligence: Advancing Innovation Towards the National Interest | 57 | [Stream it here June 22, 2023 10:00AM EST](https://www.youtube.com/live/7s_Wv5VklVA)
[Artificial Intelligence: Advancing Innovation Towards the National Interest](https://science.house.gov/hearings?ID=441AF8AB-7065-45C8-81E0-F386158D625C)
[Hearing Charter \[PDF\]](https://republicans-science.house.gov/_cache/files/f/8/f89a11b6-183c-45d1-99a6-4d2873504261/4CEBD5D89D69186E01D99B7E5F53A361.2023-06-22-ai-hearing-charter.pdf)
​
https://preview.redd.it/enlld2wk1g7b1.png?width=742&format=png&auto=webp&s=b1e4b5d5e33013536a45168f377f305c9c4ca76b | 2023-06-21T22:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/14fkzcj/hearing_scheduled_for_tomorrow_with_hugging_face/ | Basic_Description_56 | self.LocalLLaMA | 2023-06-22T05:09:50 | 0 | {} | 14fkzcj | false | null | t3_14fkzcj | /r/LocalLLaMA/comments/14fkzcj/hearing_scheduled_for_tomorrow_with_hugging_face/ | false | false | 57 | {'enabled': False, 'images': [{'id': 'qAQ0onF1XqFRiBxHoF5saS8LpmACVTHzxE9cK3sUNhg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=108&crop=smart&auto=webp&s=91d426842e7a66f7718ba0a392614ce8d1bb4b04', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=216&crop=smart&auto=webp&s=9f8ebbb5e1b16cb571d7fdbf7559dffc9eef0dc6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=320&crop=smart&auto=webp&s=8c81b43000e21e361d57fa8f5b115b5c42abe739', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=640&crop=smart&auto=webp&s=5d922bac878e561d0930fbf5edea6cc250b6eba6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=960&crop=smart&auto=webp&s=3503b59e9151f2f99c8d11b66a886ebaa51c8468', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?width=1080&crop=smart&auto=webp&s=55a7e6f4e4bc50056f080571ab41a071d51edd91', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/LisAhqObgoc8MwcSz5KXOWBRW66ZQFIsvFs3HA5XiJQ.jpg?auto=webp&s=600a488fbd23f147530002ce338c35078fa31a0f', 'width': 1280}, 'variants': {}}]} | |
Any arch Linux users with AMD GPU? | 10 | Hi, I'm trying to get a LLM run under arch Linux with a AMD GPU? But for now without success. Anybody here who can help? Tried oobabooga but I was not able to get it to run with rocm. How is your experience so far? | 2023-06-21T21:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/14fkibn/any_arch_linux_users_with_amd_gpu/ | RudeboyRudolfo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fkibn | false | null | t3_14fkibn | /r/LocalLLaMA/comments/14fkibn/any_arch_linux_users_with_amd_gpu/ | false | false | self | 10 | null |
Any arch Linux users here with and GPU? | 1 | [deleted] | 2023-06-21T21:43:58 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14fkemo | false | null | t3_14fkemo | /r/LocalLLaMA/comments/14fkemo/any_arch_linux_users_here_with_and_gpu/ | false | false | default | 1 | null | ||
How to download models from HF from terminal without git LFS or sudo | 0 | [deleted] | 2023-06-21T20:30:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14fiht1 | false | null | t3_14fiht1 | /r/LocalLLaMA/comments/14fiht1/how_to_download_models_from_hf_from_terminal/ | false | false | default | 0 | null | ||
A simple way to "Extending Context to 8K"?! | 161 | 2023-06-21T19:14:58 | https://kaiokendev.github.io/til#extending-context-to-8k | pseudonerv | kaiokendev.github.io | 1970-01-01T00:00:00 | 0 | {} | 14fgjqj | false | null | t3_14fgjqj | /r/LocalLLaMA/comments/14fgjqj/a_simple_way_to_extending_context_to_8k/ | false | false | default | 161 | null | |
Offering a stipend for researchers to train open-source fine-tuned models | 23 | [removed] | 2023-06-21T18:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/14fg430/offering_a_stipend_for_researchers_to_train/ | newy66 | self.LocalLLaMA | 2023-06-21T19:19:09 | 0 | {} | 14fg430 | true | null | t3_14fg430 | /r/LocalLLaMA/comments/14fg430/offering_a_stipend_for_researchers_to_train/ | false | false | default | 23 | null |
Transformers samplers added to exllama on oobabooga text-gen-webui, so all the samplers of GPTQ-for-LLaMA now works in exllama! | 38 | 2023-06-21T18:46:18 | https://github.com/oobabooga/text-generation-webui/commit/580c1ee7483e9934b7884ff0071c1784060ba6eb | panchovix | github.com | 1970-01-01T00:00:00 | 0 | {} | 14ffvcn | false | null | t3_14ffvcn | /r/LocalLLaMA/comments/14ffvcn/transformers_samplers_added_to_exllama_on/ | false | false | 38 | {'enabled': False, 'images': [{'id': 'A0zS9fWyOdGCf-jPZS_FPiAT_xwyux6C0Vz2GrqP4ik', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=108&crop=smart&auto=webp&s=af17d16e6e029ee27991152f543a05d71bdcb2ed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=216&crop=smart&auto=webp&s=1fce5c138395b1b5f44f391081e0abfa5015756f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=320&crop=smart&auto=webp&s=1876c19e6a70ddaf6698a27a2d1379c04fb61790', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=640&crop=smart&auto=webp&s=a64ff67f3819cf19d5f4c08e9b3332ced3597bf7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=960&crop=smart&auto=webp&s=44f5ba3cc9d15af251af5ab6a2188ea56b172dfe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?width=1080&crop=smart&auto=webp&s=15265d74466e9e9e05f155a23d0c5ed032faf55d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tSUXidxTjZE9W3lEEaRxSJNwZDu_azfGUBErg6Rr2ow.jpg?auto=webp&s=bdc5ecb29df84d3364cc5ac3bb46da725efc778a', 'width': 1200}, 'variants': {}}]} | ||
Training LLaMA with Unstructured Data | 5 | I'm new to these topics, and I couldn't find much information about training the LLaMA models with documents. For example, is it possible to fine-tune Guanaco models with documents? How can I train these models with unstructured data? | 2023-06-21T18:02:40 | https://www.reddit.com/r/LocalLLaMA/comments/14fesbj/training_llama_with_unstructured_data/ | ARBasaran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fesbj | false | null | t3_14fesbj | /r/LocalLLaMA/comments/14fesbj/training_llama_with_unstructured_data/ | false | false | self | 5 | null |
Local rig configuration for llama and LLM | 6 | [removed] | 2023-06-21T16:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/14fcko5/local_rig_configuration_for_llama_and_llm/ | hehsbbslwh142538 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fcko5 | false | null | t3_14fcko5 | /r/LocalLLaMA/comments/14fcko5/local_rig_configuration_for_llama_and_llm/ | false | false | default | 6 | null |
Using LLM to create dataset | 1 | [removed] | 2023-06-21T16:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/14fck9r/using_llm_to_create_dataset/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fck9r | false | null | t3_14fck9r | /r/LocalLLaMA/comments/14fck9r/using_llm_to_create_dataset/ | false | false | default | 1 | null |
Starting out with local llm | 1 | [removed] | 2023-06-21T15:59:27 | https://www.reddit.com/r/LocalLLaMA/comments/14fbkve/starting_out_with_local_llm/ | haywirehax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fbkve | false | null | t3_14fbkve | /r/LocalLLaMA/comments/14fbkve/starting_out_with_local_llm/ | false | false | default | 1 | null |
Local vicuna AI for low end pc? | 1 | [removed] | 2023-06-21T15:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/14fbi79/local_vicuna_ai_for_low_end_pc/ | AdamsText | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14fbi79 | false | null | t3_14fbi79 | /r/LocalLLaMA/comments/14fbi79/local_vicuna_ai_for_low_end_pc/ | false | false | default | 1 | null |
Train a local LLM on my files for use as a general-purpose chatbot | 4 | **tl;dr: I'm a noob and want to have a chatbot answer questions about my personal data. Pls help.**
**edit: I searched the subreddit with some new words and was pointed to LoRA training which seems like the starting point, so this might be moot. Please point me to any new resources from there! :)**
Hi Reddit. I need someone smarter than me to help figure out the best way to accomplish this goal. I wasn't sure what subreddit would be appropriate, so please be gentle if I'm in the wrong place.
I have run LLMs on my local machine with some success and the workflow of using a chatbot assistant is becoming really helpful to me. I have also done some light work with ChatGPT with various 'ask your PDF' type tools.
**My goal is to:**
* **Train a language model on a database of markdown files** to incorporate the information in them to their responses.
* **Run the LLM privately, since I would want to feed it personal information** and train it on me/my household specifically.
For example, I would love to be able to ask the chatbot "Remind me the VIN for my old Honda Accord?" and it can answer.
I'm familiar with Python and deploying web apps, but not really familiar with the world of open source LLM and training models on data.
What's the best way to do this? If I could do it securely, it would be nice to run it in the cloud so that my household could access it anywhere, but not really a requirement if it adds too much complexity to the task.
Happy to throw down a Reddit award for nice helpers, including if someone can point me to a better subreddit for this question :)
​ | 2023-06-21T15:54:21 | https://www.reddit.com/r/LocalLLaMA/comments/14fbg4p/train_a_local_llm_on_my_files_for_use_as_a/ | Creative_Addition_29 | self.LocalLLaMA | 2023-06-21T16:02:03 | 0 | {} | 14fbg4p | false | null | t3_14fbg4p | /r/LocalLLaMA/comments/14fbg4p/train_a_local_llm_on_my_files_for_use_as_a/ | false | false | self | 4 | null |
Best LLM for Note and Fact generation | 2 | [removed] | 2023-06-21T15:53:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14fbfnf | false | null | t3_14fbfnf | /r/LocalLLaMA/comments/14fbfnf/best_llm_for_note_and_fact_generation/ | false | false | default | 2 | null | ||
Last Nvidia Drivers let you use the shared memory of your GPU with EXLlama | 80 | My graphics card is an Nvidia RTX 4070 with 12 gigabytes of video memory. I recently upgraded my PC with an additional 32 gigabytes of system RAM, bringing the total to 48 gigabytes.
I tried loading a 33b model(Guanaco is great), with these two options: llama.cpp using a GGML model, or EXLlama using a GPTQ version.
The llama.cpp option was slow, achieving around 0.5 tokens per second. The EXLlama option was significantly faster at around 2.5 tokens per second. It uses system RAM as shared memory once the graphics card's video memory is full, but you have to specify a "gpu-split"value or the model won't load.
I'm writing because I read that the last Nvidia's 535 drivers were slower than the previous versions. So I downgraded, but sadly the shared memory trick no longer works and EXLlama won't load models larger than the GPU's memory capacity... Does anyone have any ideas as to why previous drivers don't work, or anything I could do to gain an additional 1 or 2 tokens per second?
Though the current speed is already impressive for just an RTX 4070 with only 12GB of video memory ...
Thx | 2023-06-21T15:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/14fb9c0/last_nvidia_drivers_let_you_use_the_shared_memory/ | alexthai7 | self.LocalLLaMA | 2023-06-21T16:18:01 | 0 | {} | 14fb9c0 | false | null | t3_14fb9c0 | /r/LocalLLaMA/comments/14fb9c0/last_nvidia_drivers_let_you_use_the_shared_memory/ | false | false | self | 80 | null |
Building koboldcpp_CUDA on Linux | 20 | I've successfully managed to run Koboldcpp CUDA edition on Ubuntu! It's not something you can easily find through a direct search, but with some indirect hints, I figured it out.
On the Koboldcpp GitHub repository, there are no instructions on how to build the CuBlas version, which is crucial for utilizing Nvidia's CUDA cores for text processing and inference in LLMs.
However, I noticed that on the official Koboldcpp releases page ([https://github.com/LostRuins/koboldcpp/releases](https://github.com/LostRuins/koboldcpp/releases)), there is a 300MB executable available for Windows users, named "Koboldcpp CUDA edition." But hey, I'm a Linux user now, and I want my neural networks to work on Linux!
So, I went to the main Koboldcpp GitHub repository ([https://github.com/LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp)) and followed the instructions to install it on Linux: cloning the repo with "git clone [https://github.com/LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp)" and then compiling it with the command "make LLAMA\_OPENBLAS=1 LLAMA\_CLBLAST=1." Everything seemed fine, but where was CUDA? Well, devs of Koboldcpp, don't like Nvidia, which is occasionally mentioned in the releases, so no instructions were provided.
However, Koboldcpp is a fork of llama.cpp, and if you visit their GitHub repository ([https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)), you'll understand that the LLAMA\_OPENBLAS=1 and LLAMA\_CLBLAST=1 parameters are actually building options for llama.cpp, not Koboldcpp. Luckily, there was an instruction on how to build with CuBlas: "make LLAMA\_CUBLAS=1." But before that, I had to download and install the CUDA Toolkit.
To ensure a smooth process, I recommend synchronizing your system first with the following commands:
\`\`\`
sudo apt update
sudo apt upgrade
\`\`\`
Then proceed with the CUDA Toolkit installation:
\`\`\`
wget [https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86\_64/cuda-keyring\_1.0-1\_all.deb](https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb)
sudo dpkg -i cuda-keyring\_1.0-1\_all.deb
sudo apt-get update
sudo apt-get -y install cuda
\`\`\`
Please note that it's a hefty 3.5GB download.
After that, you can build Koboldcpp with the following parameters:
\`\`\`
make LLAMA\_CUBLAS=1 LLAMA\_OPENBLAS=1 LLAMA\_CLBLAST=1
\`\`\`
Once it's done, you can start using it with these parameters:
\`\`\`
\--useclblast 0 0 --gpulayers 14
\`\`\`
For 8GB GPUs and 30B models, 14 layers are optimal, but you can adjust this based on your specific model.
​
Messages of success:
\`\`\`
ggml\_init\_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060 Ti
llama\_model\_load\_internal: using CUDA for GPU acceleration
llama\_model\_load\_internal: mem required = 19136.82 MB (+ 3124.00 MB per state)
llama\_model\_load\_internal: allocating batch\_size x 1 MB = 512 MB VRAM for the scratch buffer
llama\_model\_load\_internal: offloading 14 repeating layers to GPU
llama\_model\_load\_internal: offloaded 14/63 layers to GPU
llama\_model\_load\_internal: total VRAM used: 5632 MB
​ | 2023-06-21T15:35:54 | https://www.reddit.com/r/LocalLLaMA/comments/14faz1d/building_koboldcpp_cuda_on_linux/ | Current-Voice2755 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14faz1d | false | null | t3_14faz1d | /r/LocalLLaMA/comments/14faz1d/building_koboldcpp_cuda_on_linux/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'a1QWRPDvXA-AUqmcv2cSMMAcJ1DAQeFgdSV9fAki7Y0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=108&crop=smart&auto=webp&s=ebb5cac99e579df42039abe960e8a6dc934a7f0b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=216&crop=smart&auto=webp&s=b125a38c636a9563a2b5a68f2008e88cfc9955d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=320&crop=smart&auto=webp&s=9c6140ec0d1d93fcf5650799aaabacd81d37bdf8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=640&crop=smart&auto=webp&s=213b51b599b31106f9ead159ada92fc132b92e40', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=960&crop=smart&auto=webp&s=3256192b8e883d7308fd1e64900fbc0faff76841', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?width=1080&crop=smart&auto=webp&s=4cd8207e5e007aa50ec6a01c872054425ac5bfe5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bGVoIjHELN4vIvAEED21I5-DTSMs5VW2272JrDjd6so.jpg?auto=webp&s=d051687fd49ea02dee7e8d9dff49bd072119066b', 'width': 1200}, 'variants': {}}]} |
Which are the best LLMs that can explain code? | 23 | Given a piece of code, I am looking for open-source LLMs that can explain what the code does in layman terms. Brownie points if it can additionally detect syntactic bugs in the code (i.e., the code will produce compile-time error if it is run). And super gigantic brownie points if it can detect semantic bugs in the code (i.e., the code will not produce compile-time errors, might not necessarily even produce run-time errors, but if it runs will surely produce output that's not expected or desired). | 2023-06-21T15:18:26 | https://www.reddit.com/r/LocalLLaMA/comments/14faiqr/which_are_the_best_llms_that_can_explain_code/ | ResearcherNo4728 | self.LocalLLaMA | 2023-06-21T15:24:01 | 0 | {} | 14faiqr | false | null | t3_14faiqr | /r/LocalLLaMA/comments/14faiqr/which_are_the_best_llms_that_can_explain_code/ | false | false | self | 23 | null |
Opinions on 33b vs 65b models and PC suggestions please. | 7 | Sat wondering about which option to do about my current pc (5950x 128gig ram, 4070/4090) I sold a 3090 and stupidly bought a 4090 a few months back. Currently mostly use the 4070 to game on and 4090 to run 33b models (since cheaper on uk electric) but I'm pondering if I should sell the 4090 and maybe get 2 3090s with the cash. Im not planning to sell the 4070 cause i could use that to make a gaming machine in future if decide to make a dedicated ai machine etc.
So anyways my question is, is it worth spending an upgrade to get a new PSU (currently 1000w) and sell the 4090 to put towards 2x3090 just so I can run 65b models decently fast? Is there much difference between 33b and 65b thats worth spending 100s of dollars lol. Would love to have peoples opinion of those who can run the 65b models, I know can run 65b on cpu too with offloading but its slow as heck and 4070+4090 cant seem to be used together with + cpu layers wise. Also this mb runs in x8/x8 2 3090 would run in x16/x4 pci-e if that is a factor.
Opinions welcome please or advice on 65b models. | 2023-06-21T14:22:02 | https://www.reddit.com/r/LocalLLaMA/comments/14f92zp/opinions_on_33b_vs_65b_models_and_pc_suggestions/ | fluffywuffie90210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14f92zp | false | null | t3_14f92zp | /r/LocalLLaMA/comments/14f92zp/opinions_on_33b_vs_65b_models_and_pc_suggestions/ | false | false | self | 7 | null |
A reminder of why we need "local" LLM: Over 100,000 compromised ChatGPT accounts found for sale on dark web | 1 | 2023-06-21T14:13:07 | https://www.theregister.com/2023/06/20/stolen_chatgpt_accounts/ | kryptkpr | theregister.com | 1970-01-01T00:00:00 | 0 | {} | 14f8v68 | false | null | t3_14f8v68 | /r/LocalLLaMA/comments/14f8v68/a_reminder_of_why_we_need_local_llm_over_100000/ | false | false | default | 1 | null | |
Seeking Guidance on Open-Source LLM Deployment & Fine-Tuning | 0 | [removed] | 2023-06-21T14:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/14f8rns/seeking_guidance_on_opensource_llm_deployment/ | TobyWonKenobi | self.LocalLLaMA | 2023-06-21T14:13:52 | 0 | {} | 14f8rns | false | null | t3_14f8rns | /r/LocalLLaMA/comments/14f8rns/seeking_guidance_on_opensource_llm_deployment/ | false | false | default | 0 | null |
create the best coder open-source in the world? | 61 | Alright. It's time for some action!
[https://arxiv.org/abs/2306.11644](https://arxiv.org/abs/2306.11644)
​
Microsoft showed us you can train smaller llms, so small as 1.3B tokens that beat 3.5 GPT which in my opinion is quite good at coding if you are specific enough on single languages. This means you can also teach architecture principles, test actual code-bases and strategies and even make it able to interact with the system by commands like write file, read file, explore folder, make connections assumptions.
We can say that a 13B model per language is reasonable. Then it means we need to create a democratic way for teaching coding by examples and solutions and algorithms, that we create, curate and use open-source. Much like [sharegpt.com](https://sharegpt.com) but for coding tasks, solutions ways of thinking. We should be wary of 'enforcing' principles rather showing different approaches, as all approaches can have advantages and disadvantages.
This would mean that if it took 4 days of A100 to train that 1.3B, let's say it takes 30 days of a100 to train 13B model. Which we can run locally comfortably, even us AMD peasant mfers on Ubuntu with Rocm (facepalm).
Then all of us coders will have access to this model that we will constantly improve. Let's face it, AI's going to code better than us soon, let's just admit that and work on improving these coders. I also saw someone making a VSCode extension for WizardCoder which is absolutely great that we'll have large compatibility.
Maybe someone can then make a sort-of self-hosted coder pay per tokens as an official way of making money and supporting the project? :)
Also, why not take it a little bit further and have your own LLM trained on your own dataset, we could do this training in 1 day via fine-tuning, and then you can how your own AI running in your own repository that constantly fine-tunes itself. We could then 'cloudify' gpu VRAM and then maybe have an AI trained on a repository written in multiple languages. | 2023-06-21T12:09:47 | https://www.reddit.com/r/LocalLLaMA/comments/14f5zr7/create_the_best_coder_opensource_in_the_world/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14f5zr7 | false | null | t3_14f5zr7 | /r/LocalLLaMA/comments/14f5zr7/create_the_best_coder_opensource_in_the_world/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Q: Simple prompts to test model's core reasoning ability? | 10 | I've seen some people use basic arithmetic questions. I'd love to collect such questions so that we can compare models easily. Ideally that can surface the known gaps between open source models and ChatGPT 3.5 / 4 models. | 2023-06-21T07:54:37 | https://www.reddit.com/r/LocalLLaMA/comments/14f17ix/q_simple_prompts_to_test_models_core_reasoning/ | kkimdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14f17ix | false | null | t3_14f17ix | /r/LocalLLaMA/comments/14f17ix/q_simple_prompts_to_test_models_core_reasoning/ | false | false | self | 10 | null |
What is the significance of n_ctx ? | 3 | I would like to know what is the significance of \`n\_ctx\`.
I know that i represents the maximum number of tokens that the input sequence can be.
But it is given as a parameter which can be set while initializing the model, so that makes me think, if I need to process a longer prompt I will just change it.
Having said that, I still have following queries -
\- Are LLM's developed with an optimal \`n\_ctx\` value?
\- Do different LLM's have different \`n\_ctx\` value?
\- How to check whats the \`n\_ctx\` values of different models
\- Will setting it other than the default value impact the generation speed during inferencing?
Any help is appreciated. | 2023-06-21T06:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/14eznwk/what_is_the_significance_of_n_ctx/ | dhirajsuvarna | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14eznwk | false | null | t3_14eznwk | /r/LocalLLaMA/comments/14eznwk/what_is_the_significance_of_n_ctx/ | false | false | self | 3 | null |
Is it possible to do personal RLHF? | 4 | I have been using the chatbot example from exllama with LLaMA 65B, with various modified prompts. It's a lot of fun.
A difficulty is that there's personality instability from run to run. That's not all bad, since it's a fun game to try to guide the bot in interesting directions. But deliberately guiding the bot (even implicitly, since the base model isn't instruction tuned) makes the chat less natural.
To get the bot to head in the right direction more consistently on its own, I wonder whether there'd be a way to apply RLHF, or some other fine-tuning technique, using my personal preferences. Let's say I were to thumbs-up and thumbs-down some responses. That'd give me a small dataset to use. Is there any technique that could get value from it? | 2023-06-21T06:09:09 | https://www.reddit.com/r/LocalLLaMA/comments/14ezam7/is_it_possible_to_do_personal_rlhf/ | hold_my_fish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ezam7 | false | null | t3_14ezam7 | /r/LocalLLaMA/comments/14ezam7/is_it_possible_to_do_personal_rlhf/ | false | false | self | 4 | null |
Microsoft makes new 1.3B coding LLM that outperforms all models on MBPP except GPT-4, reaches third place on HumanEval above GPT-3.5, and shows emergent properties | 433 | Textbooks Are All You Need
Paper: [https://arxiv.org/abs/2306.11644](https://arxiv.org/abs/2306.11644)
Excerpts:
>In this work, following the footsteps of Eldan and Li, we explore the improvement that can be obtained along a different axis: the *quality* of the data. We demonstrate the power of high quality data in breaking existing scaling laws by training a 1.3B-parameter model, which we call **phi-1**, for roughly 8 passes over 7B tokens (slightly over 50B total tokens seen) followed by finetuning on less than 200M tokens. Despite being several orders of magnitude smaller than competing models, both in terms of dataset and model size, we attain 50.6% pass@1 accuracy on HumanEval and 55.5% pass@1 accuracy on MBPP (Mostly Basic Python Programs), which are one of the best self-reported numbers using only one LLM generation. Moreover, despite being trained on much fewer tokens compared to existing models, phi-1 still displays emergent properties.
>
>Our training relies on three main datasets: A filtered code-language dataset, which is a subset of The Stack and StackOverflow, obtained by using a language model-based classifier (consisting of about 6B tokens); A synthetic textbook dataset consisting of <1B tokens of GPT-3.5 generated Python textbooks; A small synthetic exercises dataset consisting of ∼180M tokens of Python exercises and solutions. Taken together, the above datasets contain less than 7B tokens. The architecture for our 1.3B parameter phi-1 model consists of 24 layers, hidden dimension of 2048, MLP-inner dimension of 8192, and 32 attention heads of dimension 64 each. Aside from FlashAttention, our models *do not* use other new techniques like Fill-In-the-Middle (FIM), or Multi-Query-Attention (MQA) that could further boost performance and efficiency.
>
>The largest improvement in HumanEval resulted from finetuning on the small CodeExercises dataset (<200M tokens). We demonstrate that, quite remarkably the model after finetuning also exhibits a substantial improvement in executing tasks that are not featured in the finetuning dataset. This suggests that our finetuning process might have helped the model in reorganizing and consolidating the knowledge acquired during pretraining, even if such knowledge is not explicitly present in our CodeExercises dataset. By crafting “textbook quality” data we were able to train a model that surpasses almost all open-source models on coding benchmarks such as HumanEval and MBPP despite being 10x smaller in model size and 100x smaller in dataset size.
Extra important excerpt:
>We also believe that significant gains could be achieved by using GPT-4 to generate the synthetic data instead of GPT-3.5, as we noticed that GPT-3.5 data has a high error rate. It is interesting that phi-1 is able to achieve such high coding proficiency despite those errors. | 2023-06-21T06:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/14ez6qf/microsoft_makes_new_13b_coding_llm_that/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ez6qf | false | null | t3_14ez6qf | /r/LocalLLaMA/comments/14ez6qf/microsoft_makes_new_13b_coding_llm_that/ | false | false | self | 433 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Is anyone using local LLMs for their business? | 1 | [removed] | 2023-06-21T05:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/14ey89g/is_anyone_using_local_llms_for_their_business/ | Aggressive-Snow-835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ey89g | false | null | t3_14ey89g | /r/LocalLLaMA/comments/14ey89g/is_anyone_using_local_llms_for_their_business/ | false | false | default | 1 | null |
Llama-cpp-python is slower than llama.cpp by more than 25%. Let's get it resolved | 36 | Llama.cpp is not just 1 or 2 percent faster; it's a whopping 28% faster than llama-cpp-python: 30.9s vs 39.5s.
I came across this issue two days ago and spent half a day conducting thorough tests and creating a detailed bug report for llama-cpp-python. But it seems like nobody cares about it at all.
So, I decided to post the issue on the bug report page of the text-generation-webui, hoping that they will take notice.
If you are a frequent user of the Webui and use ggml files, this issue should matter to you.
Could someone please follow the simple "Steps to Reproduce" that I have listed below and confirm the existence of this issue?
[https://github.com/oobabooga/text-generation-webui/issues/2788](https://github.com/oobabooga/text-generation-webui/issues/2788)
Once you have done that, please leave a comment on my bug report page to bring it to their attention. Thank you! | 2023-06-21T02:56:50 | https://www.reddit.com/r/LocalLLaMA/comments/14evg0g/llamacpppython_is_slower_than_llamacpp_by_more/ | Big_Communication353 | self.LocalLLaMA | 2023-06-21T03:02:09 | 0 | {} | 14evg0g | false | null | t3_14evg0g | /r/LocalLLaMA/comments/14evg0g/llamacpppython_is_slower_than_llamacpp_by_more/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': '2A6pPlrffR6nIlGIdi_x08hRw9_DuJi2nOx4BRixmXQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=108&crop=smart&auto=webp&s=4c36a62d0befa4dd32cce4c8636a5ba3180b4f9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=216&crop=smart&auto=webp&s=01ac80bf5e712a5c1c32aa1b399543d1bba46746', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=320&crop=smart&auto=webp&s=986c17be9713671db662b48b292f505a1f0fd12b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=640&crop=smart&auto=webp&s=8c95d56c5d6b80c10d3a764927b85ca834f2da36', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=960&crop=smart&auto=webp&s=7999c7ee38b4ee63c297be32b2c84de4af6b7ecf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?width=1080&crop=smart&auto=webp&s=77967cb4c59747cf07241d4c9e02915459e58efc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_tX2_Lh-xKbyrTuxCsXYgxOkiEw2GgYL_xysvnkdvIE.jpg?auto=webp&s=144804fc38bf608830a64cae5b54d7c5cda0afcf', 'width': 1200}, 'variants': {}}]} |
Best ggml model for instruction following? | 2 | [removed] | 2023-06-21T02:56:49 | https://www.reddit.com/r/LocalLLaMA/comments/14evg01/best_ggml_model_for_instruction_following/ | eyerfing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14evg01 | false | null | t3_14evg01 | /r/LocalLLaMA/comments/14evg01/best_ggml_model_for_instruction_following/ | false | false | default | 2 | null |
Best bet for parseable output? | 3 | Would like it if I could get output in consistent JSON or something somehow so I could do specific stuff with it. Any models better or worse for this than other? Any prompt tricks? I’ve only experimented a tiny bit so far mostly with Wizard 13b | 2023-06-21T01:35:58 | https://www.reddit.com/r/LocalLLaMA/comments/14etov8/best_bet_for_parseable_output/ | EarthquakeBass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14etov8 | false | null | t3_14etov8 | /r/LocalLLaMA/comments/14etov8/best_bet_for_parseable_output/ | false | false | self | 3 | null |
How to dive deeper into LLMs? | 12 | Hi there! I am from a computer-vision background as LLM is gaining much hype, I am diving deeper into the NLP field as well. I have gone through language models such as BERT and GPT as a lot of resources were available for them. Now I want to dive deeper into large llm. Can you guys provide any roadmap, resources, or Architecture codes to follow? | 2023-06-21T01:23:59 | https://www.reddit.com/r/LocalLLaMA/comments/14etfd1/how_to_dive_deeper_into_llms/ | rajanghimire534 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14etfd1 | false | null | t3_14etfd1 | /r/LocalLLaMA/comments/14etfd1/how_to_dive_deeper_into_llms/ | false | false | self | 12 | null |
Models produce incoherent statements | 1 | [removed] | 2023-06-21T00:57:43 | https://www.reddit.com/r/LocalLLaMA/comments/14esz3l/models_produce_incoherent_statements/ | SpareSink5530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14esz3l | false | null | t3_14esz3l | /r/LocalLLaMA/comments/14esz3l/models_produce_incoherent_statements/ | false | false | default | 1 | null |
What is the fastest way to do inference with a 3/4 bit quantized 13b vicuna model? | 7 | Right now, as far as I've read, the fastest with a gpu is exllama, outperforming everything else. However, I haven't heard anything about how it compares with both llama.cpp gpu inference (wich was merged very recently) and other quantization methods (squeezeLLM is one that stands out a lot). What's the fastest for inference? Right now the environment I have is a free colab, it's only for testing. I'll get something with similar specs for the actual software. | 2023-06-21T00:38:57 | https://www.reddit.com/r/LocalLLaMA/comments/14eskrb/what_is_the_fastest_way_to_do_inference_with_a_34/ | KillerX629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14eskrb | false | null | t3_14eskrb | /r/LocalLLaMA/comments/14eskrb/what_is_the_fastest_way_to_do_inference_with_a_34/ | false | false | self | 7 | null |
Question: Does using, for example, Wizard-Vicuna-7B-Uncensored, make it learn over time in it's local instance ? | 2 | I am really impressed by the performance of the 8-bit Wizard-Vicuna-7B-Uncensored 1.0, and now i would like to use it for work as it's basically doing the same as ChatGPT for my scripting needs, but one thing i find important to know even though i cant seem to find an answer ;
- Does the LLaMa learn over time from the conversations and prompts i'm having with it ?
It would mean that using it actually make it evolve, and the idea and implications it brings are amazing to me.
If not, let me know how i can proceed. I currently am running it over Arch Linux on a 12Gb VRAM, 32Gb RAM machine with a 20-core CPU, so i can involved in power-hungry tasks if required to make it learn. | 2023-06-21T00:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/14es7dj/question_does_using_for_example/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14es7dj | false | null | t3_14es7dj | /r/LocalLLaMA/comments/14es7dj/question_does_using_for_example/ | false | false | self | 2 | null |
vLLM: 24x faster LLM serving than HuggingFace Transformers | 2 | 2023-06-20T22:06:32 | https://vllm.ai/ | iwaswrongonce | vllm.ai | 1970-01-01T00:00:00 | 0 | {} | 14eouux | false | null | t3_14eouux | /r/LocalLLaMA/comments/14eouux/vllm_24x_faster_llm_serving_than_huggingface/ | false | false | default | 2 | null | |
[Rumor] Potential GPT-4 architecture description | 223 | [Source](https://twitter.com/soumithchintala/status/1671267150101721090?s=46&t=dUCVh9akIWxxNUIkrDJwJg) | 2023-06-20T21:52:11 | Shir_man | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14eoh4f | false | null | t3_14eoh4f | /r/LocalLLaMA/comments/14eoh4f/rumor_potential_gpt4_architecture_description/ | false | false | 223 | {'enabled': True, 'images': [{'id': 'NDr8aoAANKipaJwWs-HJ-EKKOoqT2zpqehOVsUyVWQ8', 'resolutions': [{'height': 184, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=108&crop=smart&auto=webp&s=86d1a1bd185d9f206c1cef4e2c79981f6bd843fc', 'width': 108}, {'height': 369, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=216&crop=smart&auto=webp&s=fa1d8bac0f69a14e7f825469f959874d7ad8a1df', 'width': 216}, {'height': 547, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=320&crop=smart&auto=webp&s=ebfe4858420dcedbe7e09494161ee5395dffa86d', 'width': 320}, {'height': 1095, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=640&crop=smart&auto=webp&s=53d2f6b31e7651d6d51f093c0a57ecdff4c270a6', 'width': 640}, {'height': 1642, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=960&crop=smart&auto=webp&s=bf7e222f6642c873048e2ad8b78b71d35d5bcbfb', 'width': 960}, {'height': 1847, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?width=1080&crop=smart&auto=webp&s=30ada3723b254db15d79a13ee5e04d6a13b65d91', 'width': 1080}], 'source': {'height': 2197, 'url': 'https://preview.redd.it/66mu2t61v87b1.jpg?auto=webp&s=0c2041e186cb76193c2b93d3cf3fcb0b63d73160', 'width': 1284}, 'variants': {}}]} | ||
Newb runs into compute bottlenecks | 0 | [deleted] | 2023-06-20T21:48:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14eodyj | false | null | t3_14eodyj | /r/LocalLLaMA/comments/14eodyj/newb_runs_into_compute_bottlenecks/ | false | false | default | 0 | null | ||
Falcoder 7B - Falcon Finetuned with CodeAlpaca w/QLoRA and PEFT | 16 | NOT My Model
​
Falcon-7b fine-tuned on the CodeAlpaca 20k instructions dataset by using the method QLoRA with PEFT library
CodeAlpaca\_20K: contains 20K instruction-following data used for fine-tuning the Code Alpaca model.
[https://huggingface.co/mrm8488/falcoder-7b](https://huggingface.co/mrm8488/falcoder-7b)
​
Found it on youtubes, not sure if it'll be better than StarChat and its 8k length, but might worth it for some machines | 2023-06-20T20:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/14emsoi/falcoder_7b_falcon_finetuned_with_codealpaca/ | ChobPT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14emsoi | false | null | t3_14emsoi | /r/LocalLLaMA/comments/14emsoi/falcoder_7b_falcon_finetuned_with_codealpaca/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'kckByvNI7xP1OTFxNupfAgvrJXZ0fbzbK3KHxeMbvbI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?width=108&crop=smart&auto=webp&s=92210f7206a60246c0bc8be4ae600a7250f4c469', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?width=216&crop=smart&auto=webp&s=b01b6f47e9060f067753855bea107b5a8aeed85b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?width=320&crop=smart&auto=webp&s=cfd467f35e14e1d4ea20a5fdb8b23b5965e2c3b3', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/frxywgHviuEwBJBs1IC_M_vc55iHXsOI7ntVnbQs2rM.jpg?auto=webp&s=d2e3fa709876534e40db43a983a293311724bf3a', 'width': 500}, 'variants': {}}]} |
Preset Arena experiment: last 48 hours. Please vote! | 29 | A few days ago I posted [Preset Arena: 17,205 comparisons between 241 different presets.](https://www.reddit.com/r/LocalLLaMA/comments/14adfw2/preset_arena_17205_comparisons_between_241/)
Since then, **5619** votes have been obtained. THANK YOU to everyone who voted!
I think that 17,205 comparisons may have been too ambitious, so I have decided to let the experiment run for 48 more hours (until 2023-06-22 20:50 UTC) and stop no matter the number of votes at that point.
So I'd like to invite everyone to leave a few more votes if you can. 10 votes already helps a lot. This is the page: https://oobabooga.github.io/arena/index.html
**Why this is important**
By using the optimal parameter values (temperature, top_p, top_k, etc), we can get instantly get better replies from language models. This will benefit everyone regardless of the use case.
**Preliminary results**
https://oobabooga.github.io/arena/preliminary-results.html | 2023-06-20T20:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/14emr26/preset_arena_experiment_last_48_hours_please_vote/ | oobabooga4 | self.LocalLLaMA | 2023-06-20T20:53:28 | 0 | {} | 14emr26 | false | null | t3_14emr26 | /r/LocalLLaMA/comments/14emr26/preset_arena_experiment_last_48_hours_please_vote/ | false | false | self | 29 | null |
HELP! Unable to run HF Inference API for my finetuned falcon-7b model. | 1 | [removed] | 2023-06-20T20:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/14emff6/help_unable_to_run_hf_inference_api_for_my/ | medmac01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14emff6 | false | null | t3_14emff6 | /r/LocalLLaMA/comments/14emff6/help_unable_to_run_hf_inference_api_for_my/ | false | false | default | 1 | null |
Are there any good GUIs for llama.cpp GGML? | 1 | [removed] | 2023-06-20T20:29:04 | https://www.reddit.com/r/LocalLLaMA/comments/14em7fn/are_there_any_good_guis_for_llamacpp_ggml/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14em7fn | false | null | t3_14em7fn | /r/LocalLLaMA/comments/14em7fn/are_there_any_good_guis_for_llamacpp_ggml/ | false | false | default | 1 | null |
Just released - vLLM inference library that accelerates HF Transformers by 24x | 94 | vLLM is an open-source LLM inference and serving library that accelerates HuggingFace Transformers by 24x and powers Vicuna and Chatbot Arena.
Github: [https://github.com/vllm-project/vllm](https://github.com/vllm-project/vllm)Blog post: [https://vllm.ai](https://t.co/M7g6pa8rHj)
* Edit - it wasn't "just released" apparently it's live for several days
​
https://preview.redd.it/nzceocfbg87b1.png?width=1532&format=png&auto=webp&s=44d3ec3bf1797997d14d08164f9835846a9521ce | 2023-06-20T20:28:38 | https://www.reddit.com/r/LocalLLaMA/comments/14em713/just_released_vllm_inference_library_that/ | yanjb | self.LocalLLaMA | 2023-06-20T21:14:45 | 0 | {} | 14em713 | false | null | t3_14em713 | /r/LocalLLaMA/comments/14em713/just_released_vllm_inference_library_that/ | false | false | 94 | {'enabled': False, 'images': [{'id': 'DYs7CkN9PGgRHPUogaL4fxcBexAw6r6seHmUbQMmyv8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=108&crop=smart&auto=webp&s=dd760df847a7e0e0e0b4b9205e8fb93196f3fc5d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=216&crop=smart&auto=webp&s=beab0a360cbb5b6cbce30648282cb1e2f8f54c5f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=320&crop=smart&auto=webp&s=931402258e06143b333a65baf41553f5e99086b6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=640&crop=smart&auto=webp&s=925bf5bfe0e23d9b27e646577260ed45fa7d09bb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=960&crop=smart&auto=webp&s=add39892e3b46c877d6b29f06b9173563509d20e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?width=1080&crop=smart&auto=webp&s=403a045e49feee53067a9acf66b3048ad06ab8e6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/60fMZ9sfWGJnltAGF_Gi7alsRI5ZOGDt0tl871iLwNY.jpg?auto=webp&s=37e44b0c954ce8ce05659ca5890eb7449df59639', 'width': 1200}, 'variants': {}}]} | |
George Hotz questions GGML & Quantization in general without better testing (Latent Space Podcast)(29:07) | 3 | 2023-06-20T20:00:38 | https://www.latent.space/p/geohot | jimmy6dof | latent.space | 1970-01-01T00:00:00 | 0 | {} | 14eleza | false | null | t3_14eleza | /r/LocalLLaMA/comments/14eleza/george_hotz_questions_ggml_quantization_in/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'pkzIN84fxa_C8ZMFCGKm0A627gqydXLQchUXyHRfYFs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?width=108&crop=smart&auto=webp&s=3445ffb4927134f3b80b90aef452c3f15652f229', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?width=216&crop=smart&auto=webp&s=a21a55ee70a65ea1b8649a5f5c919e43d8d68e13', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?width=320&crop=smart&auto=webp&s=b68c45c8d9c5e94e261d822adfda6ef3e8dfc44b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?width=640&crop=smart&auto=webp&s=96999dbbe6c6d6c7314fe0679f4239cd6a2f0851', 'width': 640}], 'source': {'height': 410, 'url': 'https://external-preview.redd.it/43Cow787MPCXmqnWhBPg03qsonjcQchSgg0uUY9ME_M.jpg?auto=webp&s=a1183554670fe502db4c0c9eb146569ad2505f28', 'width': 728}, 'variants': {}}]} | ||
Error when attempting inference with LoRA applied (see comments) | 1 | 2023-06-20T18:47:19 | yuicebox | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14ejg6r | false | null | t3_14ejg6r | /r/LocalLLaMA/comments/14ejg6r/error_when_attempting_inference_with_lora_applied/ | false | false | default | 1 | null | ||
What is the best 7b LLM | 10 | With all these fine-tuned models floating around it's getting harder to pick the best model for projects.
Any idea which 7b LLM has the best eval scores on ARC, HellaSwag, MMLU or TruthfulQA. I took a look at [HF leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) but this doesn't include a lot of the new LLMs like OpenLlama.
Thanks! | 2023-06-20T18:36:36 | https://www.reddit.com/r/LocalLLaMA/comments/14ej5sn/what_is_the_best_7b_llm/ | 04RR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ej5sn | false | null | t3_14ej5sn | /r/LocalLLaMA/comments/14ej5sn/what_is_the_best_7b_llm/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} |
LLaMA Equivalent Music LM | 1 | [removed] | 2023-06-20T17:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/14egv3g/llama_equivalent_music_lm/ | somethedaring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14egv3g | false | null | t3_14egv3g | /r/LocalLLaMA/comments/14egv3g/llama_equivalent_music_lm/ | false | false | default | 1 | null |
GPT4All now supports Replit model on Apple Silicon (23 tok/sec)! | 31 | 2023-06-20T17:04:42 | https://twitter.com/nomic_ai/status/1671166514903543808 | NomicAI | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14egpd4 | false | {'oembed': {'author_name': 'Nomic AI', 'author_url': 'https://twitter.com/nomic_ai', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Local LLMs in GPT4All are now 2x faster on Apple Silicone ⚡\uf8ff<br>- Supports all LLaMa models<br>- Exclusive support of the Replit model for 23 tok/s code generation enabling local Copilot!<br><br>Watch the 13B parameter Hermes model run at 15 tok/s locally!<a href="https://t.co/28GSI4Y92d">https://t.co/28GSI4Y92d</a> <a href="https://t.co/BOykAAkA2j">pic.twitter.com/BOykAAkA2j</a></p>— Nomic AI (@nomic_ai) <a href="https://twitter.com/nomic_ai/status/1671166514903543808?ref_src=twsrc%5Etfw">June 20, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/nomic_ai/status/1671166514903543808', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14egpd4 | /r/LocalLLaMA/comments/14egpd4/gpt4all_now_supports_replit_model_on_apple/ | false | false | 31 | {'enabled': False, 'images': [{'id': '874e-Hu7RjLoZ2m3gpd5U5gKCYJcGu9LrwjsXGIwn7k', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/BYReTCn6vlaPLR9-3bt5pdrSPiljLX3ATqnroGudgLU.jpg?width=108&crop=smart&auto=webp&s=969cae5340b7acf50e64cea61a547b28ecf02cc7', 'width': 108}], 'source': {'height': 81, 'url': 'https://external-preview.redd.it/BYReTCn6vlaPLR9-3bt5pdrSPiljLX3ATqnroGudgLU.jpg?auto=webp&s=ae4b4f22582944ca9ab1e56641a58ba317d7789b', 'width': 140}, 'variants': {}}]} | ||
I think I need some help setting things up for local usage... | 147 | 2023-06-20T16:29:48 | isoceans | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14efqn6 | false | null | t3_14efqn6 | /r/LocalLLaMA/comments/14efqn6/i_think_i_need_some_help_setting_things_up_for/ | false | false | 147 | {'enabled': True, 'images': [{'id': 'w21ZLyYH4aEydC18Nb8CKILufe6muisPyQQExtAhqOQ', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=108&crop=smart&auto=webp&s=a28967fa4beddb4a8b61b43bb4fd23177fbd3e64', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=216&crop=smart&auto=webp&s=8882bc5859861998300b8d9e5243b1090485cd37', 'width': 216}, {'height': 181, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=320&crop=smart&auto=webp&s=98cb8ddd08baba8a89d04faa30a0eb12e7e3e3ae', 'width': 320}, {'height': 363, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=640&crop=smart&auto=webp&s=367cd5aa5ef91e200010c45e637fd190f2c57eac', 'width': 640}, {'height': 545, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?width=960&crop=smart&auto=webp&s=cd58c72aa7f32d62412ddf78cc30d6c2a71a3ab7', 'width': 960}], 'source': {'height': 577, 'url': 'https://preview.redd.it/b7szgt2h977b1.png?auto=webp&s=745f0251496c9d4e99ef5d154643099194b3e3c0', 'width': 1015}, 'variants': {}}]} | |||
PostgresML adds GPTQ & GGML quantized LLM support for Huggingface Transformers | 9 | 2023-06-20T16:12:45 | https://postgresml.org/blog/announcing-gptq-and-ggml-quantized-llm-support-for-huggingface-transformers | something_cleverer | postgresml.org | 1970-01-01T00:00:00 | 0 | {} | 14efaze | false | null | t3_14efaze | /r/LocalLLaMA/comments/14efaze/postgresml_adds_gptq_ggml_quantized_llm_support/ | false | false | default | 9 | null | |
New AI 'influencers' every other day | 1 | 2023-06-20T15:32:07 | liamsagely | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14ee9e1 | false | null | t3_14ee9e1 | /r/LocalLLaMA/comments/14ee9e1/new_ai_influencers_every_other_day/ | false | false | default | 1 | null | ||
Keeping my LLM on topic when passing context? | 10 | I am currently attempting to pass some context to the GPT4All-13B-Snoozy model in order to answer a question. The problem is that sometimes, the context string I am passing may contain additional irrelevant information in addition to the information needed to answer the given question.
This works fine, except for that the model sometimes includes some irrelevant portion of the context and includes it in its response. So it will answer the question from the given context and then for example add another sentence regarding some additional but irrelevant context that was passed to it.
I am currently using exllama and using the following prompt template:
prompt\_template = f"""Your name is ChatBot, a friendly and helpful AI chatbot. Below is an instruction that describes a task or question, paired with an input that provides further context. Write a response that completes the task.
​
\### Instruction:
{user\_input}
​
\### Input:
{context\_string}
​
\### Response:"""
Here is how I am setting up my model:
config = ExLlamaConfig(model_config_path)
config.model_path = model_path
config.max_seq_len = 2048
model = ExLlama(config)
cache = ExLlamaCache(model)
tokenizer = ExLlamaTokenizer(tokenizer_model_path)
generator = ExLlamaGenerator(model, tokenizer, cache)
generator.settings.token_repetition_penalty_max = 1.2
generator.settings.token_repetition_penalty_sustain = 20
generator.settings.token_repetition_penalty_decay = 50
How might I go about improving the performance in order to reduce the likelihood of my model including irrelevant context in its response? I have tried playing around with parameters and prompt but haven't had any luck so far. | 2023-06-20T15:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/14edvw4/keeping_my_llm_on_topic_when_passing_context/ | kotschi1997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14edvw4 | false | null | t3_14edvw4 | /r/LocalLLaMA/comments/14edvw4/keeping_my_llm_on_topic_when_passing_context/ | false | false | self | 10 | null |
Inference of fine-tuned 8bit model from code | 1 | [removed] | 2023-06-20T15:02:28 | https://www.reddit.com/r/LocalLLaMA/comments/14edigf/inference_of_finetuned_8bit_model_from_code/ | Old_Speech_5665 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14edigf | false | null | t3_14edigf | /r/LocalLLaMA/comments/14edigf/inference_of_finetuned_8bit_model_from_code/ | false | false | default | 1 | null |
Does local LLaMA's use SLI for increased VRAM? | 3 | This may be a dumb question, but I have a 1080 Ti currently, and was thinking about picking up 3 more for cheap (about $500), which would give me 44GB of VRAM in total... Would this be utilized with setup? | 2023-06-20T13:46:19 | https://www.reddit.com/r/LocalLLaMA/comments/14ebleu/does_local_llamas_use_sli_for_increased_vram/ | iamthewhatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ebleu | false | null | t3_14ebleu | /r/LocalLLaMA/comments/14ebleu/does_local_llamas_use_sli_for_increased_vram/ | false | false | self | 3 | null |
First 13B open instruct open llama | 32 | https://huggingface.co/VMware/open-llama-13b-open-instruct
Not sure about you guys but this is huge. First fully open source, commercially usable 13b model. I think it actually performs quite well. Write in the comments your experience.
Edit : The link points to a merged LoRA model btw. Thanks for pointing that out u/2muchnet42day | 2023-06-20T12:58:49 | https://www.reddit.com/r/LocalLLaMA/comments/14eahjk/first_13b_open_instruct_open_llama/ | NeatManagement3 | self.LocalLLaMA | 2023-06-20T13:39:15 | 0 | {} | 14eahjk | false | null | t3_14eahjk | /r/LocalLLaMA/comments/14eahjk/first_13b_open_instruct_open_llama/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'LW_qIW9esjcRqthCWBRqjAO9CO-86W08ZFYUlMXYxdY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=108&crop=smart&auto=webp&s=df2b3715e10c9015a7fd4080bdb2f7c8e1dd6862', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=216&crop=smart&auto=webp&s=c4de1ddba36f92c5b686ff9005e615a207dc3763', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=320&crop=smart&auto=webp&s=84480567555de5383d5b97473a27a1b5546b8141', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=640&crop=smart&auto=webp&s=943003935a40ccd55829665cc24fde89c146fa9e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=960&crop=smart&auto=webp&s=d05988ed70265ddeaeefd8eb03511d16cf07c345', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?width=1080&crop=smart&auto=webp&s=dbfa02413ab6c386c39df0310782509c5992ff93', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vb4kyG-O9YeNfer9fFYp1A1CS3FS9hoU8EsKLDIMok4.jpg?auto=webp&s=e1c0c3550230442198c8e79a9744426cedacaa2f', 'width': 1200}, 'variants': {}}]} |
Sometimes Kobold doesn't show output in real time and waits until it's finished generating the text. Why? | 1 | [deleted] | 2023-06-20T12:23:29 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14e9pts | false | null | t3_14e9pts | /r/LocalLLaMA/comments/14e9pts/sometimes_kobold_doesnt_show_output_in_real_time/ | false | false | default | 1 | null | ||
I've come to rely on LLMs for emotional support and good advice | 191 | 2023-06-20T11:55:46 | veli_joza | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14e94oe | false | null | t3_14e94oe | /r/LocalLLaMA/comments/14e94oe/ive_come_to_rely_on_llms_for_emotional_support/ | false | false | 191 | {'enabled': True, 'images': [{'id': 'Do4MraDRd5UikR_L8WTAKHD_jsuk6mEBhiWGgrX0X70', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?width=108&crop=smart&auto=webp&s=58a843dd89bc3fd6142231c75b82b9f71f090fc4', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?width=216&crop=smart&auto=webp&s=c16c6a56707dc07fac5b1ee326964aae07b63b77', 'width': 216}, {'height': 343, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?width=320&crop=smart&auto=webp&s=93575357cfd3cceb8044adbbe88a9331f9f04010', 'width': 320}, {'height': 687, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?width=640&crop=smart&auto=webp&s=bd3d316ef929d37c19f68e22c3882219e850426b', 'width': 640}], 'source': {'height': 943, 'url': 'https://preview.redd.it/d43p14vgw57b1.png?auto=webp&s=8c3add1f80fa07a6a595b15e2bf142bec14c37bb', 'width': 878}, 'variants': {}}]} | |||
Does anyone know how to remove this body of text from my outputs? | 4 | Hello!
I'm new to LLMs and LLaMA in general, and even newer to this subreddit so I apologise if my question is easily fixable.
I've been messing around with an application called 'GPT4All' and I'm sure a few of you have heard of it. In case you haven't, it's a program/application that allows people to have a local AI chatbot on their computer. The users can also use their own trained AI's with the program, and the program offers many features for programmers and tinkerers alike.
Although GPT4All does have a proper application, complete with a nice UI and such, I'm using their 'gpt4all' python library to make an API to use in a project of mine. You'll see an image below which is the problem I'm having:
​
[\(The body of text I'd like to remove - generated with every AI output\)](https://preview.redd.it/dwxeuexol57b1.png?width=608&format=png&auto=webp&s=631d3278afcf5ddabd802e0c8545c3ea02b3c0e3)
Essentially, I'd like to remove this body of text from the output of the AI every time It's used. The AI used to output much more text than this, but I was able to remove that text from the output through the dissection of 2 of the python files in their library. The text I was able to remove wasn't like this though, it was just simple prompt, instruction, response stuff. I've tried everything I could think of and nothing has been able to remove this text.
I accidentally cut the first line from the image, but it says:
`llama.cpp: loading model from C:\\\\Users\\\\name\\\\.cache\\\\gpt4all\\ggml-gpt4all-l13b-snoozy.bin`
The library folder also contains a folder that has tons of C++ files in it, like `llama.cpp` which is the file mentioned in the line above. I believe the text is being outputted from one of these files but I don't know which one - and I don't know anything about C++.
This program I'm creating, which will include this AI, will run prompts through the AI very often - and lots of them. Therefore I wouldn't like my computer terminal being flooded by this body of text every time it has to give a response to an output.
I don't suppose anyone knows anything that could help me? Thanks so much in advance! | 2023-06-20T11:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/14e8duf/does_anyone_know_how_to_remove_this_body_of_text/ | Cold_Masterpiece_147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e8duf | false | null | t3_14e8duf | /r/LocalLLaMA/comments/14e8duf/does_anyone_know_how_to_remove_this_body_of_text/ | false | false | 4 | null | |
Discussion about Hadware Requirements for local LlaMa | 4 | Hello,
I want to buy a computer to run local LLaMa models. I have read the recommendations regarding the hardware in the Wiki of this Reddit. My Question is, however, how good are these models running with the recommended hardware requirements? Is it as fast as ChatGPT generating responses? Or does it take like 1-5 Minutes to generate a response? It would be great to get some context for the recommendations.
What configuration would I need to properly run a 13B / 30B or 65B model **FAST**? Would an RTX 4090 be sufficient for a 13B and 30B model? What CPU and RAM size do I need? | 2023-06-20T10:56:17 | https://www.reddit.com/r/LocalLLaMA/comments/14e7xf1/discussion_about_hadware_requirements_for_local/ | Plane_Discussion_924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e7xf1 | false | null | t3_14e7xf1 | /r/LocalLLaMA/comments/14e7xf1/discussion_about_hadware_requirements_for_local/ | false | false | self | 4 | null |
Build, Train, and Deploy LLM Agents with Agent M | 0 | [removed] | 2023-06-20T10:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/14e7n3u/build_train_and_deploy_llm_agents_with_agent_m/ | Floatbot_Inc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e7n3u | false | null | t3_14e7n3u | /r/LocalLLaMA/comments/14e7n3u/build_train_and_deploy_llm_agents_with_agent_m/ | false | false | default | 0 | null |
Build, Train and Deploy LLM Agents with Agent M | 0 | 2023-06-20T10:37:04 | Floatbot_Inc | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14e7kqy | false | null | t3_14e7kqy | /r/LocalLLaMA/comments/14e7kqy/build_train_and_deploy_llm_agents_with_agent_m/ | false | false | default | 0 | null | ||
What Type of Research Can Bring Value to the Community? | 21 | Hi guys,
Im a researcher scoping (and struggling) for ideas on what could bring value to the community and field in general.
I've been looking into creating new evaluation methods or ways in which one can evaluate the performance of LLMs more objectively - and several ideas into ethics and biases, but not sure which path would be more useful.
Any ideas or suggestions would be highly appreciated.
​ | 2023-06-20T10:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/14e7buc/what_type_of_research_can_bring_value_to_the/ | XhoniShollaj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e7buc | false | null | t3_14e7buc | /r/LocalLLaMA/comments/14e7buc/what_type_of_research_can_bring_value_to_the/ | false | false | self | 21 | null |
What Model should I use? | 4 | I’m working on a bot that uses an LLM to take on a personality of an insulting, pompous Space pirate AI. So far I can’t find an LLM that can do chatting well. It keeps asking if I need any help. I just want to chat and follow the prompt.
Edit: I’m using the Oobabooga API for the not silly tavern or the webui. | 2023-06-20T10:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/14e77xi/what_model_should_i_use/ | PhraseOk8758 | self.LocalLLaMA | 2023-06-20T10:22:36 | 0 | {} | 14e77xi | false | null | t3_14e77xi | /r/LocalLLaMA/comments/14e77xi/what_model_should_i_use/ | false | false | self | 4 | null |
Discussion about optimal Hardware-Requirements | 0 | [deleted] | 2023-06-20T10:06:27 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14e70bi | false | null | t3_14e70bi | /r/LocalLLaMA/comments/14e70bi/discussion_about_optimal_hardwarerequirements/ | false | false | default | 0 | null | ||
Visual Studio Code extension for WizardCoder | 81 | I just built the Quick Extension for WizardCoder so I can try it out in a real coding environment. I also have access to GitHub Copilot chat. I have to say that WizardCoder is quite good.
PS:
extension :[https://marketplace.visualstudio.com/items?itemName=mzbac.wizardcoder-vsc&ssr=false#overview](https://marketplace.visualstudio.com/items?itemName=mzbac.wizardcoder-vsc&ssr=false#overview) Once installed, simply right click on the IDE and then click "Chat with Wizard Coder".
source code:[https://github.com/mzbac/wizardCoder-vsc](https://github.com/mzbac/wizardCoder-vsc)
self-host wizardCoder api repo: [https://github.com/mzbac/AutoGPTQ-API](https://github.com/mzbac/AutoGPTQ-API) | 2023-06-20T09:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/14e5vwi/visual_studio_code_extension_for_wizardcoder/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e5vwi | false | null | t3_14e5vwi | /r/LocalLLaMA/comments/14e5vwi/visual_studio_code_extension_for_wizardcoder/ | false | false | self | 81 | {'enabled': False, 'images': [{'id': '0X9djfd_bWdlhM07dSftS7otpejs3cdKDPe1jQJUgj0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=108&crop=smart&auto=webp&s=aa64a99f2e0981debb75f458f19bb532977b67ae', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=216&crop=smart&auto=webp&s=083e3114a6852a5e124040613bce8531ef155ca6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=320&crop=smart&auto=webp&s=03c920391d18c9bddaa438cf53f53733d46ac0e7', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=640&crop=smart&auto=webp&s=082cc7ac4112fce0b195f10783895ae411fefedb', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?width=960&crop=smart&auto=webp&s=70fc83e3a941d40fe517ca09cb1db513594a4093', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/7z8TdtSd4Oaij01D_-bwaiL9CJ8THjhrUfJQjAqU9yc.jpg?auto=webp&s=2230eb5ae604d0585b82c59239f795801a639e7b', 'width': 1024}, 'variants': {}}]} |
WizardLM-13B-V1.0-Uncensored | 203 | Following up on [https://www.reddit.com/r/LocalLLaMA/comments/14ckrd6](https://www.reddit.com/r/LocalLLaMA/comments/14ckrd6)
Today I released WizardLM-13B-V1.0-Uncensored
[https://huggingface.co/ehartford/WizardLM-13B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-13B-V1.0-Uncensored)
Based on the [WizardLM/WizardLM\_evol\_instruct\_V2\_196k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) dataset I filtered it to remove refusals, avoidance, bias. [ehartford/WizardLM\_evol\_instruct\_V2\_196k\_unfiltered\_merged\_split](https://huggingface.co/datasets/ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split)
I trained this with Vicuna's FastChat, as the new data is in ShareGPT format and WizardLM team has not specified a method to train it.
The prompt format is Vicuna-1.1 style.
While the instruct layer has been uncensored, LLaMA itself has opinions, and will resist instructions it disagrees with. You must overcome that with your own wit and prompt engineering.
Finally, you are responsible for the content you create using this model. It is uncensored and can produce illegal and unethical content, if you ask it to. If deploying this in a service, I would recommend you train a LoRA to implement your own alignment to prevent unwanted behavior.
[u/The-Bloke](https://www.reddit.com/u/The-Bloke/) has kindly quantized this model as a service to the community. Respect.
[https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GPTQ](https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GPTQ) [https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GGML](https://huggingface.co/TheBloke/WizardLM-13B-V1.0-Uncensored-GGML)
33b is forthcoming. Should be finished this weekend.
Thank you to the open-source community and everyone who helped me.
Also thanks to [chirper.ai](https://chirper.ai) for sponsoring some of my compute. | 2023-06-20T08:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/14e4sw8/wizardlm13bv10uncensored/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 14e4sw8 | false | null | t3_14e4sw8 | /r/LocalLLaMA/comments/14e4sw8/wizardlm13bv10uncensored/ | false | false | self | 203 | {'enabled': False, 'images': [{'id': '_K7qWtA4Mm1G0sMWCNN7AsiduSrbTwTCBzLbjFUqJpY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=108&crop=smart&auto=webp&s=5f4b7b7002f99e0fd7ba9816582ebab7510b9467', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=216&crop=smart&auto=webp&s=210c76560aefd284e83fd708ea783224c3ae2289', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=320&crop=smart&auto=webp&s=9f8879ef3e2e551a2e175a5ae2b22d1648c13e6d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=640&crop=smart&auto=webp&s=1c652de7471a344fef3856c9e3438fc26115dbee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=960&crop=smart&auto=webp&s=60ba81d523e5071bf043289f80fb6a4c4e190829', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?width=1080&crop=smart&auto=webp&s=6fb42256a0028bfc942616c655741e9c684096ed', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wnwk-Px3Wy0EFWih4C-1t6kgGAA0uxezHnl5eO8GD1c.jpg?auto=webp&s=f8a6e8695b03c0105904baa35ea7f2bd68b320a6', 'width': 1200}, 'variants': {}}]} |
Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models | 45 | >Generating intermediate steps, or Chain of Thought (CoT), is an effective way to significantly improve language models' (LM) multi-step reasoning capability. However, the CoT lengths can grow rapidly with the problem complexity, easily exceeding the maximum context size. Instead of increasing the context limit, which has already been heavily investigated, we explore an orthogonal direction: making LMs divide a problem into multiple contexts. We propose a new inference framework, called Recursion of Thought (RoT), which introduces several special tokens that the models can output to trigger context-related operations. Extensive experiments with multiple architectures including GPT-3 show that RoT dramatically improves LMs' inference capability to solve problems, whose solution consists of hundreds of thousands of tokens.
https://preview.redd.it/3pbmoc4ap47b1.jpg?width=2388&format=pjpg&auto=webp&s=c55a53a702747e5247a0d6b7c28f7d91c0954511
Paper: [https://arxiv.org/abs/2306.06891](https://arxiv.org/abs/2306.06891)
Code: [https://github.com/soochan-lee/rot](https://github.com/soochan-lee/rot) | 2023-06-20T07:53:23 | https://www.reddit.com/r/LocalLLaMA/comments/14e4mg6/recursion_of_thought_a_divideandconquer_approach/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e4mg6 | false | null | t3_14e4mg6 | /r/LocalLLaMA/comments/14e4mg6/recursion_of_thought_a_divideandconquer_approach/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | |
How do I run Stable Diffusion and LLMs from my PC on my mobile device? Offline and private ways? | 3 | Posting here after a recommendation from r/selfhosted .
Currently I'm on Windows 10 but I'm switching to Pop!\_Os in the next few days.
And I have a Samsung Galaxy Tab E 9.6' with an Android 4.4.4 custom rom, though I can switch to a 7.1.2 custom rom if needed, and root it too.
If I have a long usb cable, is that the best option to connect my tablet to my pc? Would this be offline, private, secure, etc.? I came across [this stack overflow answer](https://stackoverflow.com/questions/9887621/accessing-localhost-of-pc-from-usb-connected-android-mobile-device) that talked about this. I don't use my tablet outside my room, so having it not be wireless isn't a big deal to me if it's the best option.
I've seen people talk about lots of stuff that I don't fully get. --share and --listen and --gradio-auth command-args and gradio and public urls, addresses that start with 127. or 192., using openvpn on the mobile device, termux, tailscale, and more.
I don't know what to do or follow. I've also seen people share how to have Stable Diffusion always generate and store images in the mobile device's sd card, instead of storing it on the pc. But I can't find the instructions again.
I'm also hoping multimodals like LLaVa can take files from that sd card as input too. | 2023-06-20T07:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/14e4arx/how_do_i_run_stable_diffusion_and_llms_from_my_pc/ | ThrowawayProgress99 | self.LocalLLaMA | 2023-06-26T23:34:32 | 0 | {} | 14e4arx | false | null | t3_14e4arx | /r/LocalLLaMA/comments/14e4arx/how_do_i_run_stable_diffusion_and_llms_from_my_pc/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'nfayPavSUB5ngYv6-19UHNBThsXfcLIDQl4HkEe3Cv0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/QgPvRTknlY3rMNDqH1k4I37XGiq9tZF_FsygC_Xht4o.jpg?width=108&crop=smart&auto=webp&s=b1c8a90e5690a7186afdb269ad05279551994d09', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/QgPvRTknlY3rMNDqH1k4I37XGiq9tZF_FsygC_Xht4o.jpg?width=216&crop=smart&auto=webp&s=533bd055cdae7998d1b8f9cd9d7dedabc1715bda', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/QgPvRTknlY3rMNDqH1k4I37XGiq9tZF_FsygC_Xht4o.jpg?auto=webp&s=8cd5e918e2bde6ca72d4445d6fc007f203689799', 'width': 316}, 'variants': {}}]} |
PirvateGPT: Error loading source documents? | 1 | [removed] | 2023-06-20T07:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/14e47y0/pirvategpt_error_loading_source_documents/ | card_chase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e47y0 | false | null | t3_14e47y0 | /r/LocalLLaMA/comments/14e47y0/pirvategpt_error_loading_source_documents/ | false | false | default | 1 | null |
Best free open-source LLM to use for the recommendation task | 1 | [removed] | 2023-06-20T06:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/14e3muo/best_free_opensource_llm_to_use_for_the/ | lahaine93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e3muo | false | null | t3_14e3muo | /r/LocalLLaMA/comments/14e3muo/best_free_opensource_llm_to_use_for_the/ | false | false | default | 1 | null |
Unstructured text to graphDB | 1 | [removed] | 2023-06-20T06:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/14e2wq3/unstructured_text_to_graphdb/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14e2wq3 | false | null | t3_14e2wq3 | /r/LocalLLaMA/comments/14e2wq3/unstructured_text_to_graphdb/ | false | false | default | 1 | null |
Creating an Org Knowledge Management System | 10 | Hi All,
Kinda new to this space yet tried a few models.
I would like to get direction on how to create a personalized knowledge management system that allows users to type and get answers.
I want to train this model with organizational data and keep it privately hosted. Happy to create department-wise models if all going to be an overloaded model. example: one for HR, one for IT, etc. The major problem I want to address, keep the org knowledge in-house, so when someone leaves the org the next person can just ask from the model about how previously done, ongoing work etc.
If you can suggest a
01. Good model for this
02. How to ingest data to model with pdfs, word, and other doc sources.
03. How to ingest data from Teams transcripts
Want to create this model as one knowledgeable person in the team.
TY. <3 | 2023-06-20T02:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/14dyimz/creating_an_org_knowledge_management_system/ | SpongeBob_000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dyimz | false | null | t3_14dyimz | /r/LocalLLaMA/comments/14dyimz/creating_an_org_knowledge_management_system/ | false | false | self | 10 | null |
Making a LoRA for WizardLM | 0 | [deleted] | 2023-06-20T02:20:01 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14dy795 | false | null | t3_14dy795 | /r/LocalLLaMA/comments/14dy795/making_a_lora_for_wizardlm/ | false | false | default | 0 | null | ||
Try the new RWKV world 7B, it's incredible. | 32 | [deleted] | 2023-06-20T02:19:13 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14dy6of | false | null | t3_14dy6of | /r/LocalLLaMA/comments/14dy6of/try_the_new_rwkv_world_7b_its_incredible/ | false | false | default | 32 | null | ||
Alternative C# bindings in development, anyone interested? | 12 | **Edit: The bulk of the code can be found in this subdirectory.**
**https://github.com/MrJackSpade/Chie/tree/main/LLamaSharp/LLama**
Was just wondering if there's anyone who wants to contribute to a chunk of a project I'm working on.
I wasn't particularly satisfied with the state of the current LlamaSharp project, seeing as how its pretty much a straight up 1:1 port of the C++ code. It lacked a lot of functionality that I was hoping to leverage, so I ended up writing my own to be more in line with the standard design patterns of the .NET Framework, and following the MS code style guidelines.
As far as progress goes, its been great. I've had no problem keeping it up with the newest releases of Llama.cpp, and I've added an assload of new features, like
1. Interfaces for injecting custom code into various stages of the inference process including context rollovers, token selection, and sampling
2. Interfaces for asynchronously processing data post-return (esp context cleanup) to allow evaluation while the user is typing a response, instead of performing the operations synchronously while the user is waiting.
3. Token metadata support, allowing tokens to be saved and loaded with arbitrary metadata to be used at various points during the execution pipeline (denoting tokens as temporary, user created, or returned by the model for example)
as well as some other nice QOL stuff, like defining a "TokenCollection" class that supports operations like splitting, joining, etc, abstracting the context evaluation so that changes evaluations are always tracked and performed "as needed" removing the need to self-calculate evaluation stuff.
All in all, just a ton of really good stuff that allows for *intensely* customizing the experience of inference through C# without actually needing to make any modifications to the underlying code, by treating the LlamaModel in the same way one would treat something like a WebApplication. Configure, dependency inject, and instantiate.
This module would (does) make an absolutely great basis for a WebApi, allowing for a very customizable experience. I've actually done that myself. The problem is, while I've done my best to maintain proper isolation between layers, there is definitely some bias in my implementation. Its a little rough around the edges. At the same time, I feel like its kind of a waste for me to be the only one actually leveraging this, when it seems like so many other people are just trying to work with unsavory API's provided by other clients, or reinventing the wheel by using the existing classes.
So, to the fucking point finally, I was wondering if theres anyone else on the Sr+ side of development who wants to help with a little bit of cleanup so that other people can leverage what I've spend the past month learning/writing. Help me get out of that "my use case" bias, clean up the edges a little bit, and possibly finalize packing it up into an easy to use/consume package or web-api. Seems like it could be a win-win. Community gets a clean and easy API, and I get someone to call my bullshit if I start cutting corners. | 2023-06-20T02:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/14dxznf/alternative_c_bindings_in_development_anyone/ | mrjackspade | self.LocalLLaMA | 2023-06-20T17:49:53 | 0 | {} | 14dxznf | false | null | t3_14dxznf | /r/LocalLLaMA/comments/14dxznf/alternative_c_bindings_in_development_anyone/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '7F21HkIZ7kiK1HC_9WGRzzNC45LauG9S2AgOqNfW6uw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=108&crop=smart&auto=webp&s=415758c5775304ae97cc5c64096e4bce799a3ed9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=216&crop=smart&auto=webp&s=be1fcca89fb90da77c48ef6f7fcb6dffe2952290', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=320&crop=smart&auto=webp&s=b706056d3c3f7dec003699eb0dbc12017b18946f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=640&crop=smart&auto=webp&s=359f7ad4a3a9ee5d85993c4b460306c7274e7f54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=960&crop=smart&auto=webp&s=eba7a4da643214f584af74e3ce3b610e1507b9d5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?width=1080&crop=smart&auto=webp&s=b2e7e8f1ebbb3ae97aaa7190a00c802276b1c36d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/W8FMnClGeDLG9F8KU-brXD81et0g0p4upXHNFxEA2fc.jpg?auto=webp&s=617563c29e6637b3d2962f3d0d11a2f49a921968', 'width': 1200}, 'variants': {}}]} |
Anyboty Client - Alpha 0.1.1 Release - GGMLv3 Compatible Easy to Use Client | 10 | 2023-06-20T00:55:51 | https://anyboty.itch.io/anyboty | Pathos14489 | anyboty.itch.io | 1970-01-01T00:00:00 | 0 | {} | 14dwcft | false | null | t3_14dwcft | /r/LocalLLaMA/comments/14dwcft/anyboty_client_alpha_011_release_ggmlv3/ | false | false | 10 | {'enabled': False, 'images': [{'id': '7aKyCAIq4SAnFwxYHDpdcXu8UleY-vwKVX4Jat_6qLo', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/2EJ_h26BboK-gACvjoOjMgFP9jEDNijqh8i2nIUUXB4.jpg?width=108&crop=smart&auto=webp&s=712fb68d4db7b695d2a8b5bd76b9e9cde7ff9f17', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/2EJ_h26BboK-gACvjoOjMgFP9jEDNijqh8i2nIUUXB4.jpg?width=216&crop=smart&auto=webp&s=0dd5dce6cd293490f5890109916f87d3ce0b345d', 'width': 216}, {'height': 253, 'url': 'https://external-preview.redd.it/2EJ_h26BboK-gACvjoOjMgFP9jEDNijqh8i2nIUUXB4.jpg?width=320&crop=smart&auto=webp&s=e15a742eb944d64a0cbb84348b851848f94560d6', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/2EJ_h26BboK-gACvjoOjMgFP9jEDNijqh8i2nIUUXB4.jpg?auto=webp&s=6b37f441be7660bc53ee74b94931fa3a6d7c76d8', 'width': 630}, 'variants': {}}]} | ||
Finetuned LLM not getting better performance | 3 | I have been trying hard to get Vicuna 7B to be a better assistant by finetuning it on my own documents, but results so far have been weak. It doesn't seem like next word prediction on a new high quality corpus is enough for it to leverage the knowledge on a chat bot format. Has anyone faced this same problem? We could share some insights here on what works and what doesn't when trying to make a chat bot specialize on a niche subject. | 2023-06-20T00:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/14dvmkr/finetuned_llm_not_getting_better_performance/ | kreuzguy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dvmkr | false | null | t3_14dvmkr | /r/LocalLLaMA/comments/14dvmkr/finetuned_llm_not_getting_better_performance/ | false | false | self | 3 | null |
Better/Good Uncensored 30B/65B model for RP? | 6 | Hi there! Wondering which models are being used for RP that works good, while also being uncensored.
30B there seems to be a lot, but at 65B there's airoboros, guanaco, robin, alpaca-vicunlocked, but I'm not sure if there's more. There seems to be very few of them.
For 30B, I know those same models I mentioned that are on 65B.
Any suggestion? Thanks! | 2023-06-20T00:21:53 | https://www.reddit.com/r/LocalLLaMA/comments/14dvl34/bettergood_uncensored_30b65b_model_for_rp/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dvl34 | false | null | t3_14dvl34 | /r/LocalLLaMA/comments/14dvl34/bettergood_uncensored_30b65b_model_for_rp/ | false | false | self | 6 | null |
QLoRA Dataset | 1 | [removed] | 2023-06-19T23:18:18 | [deleted] | 2023-06-19T23:23:11 | 0 | {} | 14du2vv | false | null | t3_14du2vv | /r/LocalLLaMA/comments/14du2vv/qlora_dataset/ | false | false | default | 1 | null | ||
Any tips for having models adhere to chat formatting (ie speech in quotes and actions in asterisks) when using KoboldAI with SillyTavern? | 6 | I’ve been using Sillytavern with koboldAI running via runpod, and am just wondering if there are any settings or improved prompts I can adjust in ST to improve character’s adherence to the “chat formatting” in their replies.
I’ve been using 4bit Airoboros 33B and VidUnlocked 30b and really like the content of the results, but they don’t adhere to the formatting, often inserting extra slashes and multiple asterisks, etc.
Airoboros also often includes the word “USER:” and even writes for my character. I’m able to include “USER:” as a stop sequence, but it still spends time generating those replies even if ST ignores it.
4bit Chronos 33B is great and adheres to the formatting, but the character’s speech is often very literal and I would prefer to use the other models If they would adhere to the formatting.
I know this is largely due to the data they’re trained on, but am just wondering If there is anything I’m missing in SillyTavern to improve or force their adherence to this formatting, or if these models have optimal inclusions for system prompt, input sequence, output sequence, system sequence, stop sequence, and separator.
Any advice or resources would be much appreciated. | 2023-06-19T23:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/14dtv5r/any_tips_for_having_models_adhere_to_chat/ | BangkokPadang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dtv5r | false | null | t3_14dtv5r | /r/LocalLLaMA/comments/14dtv5r/any_tips_for_having_models_adhere_to_chat/ | false | false | self | 6 | null |
What are the current best uncensored models? | 44 | \[Bottom text\] | 2023-06-19T23:03:51 | https://www.reddit.com/r/LocalLLaMA/comments/14dtqw2/what_are_the_current_best_uncensored_models/ | TheZorro_Sama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dtqw2 | false | null | t3_14dtqw2 | /r/LocalLLaMA/comments/14dtqw2/what_are_the_current_best_uncensored_models/ | false | false | self | 44 | null |
Conda environment is empty. Press any key to continue . . . "Press key" The windows closes and nothing happens. | 1 | [removed] | 2023-06-19T22:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/14dtjq0/conda_environment_is_empty_press_any_key_to/ | Vanilla_Vampi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dtjq0 | false | null | t3_14dtjq0 | /r/LocalLLaMA/comments/14dtjq0/conda_environment_is_empty_press_any_key_to/ | false | false | default | 1 | null |
Tutorial - train your own llama.cpp mini-ggml-model from scratch! | 156 | 2023-06-19T22:26:44 | https://asciinema.org/a/592303 | Evening_Ad6637 | asciinema.org | 1970-01-01T00:00:00 | 0 | {} | 14dstqm | false | null | t3_14dstqm | /r/LocalLLaMA/comments/14dstqm/tutorial_train_your_own_llamacpp_miniggmlmodel/ | false | false | 156 | {'enabled': False, 'images': [{'id': '1AWydW3gz9Q0_fQz7g4w5Al-pPtgsEMV3SO7mCTxX4c', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=108&crop=smart&auto=webp&s=82435af198450db42a7f193b133cedaeca62cbc8', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=216&crop=smart&auto=webp&s=dc8edd636b7e81781884bce193c34f9e35458d29', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=320&crop=smart&auto=webp&s=d6642e566259b40d355eb7ccdfc55f7d34f3e570', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=640&crop=smart&auto=webp&s=3a07e2a3827b1e18638719d45503d665559fd766', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=960&crop=smart&auto=webp&s=60ad1dcacb312663bb6d359ea185c8aecf2668f8', 'width': 960}, {'height': 718, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?width=1080&crop=smart&auto=webp&s=adf9055841ae9dea001a71b4a0030fc13de853d8', 'width': 1080}], 'source': {'height': 2156, 'url': 'https://external-preview.redd.it/yS_hQbNoxSkIXCNqZVPaNeUVdG-auQ3TxPwcLKmlrOg.jpg?auto=webp&s=33639de986c0c9e29b4296e3bda23d73fcfb2822', 'width': 3239}, 'variants': {}}]} | ||
llama.cpp - Training your own mini ggml model from scratch works! | 1 | [deleted] | 2023-06-19T22:24:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14dsrer | false | null | t3_14dsrer | /r/LocalLLaMA/comments/14dsrer/llamacpp_training_your_own_mini_ggml_model_from/ | false | false | default | 1 | null | ||
Adapter merge for Flacon 40b instruct model (loaded in 8bit) | 2 | Is it possible to merge a finetuned adapter on falcon 40b loaded in ( 8 bit ) mode? | 2023-06-19T22:19:30 | https://www.reddit.com/r/LocalLLaMA/comments/14dsmvy/adapter_merge_for_flacon_40b_instruct_model/ | unoshow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dsmvy | false | null | t3_14dsmvy | /r/LocalLLaMA/comments/14dsmvy/adapter_merge_for_flacon_40b_instruct_model/ | false | false | self | 2 | null |
Best LLM for legal analysis | 4 | I'm working on a community project to use (privacy-cleansed) legal data to fine-tune a model, with the end-goal of making the law more accessible to the public. Between us we've a decent range of tech skills, and a benefactor is providing a limited budget (a few thousand to get to launch). We're thinking of building in AWS with SageMaker to keep it manageable, though still looking at alternatives.
However we're struggling to get our heads around the welter of LLMs available, with more every minute it seems. Are some intrinsically better suited to tasks like legal analysis? Also, we've found a bunch of proprietory legal tools for tasks like jargon-busting, but not so many open source. Any tips? | 2023-06-19T21:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/14drnvd/best_llm_for_legal_analysis/ | edinburgh1975 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14drnvd | false | null | t3_14drnvd | /r/LocalLLaMA/comments/14drnvd/best_llm_for_legal_analysis/ | false | false | self | 4 | null |
Complete noob that's been lurking for weeks but still lost: billion questions. Tutorial links or direct answers are both appreciated. | 3 | Pretty much the title - I have no clue wtf I'm doing, other than that I'm going to have to use that thing that one person made, that splits the model's vram usage across multiple GPUs, with more of the processing still having to run on a single GPU (thus, the thought to use a 3090 for that "main" card, with P40s as the ram support).
So basically, my questions are:
1. Is one powerful(ish?) GPU plus three "okay" GPUs a good strategy?
2. Does having enough vram in total to easily fit the entire model, mean it won't be running on CPU/system ram?
3. If the answer to 2 is "no," then how much support do the cards need from the system? How much CPU and ram do I need for those to not bottleneck whatever isn't handled by the GPUs? Is a Ryzen 3700x 8 core adequate, or do I need a 16 core like the Ryzen 3950x or 5950x?
4. Is 128gb of dual channel ram enough? Do I need as much system ram as the model size, even when the model fits across the total GPU vram capacity?
5. Would a Ryzen 1950x with quad channel ram be better?
6. Does ecc ram make a difference? It's cheaper per gb, but slower mhz.
7. What kind of generation speed performance could I expect from a system of this spec running a 65b model...? Can it converse in real time like [c.ai](https://c.ai) and GPT? I have zero point of reference.
8. How much context can I have with a 65b and 96gb of vram spanned across four cards?
9. How much context do I want to have...? I don't know what it does. Does it make it remember more of the previous messages in our current conversation?
10. Will the P40s be bottlenecking the 3090 somehow, even if they can share the load properly?
11. Are there 65b models good at nsfw roleplay. I just want something at least as good as [c.ai](https://c.ai) at sounding "person-y," without that weird "my butler is pretending to be Exile from road rovers, not actually Exile" vibe that jailbroken ChatGPT has.
12. Question 7, but with 30b models.
13. Are there tutorial posts I should read that would cover most of this. I've been lurking here for a few weeks trying to read everything, but I have a lot of gaps in understanding regarding the basics.
14. Can the AI running on my big rig be used easily from my laptop? I know the answer is yes, but I'm not versed in Linux and SSHing yet either...if that's even what I have to do. I assume there's an interface thing I can somewhat easily use, since that's what every consumer AI service does?
I think that's everything. I've never used ANY of this before, just [c.ai](https://c.ai) and jailbroken ChatGPT for rp, and NovelAI for images (though that's a whole other task to figure out, that seems easier). Any help is appreciated, sorry if this is considered "low effort." I just want to get a rig and hit the ground running. | 2023-06-19T20:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/14dpxj1/complete_noob_thats_been_lurking_for_weeks_but/ | CanineAssBandit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dpxj1 | false | null | t3_14dpxj1 | /r/LocalLLaMA/comments/14dpxj1/complete_noob_thats_been_lurking_for_weeks_but/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'veE04iaMbgI4yLvLGj2IZNV7UQfnq3n_7BmxP28dCd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=108&crop=smart&auto=webp&s=0e594332595e82a5118e08d35a2cd140c18d7571', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=216&crop=smart&auto=webp&s=e3c279ba2d1ae1f9f2fba4b328e22f6615821b5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=320&crop=smart&auto=webp&s=e635acb6bc693890c232162908676cb6478c120c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=640&crop=smart&auto=webp&s=59ba293d6adf4cce410b43b5d28ae104922701b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=960&crop=smart&auto=webp&s=fc7dc69af838ec53e60b3e88fec5e67c8759495b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=1080&crop=smart&auto=webp&s=e50a4f1b7c99e137a2ab4d5e2d573bb75becd067', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?auto=webp&s=b8597825d9b212133d3dbd9ee26fd0dcc2a84677', 'width': 1200}, 'variants': {}}]} |
Loading large models over more than one GPU -- neither auto-devices nor manually assigning memory seems to do its job in oobabooga | 4 | [removed] | 2023-06-19T20:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/14dpqrb/loading_large_models_over_more_than_one_gpu/ | tenmileswide | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dpqrb | false | null | t3_14dpqrb | /r/LocalLLaMA/comments/14dpqrb/loading_large_models_over_more_than_one_gpu/ | false | false | default | 4 | null |
Proof of Concept: Local LLM to execute terminal comands (Here GPT-2) | 37 | I have written a very simple script as a proof of concept that shows the possibilities of local llms that could do work for you (even a small gpt-2 is possible).
Here is an asciinema record:
[ai2-shell](https://asciinema.org/a/592283)
---
Just for demonstration purpose I wanted something fast, so here I used gpt-2 medium in ggml. And I q8 quantized it, since I found this model faster than q5 or q4 and it is smarter in context of acting like a linux terminal.
In my case the inferece took 5 secs. I assume using a 3B llama with llama.cpp would be faster and more accurate/smarter.
[Here you'll find the script](https://codeberg.org/mountain/llm_scripts/src/branch/main/ai2.sh)
Don't forget to set the path to your model and to an inference application (llama.cpp, ggml/gpt-2, etc).
EDIT: how can I make a preview to the asciinema link btw? 😅 I thought something with `![ ... ]` but doesnt work..
---
**EDIT EDIT AND CAVEAT**
**As another user has already pointed out: please be aware of the great dangers and risk of such an application. This demonstration is intended to show that small models ARE already capable of such a task - so it is not a waste of time to start experimenting and tinker with it.**
**BUT:**
**=== this should be done exclusively (!) in a virtual environment ===**
**and you should never never never (!) run code that you do not understand 100% yourself.**
**Really, NEVER!**
**It is not difficult that you could irreversibly damage your own system with lack of knowledge.** | 2023-06-19T20:17:40 | https://www.reddit.com/r/LocalLLaMA/comments/14dpf44/proof_of_concept_local_llm_to_execute_terminal/ | Evening_Ad6637 | self.LocalLLaMA | 2023-06-20T00:14:52 | 0 | {} | 14dpf44 | false | null | t3_14dpf44 | /r/LocalLLaMA/comments/14dpf44/proof_of_concept_local_llm_to_execute_terminal/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'Lbi610UzLW-T4V3g8HL0qtAV94Hkml7JQXOWxktu6aE', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=108&crop=smart&auto=webp&s=0756a8326bfd71052248f788462fa4b49c1bc328', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=216&crop=smart&auto=webp&s=41498d6420a049adcddc80562cd0c1688fb8a034', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=320&crop=smart&auto=webp&s=2213db2a007e7a8bd44afcc1b26ffe373a74f49a', 'width': 320}, {'height': 433, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=640&crop=smart&auto=webp&s=4f288b6045844039616051588a86185e4b1d8b33', 'width': 640}, {'height': 650, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=960&crop=smart&auto=webp&s=b5697260e7153e0da02a1d08470f464242dc9104', 'width': 960}, {'height': 732, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?width=1080&crop=smart&auto=webp&s=32efdcf8be6c6d349f78c3d1151fd332dceede4b', 'width': 1080}], 'source': {'height': 2196, 'url': 'https://external-preview.redd.it/r4gQ-EIuvZdiYin01mW2La2hgWviuHKvrMwfoIMy0TM.jpg?auto=webp&s=eebc24fff8c8375ee44d71d80813a9ceb67c749a', 'width': 3239}, 'variants': {}}]} |
I'm new at the party. Can I make something with a GeForce GTX 1050 ti? Or is that too bad? | 1 | [removed] | 2023-06-19T19:59:44 | https://www.reddit.com/r/LocalLLaMA/comments/14doxz0/im_new_at_the_party_can_i_make_something_with_a/ | thiagosanches | self.LocalLLaMA | 2023-06-19T20:15:42 | 0 | {} | 14doxz0 | false | null | t3_14doxz0 | /r/LocalLLaMA/comments/14doxz0/im_new_at_the_party_can_i_make_something_with_a/ | false | false | default | 1 | null |
Text Generation Web UI for Chatbots (Model and Parameter Discussion) | 2 | So I just recently set up Oobabooga's [Text Generation Web UI](https://github.com/oobabooga/text-generation-webui) (TGWUI) and was playing around with different models and character creations within the UI. I just followed the basic example character profile that is provided to create a new character to chat with (not for providing knowledge like an assistent, but just for having fun with interesting personas). I was really pleased with what both [LLaMA-7b](https://huggingface.co/decapoda-research/llama-7b-hf) (loading in 8-bit) and [\-13b](https://huggingface.co/decapoda-research/llama-13b-hf) (loading in 4-bit mode) were producing during my chat sessions. I also tried [WizardLM-7B-Uncensored](https://huggingface.co/ehartford/WizardLM-7B-Uncensored) and [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6b) in "Instruct+Chat", but also in normal "Chat" modes. Sometimes I liked what they were producing, but of course they work a bit differently as far as I understand (they are instruct models).
Now I have three questions and feel free to answer whatever you want:
* What is a good strategy and format for creating new characters in TGWUI? Is it better to write sentences for a personality or are keywords enough? How much example conversation is useful?
* Are there any models that my PC\* can manage, you are very pleased with when it comes to creating characters to have fun with (e.g. also NSFW content)?
* Can you recommend parameter settings for AI chat partner purposes, e.g. temperature or repetition\_penalty? I know I should play around with myself, but maybe you found some sweet spot already.
\*My specs: RTX 3060 12 GB, 64 GB RAM, some i7 CPU | 2023-06-19T18:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/14dmyh9/text_generation_web_ui_for_chatbots_model_and/ | psi-love | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14dmyh9 | false | null | t3_14dmyh9 | /r/LocalLLaMA/comments/14dmyh9/text_generation_web_ui_for_chatbots_model_and/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '7UgRrSr917wz5Gsb3o0qEv_Yb5qwB1G4LtQ1PA31Xrg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=108&crop=smart&auto=webp&s=4a6b19d5c613e82c4afceb537d2c122a0ae9dd17', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=216&crop=smart&auto=webp&s=e897374e91e1a66b5c505983ace637b2cbea29c9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=320&crop=smart&auto=webp&s=09f3e80fd4f1438c273736c3911320ae16a4ce2c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=640&crop=smart&auto=webp&s=5134bd9bce865ba6fbb4043bfd7ef605d1893325', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=960&crop=smart&auto=webp&s=3abbcc05dfda0a1ccaeb4dc606b65639f810d6a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?width=1080&crop=smart&auto=webp&s=01e1d5c7768664c6f4affb0db030a19c9f4664c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5yu8f94i8qqa8nDxytOB-UYUFPeMP9Vw3f1_92hfQYo.jpg?auto=webp&s=551a3dc08be400edc0823ff03db4c44787867756', 'width': 1200}, 'variants': {}}]} |
Why don't evaluation benchmarks use a proper prompt formatting? | 14 | I recently delved into the evaluation process of models on the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and something caught my attention.
If you look into, e.g., [Vicuna training code](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train.py#L96), you may notice that it was trained using prompts that adhere to a strict conversational template:
`A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <user message> ASSISTANT: <assistant_message></s>`
However, when you look at the [MMLU benchmark evaluation code](https://github.com/declare-lab/instruct-eval/blob/c85b7532149eacf3646734cfbb06af1de2d27765/mmlu.py#L153) (which is most likely similar to the one used in the Open LLM leaderboard), you'll notice that it doesn't use any conversational formatting and instead uses a straightforward few-shot prompt.
I wonder why. Wouldn't these models, such as Vicuna, demonstrate better performance if the prompts were appropriately formatted? And why does Vicuna still significantly improve over LLaMA-13B even if there is such a huge shift between training and testing data? | 2023-06-19T17:58:24 | https://www.reddit.com/r/LocalLLaMA/comments/14dlo95/why_dont_evaluation_benchmarks_use_a_proper/ | pilot7747 | self.LocalLLaMA | 2023-06-19T18:09:40 | 0 | {} | 14dlo95 | false | null | t3_14dlo95 | /r/LocalLLaMA/comments/14dlo95/why_dont_evaluation_benchmarks_use_a_proper/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.