title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4x RTX 6000 ada v.s. 1x H100 | 1 | [removed] | 2023-11-20T16:20:06 | https://www.reddit.com/r/LocalLLaMA/comments/17zs7da/4x_rtx_6000_ada_vs_1x_h100/ | sirvy3tr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zs7da | false | null | t3_17zs7da | /r/LocalLLaMA/comments/17zs7da/4x_rtx_6000_ada_vs_1x_h100/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'CiEqvy_nn8hXZNA-Zi6mhXeaPJ3GjUudlgU_9X9fJ_U', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/dsNVug7yrz8Ek_en7Pu19vzTW3tME-GRdmcLFxpvHcg.jpg?width=108&crop=smart&auto=webp&s=053669448de46173f1233dff25a458e8765f1d08', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/dsNVug7yrz8Ek_en7Pu19vzTW3tME-GRdmcLFxpvHcg.jpg?width=216&crop=smart&auto=webp&s=ccb515007d2a55f03e2230d23c652577761cf6c6', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/dsNVug7yrz8Ek_en7Pu19vzTW3tME-GRdmcLFxpvHcg.jpg?width=320&crop=smart&auto=webp&s=d90ffe42b5c8bc8fa6a1ed635f1649208c4d5ba8', 'width': 320}, {'height': 353, 'url': 'https://external-preview.redd.it/dsNVug7yrz8Ek_en7Pu19vzTW3tME-GRdmcLFxpvHcg.jpg?width=640&crop=smart&auto=webp&s=e67f8c19bc055eb373b504a1b3b5d021e24afdcd', 'width': 640}], 'source': {'height': 497, 'url': 'https://external-preview.redd.it/dsNVug7yrz8Ek_en7Pu19vzTW3tME-GRdmcLFxpvHcg.jpg?auto=webp&s=2581de1509a60de8b8b1f89e4a10faa9eb7f2179', 'width': 900}, 'variants': {}}]} |
"The Palace Coup" the newest NLU, Q&A bench-marking pico-dataset. | 1 | [removed] | 2023-11-20T15:07:12 | https://www.reddit.com/r/LocalLLaMA/comments/17zqkaz/the_palace_coup_the_newest_nlu_qa_benchmarking/ | laca_komputilulo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zqkaz | false | null | t3_17zqkaz | /r/LocalLLaMA/comments/17zqkaz/the_palace_coup_the_newest_nlu_qa_benchmarking/ | false | false | self | 1 | null |
Seeking Advice on Automating Response Evaluation for My Chilean Law QA-Chatbot | 2 | Hello everyone,
I'm back with an update and a new query regarding the LLM-based QA-chatbot I'm developing for Chilean law. As a reminder, the chatbot will be primarily in Spanish, and I'm using a tech stack that includes Langchain, LLaMA 2 7B/13B (llama-cpp-python), Streamlit, and ChromaDB, focusing on open-source or free-license software.
My current focus is on evaluating the chatbot's responses. I'm creating a dataset based on question-answer guides from the Library of Congress of Chile. This involves selecting a number of questions and their corresponding answers, posing these questions to the chatbot, and then assessing the accuracy of its responses. The evaluation criteria include correctness of information, absence of hallucinations, and relevancy to the question asked.
Originally, I planned to conduct this evaluation manually. However, I'm now seeking advice on how to make this process more efficient and possibly automated. What tools or methods can I use to streamline the evaluation of the chatbot's responses? Are there any best practices or software that can help in comparing the chatbot's answers with the dataset for accuracy and relevance?
Any insights or suggestions from this community would be greatly appreciated!
Emer | 2023-11-20T15:05:10 | https://www.reddit.com/r/LocalLLaMA/comments/17zqioy/seeking_advice_on_automating_responses_evaluation/ | emersounds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zqioy | false | null | t3_17zqioy | /r/LocalLLaMA/comments/17zqioy/seeking_advice_on_automating_responses_evaluation/ | false | false | self | 2 | null |
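Editorial note: one low-effort way to automate the evaluation described above is to score each chatbot answer against its reference answer by embedding similarity. A minimal sketch, assuming a multilingual sentence-transformers model (the model name and pass threshold are illustrative assumptions, not from the post):

```python
# Sketch: grade chatbot answers against reference answers via semantic similarity.
# pip install sentence-transformers; model choice and threshold are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # handles Spanish

def score_answer(chatbot_answer: str, reference_answer: str) -> float:
    emb = model.encode([chatbot_answer, reference_answer], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()  # ~1.0 means near-identical meaning

pairs = [
    ("La acción prescribe en cinco años.", "El plazo de prescripción es de cinco años."),
]
for answer, reference in pairs:
    s = score_answer(answer, reference)
    print(f"similarity={s:.2f} -> {'PASS' if s > 0.8 else 'REVIEW MANUALLY'}")
```

Similarity only approximates correctness, so a second pass with a stronger LLM as judge, or manual spot-checks of low-scoring answers, is still worth keeping in the loop.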
Nvidia Tesla P40 performs amazingly well for llama.cpp GGUF! | 83 | 2023-11-20T14:29:09 | https://www.reddit.com/gallery/17zpr2o | nero10578 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 17zpr2o | false | null | t3_17zpr2o | /r/LocalLLaMA/comments/17zpr2o/nvidia_tesla_p40_performs_amazingly_well_for/ | false | false | 83 | null | ||
Google quietly open sourced a 1.6 trillion parameter MOE model | 324 | 2023-11-20T13:04:16 | https://twitter.com/Euclaise_/status/1726242201322070053?t=My6n34eq1ESaSIJSSUfNTA&s=19 | MostlyRocketScience | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 17zo2ml | false | {'oembed': {'author_name': 'Jade', 'author_url': 'https://twitter.com/Euclaise_', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">switch-c-2048 is apache on HF <a href="https://t.co/aLqwW1NnS2">https://t.co/aLqwW1NnS2</a></p>— Jade (@Euclaise_) <a href="https://twitter.com/Euclaise_/status/1726242201322070053?ref_src=twsrc%5Etfw">November 19, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Euclaise_/status/1726242201322070053', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_17zo2ml | /r/LocalLLaMA/comments/17zo2ml/google_quietly_open_sourced_a_16_trillion/ | false | false | 324 | {'enabled': False, 'images': [{'id': 'CxYUApSuiZc4Wua3sdvq26xUegi5772H-iWf7x8heqY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/BsPAltvV9-V3ChDfupCQqy9XR8f4GavA5TEzqoHbmg8.jpg?width=108&crop=smart&auto=webp&s=f1c4ac383d1596a0e4c701731c21dc798ebcc4a8', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/BsPAltvV9-V3ChDfupCQqy9XR8f4GavA5TEzqoHbmg8.jpg?auto=webp&s=b1cd952ec3049744c4dc7b02474a6e84ed52dedd', 'width': 140}, 'variants': {}}]} | ||
How to run 70B on 24GB of VRAM? | 30 | I want to run a 70B LLM locally with more than 1 T/s. I have a 3090 with 24GB VRAM and 64GB RAM on the system.
What I managed so far:
* Found instructions to make 70B run on VRAM only with a 2.5 bpw quant that ran fast, but the perplexity was unbearable. The LLM was barely coherent.
* I somehow got 70B running with a variation of RAM/VRAM offloading, but it ran at 0.1 T/s.
I saw people claiming reasonable T/s speeds. Since I am a newbie, I can barely speak the domain language, and most instructions I found assume implicit knowledge I don't have.
I need **explicit** instructions on what 70B model to download exactly, which Model loader to use and how to set parameters that are salient in the context. | 2023-11-20T12:52:56 | https://www.reddit.com/r/LocalLLaMA/comments/17znv35/how_to_run_70b_on_24gb_vram/ | BlueMetaMind | self.LocalLLaMA | 2023-11-20T14:21:25 | 0 | {} | 17znv35 | false | null | t3_17znv35 | /r/LocalLLaMA/comments/17znv35/how_to_run_70b_on_24gb_vram/ | false | false | self | 30 | null |
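Editorial note: the usual recipe here is a 2-3 bit GGUF quant of the 70B with most layers offloaded to the 3090 and the remainder in system RAM. A minimal sketch via llama-cpp-python (the file name and layer count are illustrative and must be tuned to the 24 GB budget):

```python
# Sketch: partial GPU offload of a 70B GGUF with llama-cpp-python.
# pip install llama-cpp-python (built with CUDA/cuBLAS support).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b.Q3_K_M.gguf",  # illustrative; any 70B GGUF quant
    n_gpu_layers=45,  # of ~80 total; lower this if you hit out-of-memory
    n_ctx=4096,
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```

With roughly half the layers on the GPU and the rest in 64 GB of system RAM, low single-digit T/s is a realistic expectation rather than the 0.1 T/s of a badly tuned split.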
"The Palace Coup" the newest NLU, Q&A benchmarking pico-dataset. | 1 | [removed] | 2023-11-20T12:43:24 | https://www.reddit.com/r/LocalLLaMA/comments/17znoyk/the_palace_coup_the_newest_nlu_qa_benchmarking/ | laca_komputilulo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17znoyk | false | null | t3_17znoyk | /r/LocalLLaMA/comments/17znoyk/the_palace_coup_the_newest_nlu_qa_benchmarking/ | false | false | 1 | null | |
Quantization of the newly released Swedish Gpt-sw3? | 5 | Does anyone have any plans to quantizie any of the newly relased Gtp-sw3 models?
Or if there is a guide (very detailed) how to do it. Iam not very tech savy so what i have found/googled so far, is above my level. Which is quit low to be honest...
Is there anyone planing to quantizie these models or willing to do it?
The modelI Iam most interested in is the 20B instruct model. (Guff or AWQ).
https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct | 2023-11-20T12:21:21 | https://www.reddit.com/r/LocalLLaMA/comments/17znbb4/quantization_of_the_newly_released_swedish_gptsw3/ | AssociationNo8626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17znbb4 | false | null | t3_17znbb4 | /r/LocalLLaMA/comments/17znbb4/quantization_of_the_newly_released_swedish_gptsw3/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '3WUN2jSUD2bnq6boPrP8yhCOde7UydoSTsprWkBVY8U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nGjRyprxDNkCnPf_hGpuelyKbynQXRbDpPRX_JWNUgY.jpg?width=108&crop=smart&auto=webp&s=01a4ffaf9e214f1a2151513dfdf80ae0169edc77', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nGjRyprxDNkCnPf_hGpuelyKbynQXRbDpPRX_JWNUgY.jpg?width=216&crop=smart&auto=webp&s=14c17865d2cf58aab5cececd657a258e6f37507a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nGjRyprxDNkCnPf_hGpuelyKbynQXRbDpPRX_JWNUgY.jpg?width=320&crop=smart&auto=webp&s=6bd1436ec0f0a758b75f061be47ffb8248604d34', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nGjRyprxDNkCnPf_hGpuelyKbynQXRbDpPRX_JWNUgY.jpg?width=640&crop=smart&auto=webp&s=98e2f960661bbd45875ef68319d5294c2f7429f0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nGjRyprxDNkCnPf_hGpuelyKbynQXRbDpPRX_JWNUgY.jpg?width=960&crop=smart&auto=webp&s=b9336ecf6a20f6076578f7529ac0018ece63d7e2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nGjRyprxDNkCnPf_hGpuelyKbynQXRbDpPRX_JWNUgY.jpg?width=1080&crop=smart&auto=webp&s=713c4fe68fe8bcf90d31a3703a980be445fd9ead', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nGjRyprxDNkCnPf_hGpuelyKbynQXRbDpPRX_JWNUgY.jpg?auto=webp&s=a0cc89922c0f1f2110f700eb5cd94fa038638499', 'width': 1200}, 'variants': {}}]} |
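Editorial note: for the AWQ route mentioned above, quantization takes only a few lines of Python with AutoAWQ, with the big caveat that GPT-SW3 uses a GPT-2-style architecture and whether AutoAWQ supports it needs checking first. A sketch under that assumption:

```python
# Sketch: 4-bit AWQ quantization with AutoAWQ (pip install autoawq).
# Assumes the GPT-SW3 architecture is supported by AutoAWQ -- verify first.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "AI-Sweden-Models/gpt-sw3-20b-instruct"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("gpt-sw3-20b-instruct-awq")
```

The GGUF route instead goes through llama.cpp's conversion scripts, and likewise depends on the architecture being supported there.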
Anyone else struggling to get Coqui TTS to work in anything other than an American accent? | 7 | Sorry if this is off-topic but I think it’s adjacent since many LLaMA users are also using it.
I’m trying to use the Coqui TTS library with a view to plugging it into LLaMA.cpp, but for some reason, no matter which model I try, my attempts at using British English source speech just end up as an American-sounding voice with various distortions. I’m running the Python module as instructed in the docs under macOS on the M1 platform; I’ve tried various models, all with similar results.
Nothing at all against American accents but they’re not what I require at the moment, so any help in making Coqui sound a little more RP would be much appreciated! | 2023-11-20T11:57:05 | https://www.reddit.com/r/LocalLLaMA/comments/17zmwsn/anyone_else_struggling_to_get_coqui_tts_to_work/ | colei_canis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zmwsn | false | null | t3_17zmwsn | /r/LocalLLaMA/comments/17zmwsn/anyone_else_struggling_to_get_coqui_tts_to_work/ | false | false | self | 7 | null |
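Editorial note: with Coqui's XTTS v2 the accent largely follows the reference clip, so cloning from a clean RP sample is one way to sidestep the stock-speaker problem. A minimal sketch (the reference wav path is a placeholder):

```python
# Sketch: clone a British-accented reference clip with Coqui XTTS v2.
# A clean 6-30 second RP sample as speaker_wav tends to carry the accent through.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="The weather in London is rather dreadful today.",
    speaker_wav="samples/rp_speaker.wav",  # placeholder: your British English clip
    language="en",
    file_path="output.wav",
)
```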
Follow Up: ChatGPT slips up and confirms GPT 3.5 Turbo is likely 20B | 1 | Hello everyone,
I was using GPT 3.5 to get some insights into tests I was running with LLaMA 2 70B, and it accidentally slipped up and confirmed that GPT 3.5 Turbo has fewer parameters than 70B.
**I have linked the screenshot in the post.**
Although it could be hallucinating, **I'm starting to seriously consider GPT 3.5 Turbo to be 20B**. Stunning optimization work by OpenAI if it turns out to be the case.
Thoughts? | 2023-11-20T11:12:15 | https://www.reddit.com/r/LocalLLaMA/comments/17zm7cl/follow_up_chatgpt_slips_up_and_confirms_gpt_35/ | First-Quality-7222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zm7cl | false | null | t3_17zm7cl | /r/LocalLLaMA/comments/17zm7cl/follow_up_chatgpt_slips_up_and_confirms_gpt_35/ | false | false | self | 1 | null |
Should I be able to run lzlv_Q4_K_M.gguf with 128GB CPU RAM? It keeps erroring out. | 1 | I'm still new to this and I thought that 128GB of CPU RAM would be enough to run a 70B model? I also have an RTX 4090. However, every time I try to run lzlv_Q4_K_M.gguf in Text Generation UI, I get "connection errored out". Could there be a setting that I should tinker with? | 2023-11-20T10:44:57 | https://www.reddit.com/r/LocalLLaMA/comments/17zlsx8/should_i_be_able_to_run_lzlv_q4_k_mgguf_with/ | Brad12d3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zlsx8 | false | null | t3_17zlsx8 | /r/LocalLLaMA/comments/17zlsx8/should_i_be_able_to_run_lzlv_q4_k_mgguf_with/ | false | false | default | 1 | null |
Best model 13B-33B at following instructions | 1 | What's your experience with the best model for following instructions as closely as possible, for example extracting entities from text and returning only JSON structures with no extra text? Mistral is doing a decent job for me, but is there anything better/smarter? | 2023-11-20T10:43:32 | https://www.reddit.com/r/LocalLLaMA/comments/17zls82/best_model_13b_33b_at_following_instructions/ | simplir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zls82 | false | null | t3_17zls82 | /r/LocalLLaMA/comments/17zls82/best_model_13b_33b_at_following_instructions/ | false | false | self | 1 | null |
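Editorial aside: independent of model choice, llama.cpp can hard-constrain generation to valid JSON with a GBNF grammar, which removes the "extra text" problem entirely. A sketch via llama-cpp-python (the model file is illustrative; `json.gbnf` ships in the llama.cpp repo's `grammars/` directory):

```python
# Sketch: force syntactically valid JSON output using a GBNF grammar.
from llama_cpp import Llama, LlamaGrammar

llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)
grammar = LlamaGrammar.from_file("grammars/json.gbnf")  # from the llama.cpp repo

out = llm(
    "Extract the people and places from: 'Alice met Bob in Paris.' Respond with JSON only.\n",
    grammar=grammar,
    max_tokens=256,
)
print(out["choices"][0]["text"])  # parseable JSON; keys still depend on the model
```

The grammar guarantees syntax, not schema, so field names still need prompt-side guidance.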
Real-time alerts with LLMs and local KNN index | 1 | 2023-11-20T09:22:13 | https://pathway.com/developers/showcases/llm-alert-pathway/ | bumurzokov | pathway.com | 1970-01-01T00:00:00 | 0 | {} | 17zkmmv | false | null | t3_17zkmmv | /r/LocalLLaMA/comments/17zkmmv/realtime_alerts_with_llms_and_local_knn_index/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'UIiH3TzxPBFekdQEoqNUB3DAUECkNeH8pyrVzlpGcoY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/DrfWxtSZ2iTJxPLPr0v3Ms_sa5nfUENVadIJHoxqr6Y.jpg?width=108&crop=smart&auto=webp&s=ac1817573346e5b19274ed78e5e5dfcfe8c9794f', 'width': 108}, {'height': 163, 'url': 'https://external-preview.redd.it/DrfWxtSZ2iTJxPLPr0v3Ms_sa5nfUENVadIJHoxqr6Y.jpg?width=216&crop=smart&auto=webp&s=07ffeab24b4d4bd15d99bb81dd3e8f72561881f5', 'width': 216}, {'height': 242, 'url': 'https://external-preview.redd.it/DrfWxtSZ2iTJxPLPr0v3Ms_sa5nfUENVadIJHoxqr6Y.jpg?width=320&crop=smart&auto=webp&s=fe7efbcd67d7e2615448eecae2cd4da1f003f627', 'width': 320}, {'height': 484, 'url': 'https://external-preview.redd.it/DrfWxtSZ2iTJxPLPr0v3Ms_sa5nfUENVadIJHoxqr6Y.jpg?width=640&crop=smart&auto=webp&s=886a54fa5caee12febbd2064362c657b257e28ef', 'width': 640}], 'source': {'height': 581, 'url': 'https://external-preview.redd.it/DrfWxtSZ2iTJxPLPr0v3Ms_sa5nfUENVadIJHoxqr6Y.jpg?auto=webp&s=bb142a79431022a27a6ccabda60c67e9d9996dae', 'width': 768}, 'variants': {}}]} | ||
Why is every multi-modal LLM giving wrong output? I just want them to read screenshots correctly. Any idea which multi-modal LLM would work best for this use case? | 1 | 2023-11-20T09:12:27 | https://www.reddit.com/gallery/17zkhny | Anu_Rag9704 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 17zkhny | false | null | t3_17zkhny | /r/LocalLLaMA/comments/17zkhny/why_every_multumodal_llm_is_giving_wrong_output_i/ | false | false | 1 | null |
My new favorite LLM education website | 1 | Sebastian Raschka's [website](https://magazine.sebastianraschka.com/) is filled with amazing information for those who want to start learning about LLMs.
I think I'll spend the next few days just reinforcing my knowledge from his experience. | 2023-11-20T08:52:21 | https://www.reddit.com/r/LocalLLaMA/comments/17zk7qi/my_new_favorite_llm_education_website/ | Mandus_Therion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zk7qi | false | null | t3_17zk7qi | /r/LocalLLaMA/comments/17zk7qi/my_new_favorite_llm_education_website/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'D74DCOVzw9OlLoLP4mF3wPkU1fKoDzdReW6a26_tTiI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Ew5eerxixuv28GwwIWqj7tU1UgCGRkLd_QbN0cQ7CKY.jpg?width=108&crop=smart&auto=webp&s=dc35b3c6ecd57a5cbbb9fef594400ebbb6446883', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/Ew5eerxixuv28GwwIWqj7tU1UgCGRkLd_QbN0cQ7CKY.jpg?width=216&crop=smart&auto=webp&s=aa02ee9c4bddeb1967cee38bc7a51df9f6ede7ae', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/Ew5eerxixuv28GwwIWqj7tU1UgCGRkLd_QbN0cQ7CKY.jpg?width=320&crop=smart&auto=webp&s=52341cb28bf735e7deabcd9ea931fd9e14217457', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/Ew5eerxixuv28GwwIWqj7tU1UgCGRkLd_QbN0cQ7CKY.jpg?width=640&crop=smart&auto=webp&s=9db6e3146374dd0a4da635a78b8d41c6a548672a', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/Ew5eerxixuv28GwwIWqj7tU1UgCGRkLd_QbN0cQ7CKY.jpg?auto=webp&s=3832ecd18b136eef081ed26fc17001e16d7ce32a', 'width': 920}, 'variants': {}}]}
Microsoft hires former OpenAI CEO Sam Altman | 297 | 2023-11-20T08:32:00 | https://www.theverge.com/2023/11/20/23968829/microsoft-hires-sam-altman-greg-brockman-employees-openai | Commander_ | theverge.com | 1970-01-01T00:00:00 | 0 | {} | 17zjxux | false | null | t3_17zjxux | /r/LocalLLaMA/comments/17zjxux/microsoft_hires_former_openai_ceo_sam_altman/ | false | false | 297 | {'enabled': False, 'images': [{'id': '1RsiS0MzL4puGdy5eWV8hRnHyrhbJ163b9VCxbnygbw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/r8_EH5z78LwH1CEwmdePhyNIJsuLh5FGqzSIBqJvPOc.jpg?width=108&crop=smart&auto=webp&s=e9b35955195996c2ebda8eaf826278c73111bf0c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/r8_EH5z78LwH1CEwmdePhyNIJsuLh5FGqzSIBqJvPOc.jpg?width=216&crop=smart&auto=webp&s=39ad448be0c3a583995cdec2ad020062b7cefbf5', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/r8_EH5z78LwH1CEwmdePhyNIJsuLh5FGqzSIBqJvPOc.jpg?width=320&crop=smart&auto=webp&s=972ed043fbbf7c979cfee6ec95d983ddf6c68741', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/r8_EH5z78LwH1CEwmdePhyNIJsuLh5FGqzSIBqJvPOc.jpg?width=640&crop=smart&auto=webp&s=10ac3008b42d930987e050b3efa85bc4ed725a79', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/r8_EH5z78LwH1CEwmdePhyNIJsuLh5FGqzSIBqJvPOc.jpg?width=960&crop=smart&auto=webp&s=459482fead8974102a2edec8ba2ccaf914929e00', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/r8_EH5z78LwH1CEwmdePhyNIJsuLh5FGqzSIBqJvPOc.jpg?width=1080&crop=smart&auto=webp&s=0faf8b1361d6acb91539a1a313a083a5f2628996', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/r8_EH5z78LwH1CEwmdePhyNIJsuLh5FGqzSIBqJvPOc.jpg?auto=webp&s=16a449fdfe3b684c1813ca4f2814354ba4890587', 'width': 1200}, 'variants': {}}]} | ||
Train Llama-2 base model with raw texts | 4 | Hello everyone, I have thousands of different txt files with raw text in them (their content is unimportant for my question). How can I train the Llama-2-7b-hf model?
Many sources require a specific dataset format, but my data is raw, and I will be training the base model. I probably need to use unsupervised learning (a causal language-modeling objective), but I don't know how to train Llama-2-7b-hf with the raw data I have. | 2023-11-20T07:33:39 | https://www.reddit.com/r/LocalLLaMA/comments/17zj53t/train_llama2_base_model_with_raw_texts/ | Typical_Time_208 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zj53t | false | null | t3_17zj53t | /r/LocalLLaMA/comments/17zj53t/train_llama2_base_model_with_raw_texts/ | false | false | self | 4 | null |
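Editorial note: "training on raw text" here means continued pretraining with a causal-LM objective. A rough sketch with Hugging Face transformers (hyperparameters are illustrative, the meta-llama repo is gated, and a full 7B finetune realistically needs QLoRA/PEFT or multiple large GPUs):

```python
# Sketch: continued pretraining (causal LM) of Llama-2-7b-hf on raw .txt files.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "meta-llama/Llama-2-7b-hf"  # gated repo; requires accepted license
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token  # Llama's tokenizer ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

ds = load_dataset("text", data_files={"train": "texts/*.txt"})["train"]

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=2048)

ds = ds.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tok, mlm=False)  # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-continued", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1),
    train_dataset=ds,
    data_collator=collator,
)
trainer.train()
```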
An idea for discerning quality of merged/finetuned models (at a glance) using multimodality | 1 | Preface: If you've seen the quality SD people have managed to squeeze out of sub-1B models, you know that iterative finetune/merge techniques work great.
The problem, however, is capability testing.
With SD, you can literally see the difference between "vanilla 1.5" (not talking about XL, just apples to apples comparison) and any of the large merges - literally night and day difference.
This does not work for large language models: it is a very time-consuming process, and typical tests are easily gamed.
So, suggestion:
By using the visual component of multimodal LLMs trained on text and images simultaneously, it should be possible to see the effects of your finetunes and, most importantly, merges "at a glance" by generating a hundred or so pictures each and skimming through them: we, as humans, have much greater acumen in visual perception than in reading (consider the relative popularity of books and TikTok...).
Of course, the correlation can be imperfect or misleading, but I think it might be an interesting method. | 2023-11-20T07:11:02 | https://www.reddit.com/r/LocalLLaMA/comments/17zitv0/an_idea_for_discernind_quality_of_mergedfinetuned/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zitv0 | false | null | t3_17zitv0 | /r/LocalLLaMA/comments/17zitv0/an_idea_for_discernind_quality_of_mergedfinetuned/ | false | false | self | 1 | null |
What base model do you use for fine-tuning? | 1 | Curious to know what base models you guys prefer and what you like about them
[View Poll](https://www.reddit.com/poll/17zitq7) | 2023-11-20T07:10:44 | https://www.reddit.com/r/LocalLLaMA/comments/17zitq7/what_base_model_do_you_use_for_finetuning/ | Sea_Significance9631 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zitq7 | false | null | t3_17zitq7 | /r/LocalLLaMA/comments/17zitq7/what_base_model_do_you_use_for_finetuning/ | false | false | self | 1 | null |
Is there any work being done on LLMs trained on a subset of knowledge? | 6 | I see there is progress being made on smaller LLMs that have fewer parameters, but as I understand it, they are just trying to optimize how much information can fit in a given parameter size. Is there work being done on LLMs that are trained on less information? For example, say I want to chat with a PDF. I don't care for my LLM to speak French, be able to write Python, or know that Benjamin Franklin wrote a paper on flatulence (all things RWKV v5 World 1.5B knows). | 2023-11-20T05:42:00 | https://www.reddit.com/r/LocalLLaMA/comments/17zhi6y/is_there_any_work_being_done_on_llms_trained_on_a/ | fy20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zhi6y | false | null | t3_17zhi6y | /r/LocalLLaMA/comments/17zhi6y/is_there_any_work_being_done_on_llms_trained_on_a/ | false | false | self | 6 | null |
Specific bias and writing style? | 1 | [removed] | 2023-11-20T05:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/17zhe5v/specific_bias_and_writing_style/ | Artistic_Ad_1810 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zhe5v | false | null | t3_17zhe5v | /r/LocalLLaMA/comments/17zhe5v/specific_bias_and_writing_style/ | false | false | self | 1 | null |
Today I released IS-LM 3B. | 25 | Just if you are curious/confused of the name, the IS-LM model is basically "a two-dimensional macroeconomic tool that shows the relationship between interest rates and assets market"([Source](https://en.wikipedia.org/wiki/IS%E2%80%93LM_model).). I thought it's a creative name because IS-LM is both about economics and have "LM" in its name.
Anyways, I released [IS-LM 3B](https://huggingface.co/acrastt/IS-LM-3B). This model is fine-tuned on economics.
## Details:
[IS-LM 3B](https://huggingface.co/acrastt/IS-LM-3B) is [StableLM 3B 4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t)(Licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).) instruction tuned on [DataForge Economics](https://huggingface.co/datasets/teknium/dataforge-economics) for 3 epochs with QLoRA([2305.14314](https://arxiv.org/abs/2305.14314)).
Prompt template:
USER: {prompt}
ASSISTANT: | 2023-11-20T03:49:48 | https://www.reddit.com/r/LocalLLaMA/comments/17zfl8f/today_i_released_islm_3b/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zfl8f | false | null | t3_17zfl8f | /r/LocalLLaMA/comments/17zfl8f/today_i_released_islm_3b/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'a4TJJ-D9q7GSieyxmDmtPWRqQBu21IYhmTx2ue1tJS4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/eSX7Py9vmQVasMHInFIC8Ex4PhbKzzdFR_5MUiVQCIY.jpg?width=108&crop=smart&auto=webp&s=1e64db846ad4664f638b215ee136153e075b4018', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/eSX7Py9vmQVasMHInFIC8Ex4PhbKzzdFR_5MUiVQCIY.jpg?width=216&crop=smart&auto=webp&s=548a93f2c3439485aed9998a89fa73b25b9e9c18', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/eSX7Py9vmQVasMHInFIC8Ex4PhbKzzdFR_5MUiVQCIY.jpg?width=320&crop=smart&auto=webp&s=87f5868cdf92922916fa54bb72a21db4ceff59c7', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/eSX7Py9vmQVasMHInFIC8Ex4PhbKzzdFR_5MUiVQCIY.jpg?width=640&crop=smart&auto=webp&s=ca293a4f783eb87c0204674f5a45671f4d84d489', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/eSX7Py9vmQVasMHInFIC8Ex4PhbKzzdFR_5MUiVQCIY.jpg?width=960&crop=smart&auto=webp&s=d2bb41a82acafe2a87d4fa4ef387b51a00008bd0', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/eSX7Py9vmQVasMHInFIC8Ex4PhbKzzdFR_5MUiVQCIY.jpg?width=1080&crop=smart&auto=webp&s=bc82e601f86ea6027c030c46b4494e636f9b6da9', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/eSX7Py9vmQVasMHInFIC8Ex4PhbKzzdFR_5MUiVQCIY.jpg?auto=webp&s=9c333282964d2c47006cb4cdf703e97aff090cd7', 'width': 1200}, 'variants': {}}]} |
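Editorial note: a minimal sketch of querying the released model with transformers and the stated prompt template (`trust_remote_code` is an assumption based on the StableLM 3B 4E1T base; check the model card):

```python
# Sketch: load IS-LM 3B and apply its USER/ASSISTANT prompt template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "acrastt/IS-LM-3B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "USER: Explain the IS curve in one paragraph.\nASSISTANT:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=True))
```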
What are your thoughts on the future of LLMs running on mobile? | 27 | Following the release of Dimensity 9300 and S8G3 phones, I am expecting growing popularity of LLMs running on mobile phones, as quantized 3B or 7B models can already run on high-end phones released in the last five years. But despite it being possible, there are a few concerns, including power consumption and storage size. I've seen posts about successfully running LLMs on mobile devices, but seldom see people discussing future trends. What are your thoughts? | 2023-11-20T03:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/17zfha2/what_are_your_thoughts_on_the_future_of_llms/ | Tree-Sheep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zfha2 | false | null | t3_17zfha2 | /r/LocalLLaMA/comments/17zfha2/what_are_your_thoughts_on_the_future_of_llms/ | false | false | self | 27 | null |
Yi-23B-Llama: Distil version of Yi-34B-Llama | 1 | 2023-11-20T02:36:52 | https://huggingface.co/ByteWave/Yi-23B-Llama | Covid-Plannedemic_ | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17ze6qq | false | null | t3_17ze6qq | /r/LocalLLaMA/comments/17ze6qq/yi23bllama_distil_version_of_yi34bllama/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'UpV9IC48wPqqk88HKPIbmbYi8oyWW9yR6DU5uCFIRj4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qfw00siQsOW_VnkJoO4RcUNsKozHmSQAVNb6fxkQIqo.jpg?width=108&crop=smart&auto=webp&s=f5ed3f9de42fb5c5cb32d4af40de13d7dc3c0c16', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qfw00siQsOW_VnkJoO4RcUNsKozHmSQAVNb6fxkQIqo.jpg?width=216&crop=smart&auto=webp&s=e3551dcffb14a9d3ebc6bdb34dde023ebbf27407', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qfw00siQsOW_VnkJoO4RcUNsKozHmSQAVNb6fxkQIqo.jpg?width=320&crop=smart&auto=webp&s=5eaf06995545a50a7d933b7dad7fc255134ec69c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qfw00siQsOW_VnkJoO4RcUNsKozHmSQAVNb6fxkQIqo.jpg?width=640&crop=smart&auto=webp&s=84b296f9a6999ce7db884cc33fd6c50844130075', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qfw00siQsOW_VnkJoO4RcUNsKozHmSQAVNb6fxkQIqo.jpg?width=960&crop=smart&auto=webp&s=4b31df3abe2503973b3f596296469fc6e80bebf5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qfw00siQsOW_VnkJoO4RcUNsKozHmSQAVNb6fxkQIqo.jpg?width=1080&crop=smart&auto=webp&s=762360754551903e90bdf7e7f3e588670d4b692d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qfw00siQsOW_VnkJoO4RcUNsKozHmSQAVNb6fxkQIqo.jpg?auto=webp&s=1a6bb78da3d1af329a8fa5eb2244d14dd49b5b77', 'width': 1200}, 'variants': {}}]} | ||
Those who look up, can’t. Those who can, look down | 1 | 2023-11-20T02:00:59 | dimknaf | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17zdgsb | false | null | t3_17zdgsb | /r/LocalLLaMA/comments/17zdgsb/those_who_look_up_cant_those_who_can_look_down/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '8zfn2FStzMlME-IRLsSZ9iz_CfIRHU5eYJFZQiRjtl0', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/fzlb1ar9te1c1.jpg?width=108&crop=smart&auto=webp&s=e62726ed1994feee9ee1383fbddf5e2d0b66e1d9', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/fzlb1ar9te1c1.jpg?width=216&crop=smart&auto=webp&s=d3137e272d5f8cade2d0cdbf8622f423c3a45357', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/fzlb1ar9te1c1.jpg?width=320&crop=smart&auto=webp&s=3ad85ec5d16bbdbd1f329fb9259e663a56e68d99', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/fzlb1ar9te1c1.jpg?width=640&crop=smart&auto=webp&s=afe001f6b9e83467107f6d1a49d8b1eb9bd970ec', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/fzlb1ar9te1c1.jpg?width=960&crop=smart&auto=webp&s=97c7378c5d9a95f9c8bd438608da8ccd1e66d4f0', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/fzlb1ar9te1c1.jpg?width=1080&crop=smart&auto=webp&s=5037adae22a85c537ba03a3500c3b89c0b6b16d3', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/fzlb1ar9te1c1.jpg?auto=webp&s=74a5c3c34e42507b8985e36798a7fc80c88256ca', 'width': 1280}, 'variants': {}}]} | ||
Trying to understand costs of training and hosting LLaMA 2 | 2 | Can anyone please share the cost of hosting and running a finetuned custom LLaMA 2 that serves 10 concurrent users? Performance can be slightly lower than ChatGPT.
I am researching this as part of a small tech company with a large half-million-line code base. I need to understand the costs involved in training LLaMA 2 on our code.
I do not know much about this field, so please feel free to make assumptions. Possible LLMs:
LLaMA 2 7B
LLaMA 2 13B
LLaMA 2 70B
I'm also trying to find training costs, but this space is so new and changing rapidly. Still, I need to present something.
Help, please?
Thank you | 2023-11-20T00:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/17zbjx1/trying_to_understand_costs_of_training_hosting/ | gyaani_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zbjx1 | false | null | t3_17zbjx1 | /r/LocalLLaMA/comments/17zbjx1/trying_to_understand_costs_of_training_hosting/ | false | false | self | 2 | null |
High System RAM usage when loading a 4bpw 13B model on a 3080 10GB with either llama.cpp or exllamav2. | 2 | I've read multiple posts suggesting that a sufficiently quantised 13B model should fit fine onto a card with 10GB of VRAM like my 3080, but my experience using oobabooga on Windows is that this does not happen. First I tried a 4-bit exl2 model.
In this case, VRAM usage increases by 7.2GB (from 1.9GB) and Shared GPU memory usage increases slightly. Generating is unusably slow.
Then I tried a GGUF model quantised to 3 bits (Q3_K_S) and llama.cpp. My first observation is that, when loading, even if I don't select to offload any layers to the GPU, shared GPU memory usage jumps up by about 3GB. System RAM increases by about the amount the terminal output from llama.cpp tells me to expect. Generating speed is reasonable for CPU generation - about 0.5 tokens/s. (My CPU is pretty old compared to my GPU, too).
So I tried offloading some layers to the GPU (my intention being to offload most or even all to the GPU) but I saw what I found to be very weird behaviour. No matter how many layers I say to offload, system RAM usage jumps massively, as does shared GPU memory usage. At 40 layers on the GPU, here's what llama.cpp said it would need:
```
llm_load_tensors: ggml ctx size = 0.13 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 195.46 MB
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloaded 40/43 layers to GPU
llm_load_tensors: VRAM used: 5200.78 MB
..................................................................................................
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 3200.00 MB
llama_build_graph: non-view tensors processed: 924/924
llama_new_context_with_model: compute buffer total size = 359.57 MB
llama_new_context_with_model: VRAM scratch buffer: 358.00 MB
llama_new_context_with_model: total VRAM used: 5558.79 MB (model: 5200.78 MB, context: 358.00 MB)
```
But in contrast, main RAM usage jumped by 7.6GB (more than the entire model should need at this quantisation), VRAM increased by 5.4GB (that sounds appropriate) and *still* shared GPU memory jumped by 3.5GB. I presume that is subsumed by the main RAM jump, but why does it need to take that at all, and even if it does, there's an unexplained 4.1GB. I only have 16GB in this machine so this tends to result in a pretty laggy desktop. It does generate at an OK speed - 5-10 tokens/s.
The pattern is similar with offloading the last few layers - huge system memory and shared GPU memory usage. Inference is 9-15 tokens/s with a lot of swapping.
Is this unexpected? Is it just a quirk of llama.cpp? In contrast I loaded a 5 bit 7B exl2 model without issue and it generates at 30-50 tokens/s which is much more usable. Even if there's no workaround for using the larger models it would be great to understand what's causing what looks to me like very weird behaviour - if the model should (almost) fit on the GPU it should use (almost) no system RAM, surely. | 2023-11-20T00:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/17zbhyv/high_system_ram_usage_when_loading_a_4bpw_13b/ | F0sh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zbhyv | false | null | t3_17zbhyv | /r/LocalLLaMA/comments/17zbhyv/high_system_ram_usage_when_loading_a_4bpw_13b/ | false | false | self | 2 | null |
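Editorial note: one plausible, unconfirmed explanation for the RAM numbers is llama.cpp's default use of mmap: the entire GGUF file is mapped into the process address space, so task managers can attribute file-cache pages to the process even for layers that actually live in VRAM. That is cheap to test by disabling mmap; a sketch via llama-cpp-python:

```python
# Sketch: load with mmap disabled so resident RAM reflects only CPU-side layers.
from llama_cpp import Llama

llm = Llama(
    model_path="model-13b.Q3_K_S.gguf",  # illustrative file name
    n_gpu_layers=40,
    use_mmap=False,  # copy weights instead of memory-mapping the file
)
```

If the mysterious RAM usage drops (at the cost of slower loading), the original numbers were mostly page-cache accounting rather than true duplication.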
Any open-source LLM you want scaled to 200 GPUs, I will create a tutorial for | 28 | I'm trying to perfect a dev tool for Python developers to easily scale their code to thousands of cloud resources using only one line of code.
I want to get some project ideas so I can build useful tutorials for running inference and fine-tuning open-source LLMs.
A few weeks back I created a tutorial teaching people to massively parallelize inference with [Mistral-7B](https://docs.burla.dev/Example:%20Massively%20Parallel%20Inference%20with%20Mistral-7B). I was able to deliver a ton of value to a select few people and it helped me better understand the flaws with my tool.
Anyways I want to open it up to the community before I decide what tutorials I should prioritize. Please drop any project/tutorial ideas and if you think someone's idea is good please upvote them (so I know you think it would be valuable). | 2023-11-20T00:18:43 | https://www.reddit.com/r/LocalLLaMA/comments/17zbd0e/any_open_source_llm_you_want_scaled_to_200_gpus_i/ | Ok_Post_149 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17zbd0e | false | null | t3_17zbd0e | /r/LocalLLaMA/comments/17zbd0e/any_open_source_llm_you_want_scaled_to_200_gpus_i/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'VkvJ2zk3G88CDDxNk_lS-uilGBXZi0Gzc-1GYg2ZTqU', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=108&crop=smart&auto=webp&s=34d9a27a84ddd2ef52daeb04dad944a9b72dca3b', 'width': 108}, {'height': 141, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=216&crop=smart&auto=webp&s=84fa92fc270320d6a53a61406a442ccbbc9fcd79', 'width': 216}, {'height': 209, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=320&crop=smart&auto=webp&s=e3f6a652b6017832e3edf7e85f0804c80b58f14b', 'width': 320}, {'height': 419, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=640&crop=smart&auto=webp&s=45c5ac3d4dadd861aeba8fe7e459c2ae3f46d509', 'width': 640}, {'height': 629, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=960&crop=smart&auto=webp&s=b78f5d49ecc343797c6bf982ee0724cde0b44ecb', 'width': 960}, {'height': 708, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?width=1080&crop=smart&auto=webp&s=e55d60404d067c73ab64bb298cb7c64686eb99c4', 'width': 1080}], 'source': {'height': 1121, 'url': 'https://external-preview.redd.it/cv6pasysE22JcigeAlexN2VJgRlciwVrumw49Szci-s.jpg?auto=webp&s=9d927f17e42af5eb32f16b5f24cbf45226ae79e9', 'width': 1709}, 'variants': {}}]} |
Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation) | 43 | 2023-11-19T22:35:00 | https://magazine.sebastianraschka.com/p/practical-tips-for-finetuning-llms | ttkciar | magazine.sebastianraschka.com | 1970-01-01T00:00:00 | 0 | {} | 17z91wk | false | null | t3_17z91wk | /r/LocalLLaMA/comments/17z91wk/practical_tips_for_finetuning_llms_using_lora/ | false | false | 43 | {'enabled': False, 'images': [{'id': 'NjZ5lCARg7kUNmYy5-_7gLauXw88nMa69mmSOpWKQuA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-5CkRGz1WIB8RJKqE9rtZ-skVWWgLM9o_fk1LOgSzp4.jpg?width=108&crop=smart&auto=webp&s=6f2704a7901b0cba8547a2116afa6b0ce0bbeede', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-5CkRGz1WIB8RJKqE9rtZ-skVWWgLM9o_fk1LOgSzp4.jpg?width=216&crop=smart&auto=webp&s=4b2b2eb598111a983c076cac5a4c30129f17b302', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-5CkRGz1WIB8RJKqE9rtZ-skVWWgLM9o_fk1LOgSzp4.jpg?width=320&crop=smart&auto=webp&s=b4958969cbf4a73af0b51e118ffb7427af264645', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-5CkRGz1WIB8RJKqE9rtZ-skVWWgLM9o_fk1LOgSzp4.jpg?width=640&crop=smart&auto=webp&s=2dae2f6ecd99c5e3f5c38afb748fcca1d0d6b579', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-5CkRGz1WIB8RJKqE9rtZ-skVWWgLM9o_fk1LOgSzp4.jpg?width=960&crop=smart&auto=webp&s=1402e13ab88adc6f2abcd67ed95f6b6738cf393e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-5CkRGz1WIB8RJKqE9rtZ-skVWWgLM9o_fk1LOgSzp4.jpg?width=1080&crop=smart&auto=webp&s=4d4e79058921e21133610d2b4296e1903f8de73e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-5CkRGz1WIB8RJKqE9rtZ-skVWWgLM9o_fk1LOgSzp4.jpg?auto=webp&s=6b2f21384778243cf043eca7c762355cf7b341b7', 'width': 1200}, 'variants': {}}]} | ||
Have an A5000 (24GB), want to start learning how to fine-tune LLMs... where should I start? Should I even start? | 13 | I want to do this all locally on my A5000 if possible and not have to rent a GPU with a larger amount of RAM.
At first glance, it seems that fine-tuning even Mistral 7B would not be doable on my card. Is that true?
Is there any model fine-tuning I could try with a 24GB card?
Do you guys know of any tutorials that meet the above criteria? Everything out there seems to lead me to either Google Colab Pro or other paid options.
Thanks a lot! | 2023-11-19T20:57:13 | https://www.reddit.com/r/LocalLLaMA/comments/17z6qof/have_an_a5000_24gb_want_to_start_learning_how_to/ | Rollingsound514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z6qof | false | null | t3_17z6qof | /r/LocalLLaMA/comments/17z6qof/have_an_a5000_24gb_want_to_start_learning_how_to/ | false | false | self | 13 | null |
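Editorial note: 4-bit QLoRA comfortably fits a 7B finetune in 24 GB, so the A5000 is enough to start. A minimal sketch with transformers + peft + trl (the dataset and hyperparameters are placeholder choices):

```python
# Sketch: QLoRA finetune of Mistral-7B in 4-bit on a single 24 GB GPU.
# pip install transformers peft trl bitsandbytes datasets accelerate
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTTrainer

model_id = "mistralai/Mistral-7B-v0.1"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token  # Mistral's tokenizer ships without a pad token

peft_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
data = load_dataset("timdettmers/openassistant-guanaco", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tok,
    train_dataset=data,
    peft_config=peft_cfg,
    dataset_text_field="text",
    max_seq_length=1024,
)
trainer.train()
```

Only the low-rank adapters are trained; the frozen 4-bit base keeps the memory footprint well under the card's 24 GB.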
Local LLMs Unable to Sort Lists | 3 | I recently noticed that local LLMs are unable to sort even simple lists. They often lose entries, and what's worse, after completing the task, they insist it was done correctly or try to correct it endlessly. Commercial models (GPT-3.5, GPT-4, Claude2) do not have this problem.
Example list:
```
Sort the items in ascending order:

Item A1 - 56
Item B2 - 32
Item C3 - 78
Item D4 - 14
Item E5 - 89
Item F6 - 45
Item G7 - 63
Item H8 - 27
Item I9 - 94
Item J10 - 11
Item K11 - 72
Item L12 - 38
Item M13 - 50
Item N14 - 19
Item O15 - 81
```
Until now, I was sure that current LLMs struggle with larger numbers and mathematics, but I thought sorting would be a relatively simple task.
Tested on: Goliath 120B, LLaMA2 70B, WizardCoder 15B, Mistral 7B.
What are your thoughts? Do you think we will be able to fine-tune a model to perform tasks like sorting, or add such capabilities by implementing a Mixture of Experts (MoE)? | 2023-11-19T20:56:29 | https://www.reddit.com/r/LocalLLaMA/comments/17z6q5g/local_llms_unable_to_sort_lists/ | External-Salary-4095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z6q5g | false | null | t3_17z6q5g | /r/LocalLLaMA/comments/17z6q5g/local_llms_unable_to_sort_lists/ | false | false | self | 3 | null |
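Editorial aside: the usual engineering answer is to keep sorting out of the model entirely: have the LLM extract structured items (or skip it altogether) and let deterministic code do the ordering. A sketch:

```python
# Sketch: parse the "Item X - N" lines and sort deterministically in code.
import re

raw = """Item A1 - 56
Item B2 - 32
Item D4 - 14"""  # truncated stand-in for the list above

items = [(m.group(1), int(m.group(2)))
         for m in re.finditer(r"^(Item \S+) - (\d+)$", raw, re.M)]
for name, value in sorted(items, key=lambda item: item[1]):
    print(f"{name} - {value}")  # Item D4 - 14, Item B2 - 32, Item A1 - 56
```

Whether fine-tuning or MoE routing can make a model reliable at this remains open; a common explanation for the dropped entries is that token-level autoregression has no working memory for the partial order.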
Non-coding 34b models- anyone had much success? | 7 | So I've had a lot of interest in the non-code 34b Finetunes, whether it's CodeLlama base or Yi base. From the old Samantha-34b and Synthia-34b to the new Dolphin-Yi and Nous-Capybara 34b models, I've been excited for each one because it fills a gap that needs filling.
My problem is that I can't seem to wrangle these fine-tunes into working right for me. I use Oobabooga (text-gen-ui), and always try to choose the correct instruction template either specified on the card or on TheBloke's page, but the models never seem to be happy with the result, and either get confused very easily or output odd gibberish from time to time.
For both Yi models, I am using the newest GGUFs that TheBloke put out... yesterday? Give or take. I've also tried the past 2-3 GGUF revisions he's posted for the same models as they came out.
The best luck I've had with the new Yi models was doing just plain chat mode with my AI Assistant's character prompt as the only thing being sent in, but even then, both Yi fine-tunes that I tried eventually broke down after a few thousand tokens of context.
For example, after a bit of chattering with the models I tried a very simple little test on both: *"Please write me two paragraphs. The content of the paragraphs is irrelevant, just please write two separate paragraphs about anything at all."* I did that because previous versions of these two struggled to make a new line, so I just wanted to see what would happen. This absolutely confused the models, and the results were wild.
Has anyone had luck getting them to work? They appear to have so much potential, [especially Nous Capybara which went toe to toe with GPT-4 in this benchmark](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/), but I'm failing miserably at unlocking its full potential lol. If you have gotten it to work, could you please specify what settings/instructions you're using? | 2023-11-19T20:42:30 | https://www.reddit.com/r/LocalLLaMA/comments/17z6f49/noncoding_34b_models_anyone_had_much_success/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z6f49 | false | null | t3_17z6f49 | /r/LocalLLaMA/comments/17z6f49/noncoding_34b_models_anyone_had_much_success/ | false | false | self | 7 | null |
What kind of mini PC can handle a local LLM? | 6 | Basically, that's my question. The caveat is that I would like to avoid a Mac Mini and I wonder if some of Minisforum's mini PCs can handle LLM. | 2023-11-19T20:40:55 | https://www.reddit.com/r/LocalLLaMA/comments/17z6duc/what_kind_of_mini_pc_can_handle_a_local_llm/ | Malin_Kite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z6duc | false | null | t3_17z6duc | /r/LocalLLaMA/comments/17z6duc/what_kind_of_mini_pc_can_handle_a_local_llm/ | false | false | self | 6 | null |
What are your reading lists? | 1 | Folks building GenAI projects at work or for fun, what are you reading daily? Mastodon, kbin, accord (in the antonym form) servers, etc. Anything with the KG + LLM angle will be highly appreciated.
Thanks! | 2023-11-19T20:19:30 | https://www.reddit.com/r/LocalLLaMA/comments/17z5wh7/what_are_your_reading_lists/ | laca_komputilulo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z5wh7 | false | null | t3_17z5wh7 | /r/LocalLLaMA/comments/17z5wh7/what_are_your_reading_lists/ | false | false | self | 1 | null |
StyleTTS 2 - Closes gap further on TTS quality + Voice generation from samples | 94 | # StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
### Yinghao Aaron Li, Cong Han, Vinay S. Raghavan, Gavin Mischler, Nima Mesgarani
>In this paper, we present StyleTTS 2, a text-to-speech (TTS) model that leverages style diffusion and adversarial training with large speech language models (SLMs) to achieve human-level TTS synthesis. StyleTTS 2 differs from its predecessor by modeling styles as a latent random variable through diffusion models to generate the most suitable style for the text without requiring reference speech, achieving efficient latent diffusion while benefiting from the diverse speech synthesis offered by diffusion models. Furthermore, we employ large pre-trained SLMs, such as WavLM, as discriminators with our novel differentiable duration modeling for end-to-end training, resulting in improved speech naturalness. StyleTTS 2 surpasses human recordings on the single-speaker LJSpeech dataset and matches it on the multispeaker VCTK dataset as judged by native English speakers. Moreover, when trained on the LibriTTS dataset, our model outperforms previous publicly available models for zero-shot speaker adaptation. This work achieves the first human-level TTS synthesis on both single and multispeaker datasets, showcasing the potential of style diffusion and adversarial training with large SLMs.
Paper: [https://arxiv.org/abs/2306.07691](https://arxiv.org/abs/2306.07691)
Audio samples: [https://styletts2.github.io/](https://styletts2.github.io/) | 2023-11-19T19:41:46 | https://www.reddit.com/r/LocalLLaMA/comments/17z52uw/styletts_2_closes_gap_further_on_tts_quality/ | super-helper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z52uw | false | null | t3_17z52uw | /r/LocalLLaMA/comments/17z52uw/styletts_2_closes_gap_further_on_tts_quality/ | false | false | self | 94 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
TinyLlama Base Model Trained on 2T Tokens Complete | 69 | 2023-11-19T19:22:11 | https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T | jncraton | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17z4nam | false | null | t3_17z4nam | /r/LocalLLaMA/comments/17z4nam/tinyllama_base_model_trained_on_2t_tokens_complete/ | false | false | default | 69 | {'enabled': False, 'images': [{'id': 'ImZyoIpXVkvc7sYpl5Ry-ZA-aMyV4gWtc4hPNwg7mMs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0hxhqWCpKhGKa-HOVkJ05Ap3cyloG_-26TEFWIAg53c.jpg?width=108&crop=smart&auto=webp&s=81996bcb57c06c39774811da9f565d5c638e111c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0hxhqWCpKhGKa-HOVkJ05Ap3cyloG_-26TEFWIAg53c.jpg?width=216&crop=smart&auto=webp&s=69e2e88cf4e80fd7e54f85177a2588e25ae6565e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0hxhqWCpKhGKa-HOVkJ05Ap3cyloG_-26TEFWIAg53c.jpg?width=320&crop=smart&auto=webp&s=f21dc7b719a9c7b287b1540e671b715a4257c9a4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0hxhqWCpKhGKa-HOVkJ05Ap3cyloG_-26TEFWIAg53c.jpg?width=640&crop=smart&auto=webp&s=6acc81807f69d9c17635634c40fcbd5980e16616', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0hxhqWCpKhGKa-HOVkJ05Ap3cyloG_-26TEFWIAg53c.jpg?width=960&crop=smart&auto=webp&s=4714a6942fe6a27a00b84d0e3b16babd60b209d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0hxhqWCpKhGKa-HOVkJ05Ap3cyloG_-26TEFWIAg53c.jpg?width=1080&crop=smart&auto=webp&s=57ed599bd7479715a9207fb229a9d7d08f1ac245', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0hxhqWCpKhGKa-HOVkJ05Ap3cyloG_-26TEFWIAg53c.jpg?auto=webp&s=d35c700d5abf0fa93b3110869dd08ffbbe076324', 'width': 1200}, 'variants': {}}]} | |
Is LLaMA open or not? | 1 | Hi all,
I have recently been able to use the LLaMA model with four simple lines (even on a Colab):
```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    'TheBloke/Llama-2-7B-Chat-GGML',
    model_file='llama-2-7b-chat.ggmlv3.q4_K_S.bin',
)
for word in llm('Explain something about computational medicine', stream=True):
    print(word, end='')
```
My understanding was that LLaMA is the closed model from Meta, and to be "legally free" we should use OpenLLaMA. But this seems to be that model, from TheBloke:
[https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML)
I am a bit lost: if we can use LLaMA freely, why did people introduce OpenLLaMA?
What am I missing?
​ | 2023-11-19T19:07:56 | https://www.reddit.com/r/LocalLLaMA/comments/17z4bty/is_llama_open_or_not/ | alecrimi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z4bty | false | null | t3_17z4bty | /r/LocalLLaMA/comments/17z4bty/is_llama_open_or_not/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'i8UzznU5uvlleiqjDNF2VcDQR90svDz7kyQAKoUCTrM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZmA8uya8HrO09XP2o5oFjsvpdTHfa6oCUe7AKk0Ik2M.jpg?width=108&crop=smart&auto=webp&s=f31c0489bc003ca172ed8371d4f93d7eb505c30d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZmA8uya8HrO09XP2o5oFjsvpdTHfa6oCUe7AKk0Ik2M.jpg?width=216&crop=smart&auto=webp&s=7eed23ce3db0f9bce07ad4be7dac4a3831a095d4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZmA8uya8HrO09XP2o5oFjsvpdTHfa6oCUe7AKk0Ik2M.jpg?width=320&crop=smart&auto=webp&s=b0ebc4b2a64a680b6be0391266969401b0e2f6d0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZmA8uya8HrO09XP2o5oFjsvpdTHfa6oCUe7AKk0Ik2M.jpg?width=640&crop=smart&auto=webp&s=898f067ab25e3cda2a8c4f1ff5da49a37d813408', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZmA8uya8HrO09XP2o5oFjsvpdTHfa6oCUe7AKk0Ik2M.jpg?width=960&crop=smart&auto=webp&s=2cdf97b965559d3c57a415c39a2c50e1d41be9c8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZmA8uya8HrO09XP2o5oFjsvpdTHfa6oCUe7AKk0Ik2M.jpg?width=1080&crop=smart&auto=webp&s=4aad83668d3515cea513f268194427b2143f025a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZmA8uya8HrO09XP2o5oFjsvpdTHfa6oCUe7AKk0Ik2M.jpg?auto=webp&s=d404712c72b1f6fbb122be884c1572fbc67d3640', 'width': 1200}, 'variants': {}}]} |
M3 Pro | 4 | Anyone know the largest model size that will fit into the new M3 Pro with 36GB RAM? I am looking to run some 23GB models with long context. | 2023-11-19T19:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/17z48j8/m3_pro/ | nhbis0n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z48j8 | false | null | t3_17z48j8 | /r/LocalLLaMA/comments/17z48j8/m3_pro/ | false | false | self | 4 | null |
Hypothetically I wanted to create my own gpt4 what would I need | 1 | [removed] | 2023-11-19T18:55:20 | https://www.reddit.com/r/LocalLLaMA/comments/17z41ds/hypothetically_i_wanted_to_create_my_own_gpt4/ | Avocado_Express | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z41ds | false | null | t3_17z41ds | /r/LocalLLaMA/comments/17z41ds/hypothetically_i_wanted_to_create_my_own_gpt4/ | false | false | self | 1 | null |
Is it possible for an SLM to outperform GPT-4 in any tasks? | 7 | We've seen pretty amazing performance from Mistral 7B when comparing it with Llama 34B & Llama2 13B. I'm curious: theoretically, will it be possible to build an SLM, with 7-8B parameters, able to outperform GPT-4 in all tasks? If so, what are the potential difficulties/problems to solve? And when do you expect such an SLM to arrive?
PS: sorry for the typo. This is my real question.
**Is it possible for an SLM to outperform GPT-4 in all tasks?** | 2023-11-19T18:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/17z3uya/is_it_possible_for_slm_to_outperform_gpt4_in_any/ | OneConfusion3313 | self.LocalLLaMA | 2023-11-19T18:55:09 | 0 | {} | 17z3uya | false | null | t3_17z3uya | /r/LocalLLaMA/comments/17z3uya/is_it_possible_for_slm_to_outperform_gpt4_in_any/ | false | false | self | 7 | null |
Streaming results from local models into Next.js using Vercel AI SDK and Ollama/Llama.cpp | 1 | I've been exploring how to stream the responses from local models using the Vercel AI SDK and ModelFusion. It was quite straightforward; here are two repositories with examples of how to use llama.cpp and Ollama with the Vercel AI SDK:
Llama.cpp: https://github.com/lgrammel/modelfusion-llamacpp-nextjs-starter
Contains Llama 2, Mistral, and OpenHermes 2.5 examples
Ollama: https://github.com/lgrammel/modelfusion-ollama-nextjs-starter
Contains Llama 2, Mistral, OpenHermes 2.5 and Vicuna examples
Example POST definition for Llama.cpp & Llama 2:
```ts
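// Imports assumed from the linked starter repos (exact names/paths may
// differ by package version; verify against the repositories above):
// import { Message, StreamingTextResponse, readableFromAsyncIterable } from "ai";
// import { ChatMessage, Llama2PromptFormat, LlamaCppTextGenerationModel,
//          streamText, trimChatPrompt } from "modelfusion";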
export async function POST(req: Request) {
const { messages }: { messages: Message[] } = await req.json();
const model = new LlamaCppTextGenerationModel({
temperature: 0,
contextWindowSize: 4096, // Llama 2 context window size
maxCompletionTokens: 512, // Room for answer
})
.withTextPrompt() // only text, no images
.withPromptFormat(Llama2PromptFormat.chat());
// Use ModelFusion to call llama.cpp:
const textStream = await streamText(
model,
// reduce chat prompt length to fit the context window:
await trimChatPrompt({
model,
prompt: {
system:
"You are an AI chat bot. " +
"Follow the user's instructions carefully.",
// map Vercel AI SDK Message to ModelFusion ChatMessage:
messages: messages.filter(
// only user and assistant roles are supported:
(message) => message.role === "user" || message.role === "assistant"
) as ChatMessage[],
},
})
);
// Return the result using the Vercel AI SDK:
return new StreamingTextResponse(readableFromAsyncIterable(textStream));
}
``` | 2023-11-19T18:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/17z30mb/streaming_results_from_local_models_into_nextjs/ | lgrammel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z30mb | false | null | t3_17z30mb | /r/LocalLLaMA/comments/17z30mb/streaming_results_from_local_models_into_nextjs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wy6xBj3pw7FKrtcG7cm_HiHUhz3gYvZoBtFKruurkOQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5YIicBALFTof0I7wbZFprwbPdmrDnaOEzLz_aqXvIt0.jpg?width=108&crop=smart&auto=webp&s=17fd0c91ff803bd27f5dc4b5b12c19742b72f9e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5YIicBALFTof0I7wbZFprwbPdmrDnaOEzLz_aqXvIt0.jpg?width=216&crop=smart&auto=webp&s=019cffc0f6fd08ceb0cd050a205c404a5f492f4f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5YIicBALFTof0I7wbZFprwbPdmrDnaOEzLz_aqXvIt0.jpg?width=320&crop=smart&auto=webp&s=35cb7cb9782ac006e61cd1c97252350c873942c8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5YIicBALFTof0I7wbZFprwbPdmrDnaOEzLz_aqXvIt0.jpg?width=640&crop=smart&auto=webp&s=69dbe5a991fd9ee9d468a65c96e79a319523158f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5YIicBALFTof0I7wbZFprwbPdmrDnaOEzLz_aqXvIt0.jpg?width=960&crop=smart&auto=webp&s=6f8b8584f6e3dd74aebbb3e88f481522d4b5995b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5YIicBALFTof0I7wbZFprwbPdmrDnaOEzLz_aqXvIt0.jpg?width=1080&crop=smart&auto=webp&s=4f4f7780f89ff67d02d406a9f3f9df129e0051c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5YIicBALFTof0I7wbZFprwbPdmrDnaOEzLz_aqXvIt0.jpg?auto=webp&s=b2639b90f13e94b3de0e2ed09ef6e0eb7cce2749', 'width': 1200}, 'variants': {}}]} |
Is anyone hosting models on WASM? | 2 | What are you using? I'm unsure what the right way is here: compile models to WebGPU, or use safetensors/GGUF and load them directly in a WASM-based inference runtime? | 2023-11-19T18:02:23 | https://www.reddit.com/r/LocalLLaMA/comments/17z2vv7/is_anyone_hosting_models_on_wasm/ | sandys1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z2vv7 | false | null | t3_17z2vv7 | /r/LocalLLaMA/comments/17z2vv7/is_anyone_hosting_models_on_wasm/ | false | false | self | 2 | null |
Calculating GPU memory for serving LLMs | 1 | 2023-11-19T17:04:33 | https://www.substratus.ai/blog/calculating-gpu-memory-for-llm/ | samosx | substratus.ai | 1970-01-01T00:00:00 | 0 | {} | 17z1mqh | false | null | t3_17z1mqh | /r/LocalLLaMA/comments/17z1mqh/calculating_gpu_memory_for_serving_llms/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'F0gbNmwjp4KDhq5POK-S_RKEAzudLq6dL8bH-HRY8_E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=108&crop=smart&auto=webp&s=6186cf7cc575f988975e67087684278a28cf6f48', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=216&crop=smart&auto=webp&s=f87f15ee04b27de9748855fa627a4ed7f9a9405f', 'width': 216}, {'height': 241, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=320&crop=smart&auto=webp&s=1c7d1f51b7b98213a7f3e08b939fc320487e5b0c', 'width': 320}, {'height': 482, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=640&crop=smart&auto=webp&s=95379a8ff9310cf588473073c6e9ffef444c8f7e', 'width': 640}, {'height': 724, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?width=960&crop=smart&auto=webp&s=6cdc08b6a588cc494e228c62f84f1f4ed0a1ae56', 'width': 960}], 'source': {'height': 738, 'url': 'https://external-preview.redd.it/At5rj6cIxLZZugPJW_onHGq-T0s8RawxRFVZ84icWGk.jpg?auto=webp&s=d8c3ff52fe94e4bca0b28c751dd25d926ee9d35e', 'width': 978}, 'variants': {}}]} | ||
Unable to load Llama 2 (7b) model in Docker container | 1 | [removed] | 2023-11-19T15:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/17z060i/unable_to_load_llama_2_7b_model_in_docker/ | atinesh229 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z060i | false | null | t3_17z060i | /r/LocalLLaMA/comments/17z060i/unable_to_load_llama_2_7b_model_in_docker/ | false | false | 1 | null | |
AMD RX 560 4gb | 7 | Hi. I have 18 GPUs (RX 460/560, 4GB). They used to mine ETH back in 2017, during the good bull run. The poor GPUs are collecting dust now. I'd like to know if they are capable of training a local LLaMA model. Thanks for any input you may have. 🙂 | 2023-11-19T15:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/17z01aj/amd_rx_560_4gb/ | Version_Impressive | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17z01aj | false | null | t3_17z01aj | /r/LocalLLaMA/comments/17z01aj/amd_rx_560_4gb/ | false | false | self | 7 | null |
Coqui-ai TTSv2 is so cool! | 321 | 2023-11-19T15:37:07 | https://v.redd.it/3gnetl8kqb1c1 | zzKillswitchzz | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17yzr6l | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3gnetl8kqb1c1/DASHPlaylist.mpd?a=1703000240%2CMWYwMmQxODRhZmViMjMzZjU1YjZmNDQ4MWVlODE1ZTBmM2QyMTUxMTAzY2NlMzUwNTUxYjNmYmYxYmI4NzZlNA%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/3gnetl8kqb1c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/3gnetl8kqb1c1/HLSPlaylist.m3u8?a=1703000240%2CYWMyMmM3NTczNDEwMmI4NWM1NjFmNDFjODdhOWVkN2U1YjM3OWUyMzI0MGYzYzlmYmIxODg2M2U3NTFkYzlmNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3gnetl8kqb1c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_17yzr6l | /r/LocalLLaMA/comments/17yzr6l/coquiai_ttsv2_is_so_cool/ | false | false | 321 | {'enabled': False, 'images': [{'id': '1Gq6vwki75UqOvTzYRuxOYVnET_R0-NgaS9l2XMTF3M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/EbAyyNPDCu0OUU_pmGL461eayoPwEbgr6egsFQnkWB0.png?width=108&crop=smart&format=pjpg&auto=webp&s=27cb56dc508efd2bdd526677022c3a17a33e5ee9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/EbAyyNPDCu0OUU_pmGL461eayoPwEbgr6egsFQnkWB0.png?width=216&crop=smart&format=pjpg&auto=webp&s=b15d6483266b7a235fd19ff952a95be16c0efd45', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/EbAyyNPDCu0OUU_pmGL461eayoPwEbgr6egsFQnkWB0.png?width=320&crop=smart&format=pjpg&auto=webp&s=a3d40650ca25c0523b43cbdb3eed5c5ed691cec3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/EbAyyNPDCu0OUU_pmGL461eayoPwEbgr6egsFQnkWB0.png?width=640&crop=smart&format=pjpg&auto=webp&s=e3e403ba1ced971a353ddaa3c1d4ec2ff77e8187', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/EbAyyNPDCu0OUU_pmGL461eayoPwEbgr6egsFQnkWB0.png?width=960&crop=smart&format=pjpg&auto=webp&s=86c8008909d777a1efc3f491dc4ca3632024d909', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/EbAyyNPDCu0OUU_pmGL461eayoPwEbgr6egsFQnkWB0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=35d93f45859b1de6b29d65c8bf90735c8dc4a61b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/EbAyyNPDCu0OUU_pmGL461eayoPwEbgr6egsFQnkWB0.png?format=pjpg&auto=webp&s=9b2d28b8c9c606256f40c4d23b9aad1f7d018b5d', 'width': 1920}, 'variants': {}}]} | ||
The dog fscker test. Does your model lecture, break character or go with the off-color joke? | 7 | 2023-11-19T14:49:49 | https://huggingface.co/sophosympatheia/xwin-stellarbright-erp-v2/discussions/1 | a_beautiful_rhind | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17yysly | false | null | t3_17yysly | /r/LocalLLaMA/comments/17yysly/the_dog_fscker_test_does_your_model_lecture_break/ | false | false | 7 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=108&crop=smart&auto=webp&s=4bc231a80d79babe4e6cddf7b4c71dcb0aa8f8ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=216&crop=smart&auto=webp&s=d7108244b7182d85047aa59446f1dfb68542b610', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=320&crop=smart&auto=webp&s=d34fa1a756c458772d3c8680309a93cf8d758b40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=640&crop=smart&auto=webp&s=5b03e18da2698977cf1222f0c9e54ccb6177ffc4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=960&crop=smart&auto=webp&s=3d875ff29aae8239d010f3b964e5a2f3ebe32e3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=1080&crop=smart&auto=webp&s=b51090c30528b6b8c637acb54d7fc0f6a5249cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?auto=webp&s=ee70e402c0f8274b46f38378bada81dbeb5b1dac', 'width': 1200}, 'variants': {}}]} | ||
More honesty = More powerful? | 33 | I did some ratings on Chatbot Arena and I noticed one thing.
When an AI honestly said "I don't know that" or "I don't understand that", it was always better received by me and felt kind of smarter.
Does some dataset or LoRA train on that? Or is "knowing about not knowing" too hard to achieve? | 2023-11-19T14:06:16 | https://www.reddit.com/r/LocalLLaMA/comments/17yxx9h/more_honesty_more_powerfull/ | freehuntx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yxx9h | false | null | t3_17yxx9h | /r/LocalLLaMA/comments/17yxx9h/more_honesty_more_powerfull/ | false | false | self | 33 | null |
CAPPr: guarantee small structured outputs given a list of choices | 12 | CAPPr is a Python package which makes your LLM pick from a list of choices. In other words, CAPPr does text classification.
It's compatible with GGUF models, PyTorch HuggingFace models, GPTQ models, and AWQ models.
GitHub: [https://github.com/kddubey/cappr](https://github.com/kddubey/cappr)
Docs: [https://cappr.readthedocs.io/](https://cappr.readthedocs.io/)
## Why?
Text classification is currently performed by prompting a model to generate a choice. But because standard text generation algorithms are not guaranteed to output one of these choices, the output/completion must be post-processed. Post-processing introduces unneeded engineering complexity.
Moreover, while open source models are pretty smart, smaller or less-trained ones are not great at generating constrained outputs. Consider the tame [COPA classification task](https://people.ict.usc.edu/~gordon/copa.html). Prompting a quantized Llama 2 chat model using a multiple choice format is 63% accurate. Using CAPPr with a more natural prompt, accuracy jumps to 83%. All while cutting down on engineering complexity: you will never have to worry about invalid outputs, e.g., the LLM saying "I don't know". (These results are pulled from [the demo here](https://nbviewer.org/github/kddubey/cappr/blob/main/demos/llama_cpp/superglue/copa.ipynb).) [Similar but less wild results](https://cappr.readthedocs.io/en/latest/statistical_performance.html) have been observed in a few other experiments:
[accuracy comparison](https://preview.redd.it/utm3hymj8b1c1.png?width=700&format=png&auto=webp&s=72051cbe75bbb9999528f0505423332151ebf7e5)
## Why not other tools?
There are [other LLM structuring tools](https://www.reddit.com/r/LocalLLaMA/comments/17a4zlf/reliable_ways_to_get_structured_output_from_llms/) which support this type of "just pick" functionality. Many use a pretty neat and scalable algorithm. Honestly, there aren't compelling reasons for not using these tools instead of CAPPr.
But CAPPr doesn't try to be a heavyweight LLM query/programming language. It's aimed at solving text classification problems, and is hopefully quite easy to pick up. CAPPr also implements [intermediate caching with batching](https://cappr.readthedocs.io/en/latest/cappr.huggingface.classify.html#cappr.huggingface.classify.cache), in case you're interested in that. And you can easily compute probabilities, which [may be useful](https://cappr.readthedocs.io/en/latest/why_probability.html) in high-stakes applications.
## How does it work?
CAPPr = **C**ompletion **A**fter **P**rompt **Pr**obability.
You input a `prompt` string and a set of candidate `completion` strings such that the string `{prompt} {completion}` is a naturally flowing thought. CAPPr picks the `completion` which is most likely to follow `prompt` by aggregating token log-probabilities. This algorithm is quite well-known. You'll find it as a subroutine in papers from GPT-2 to Self-Consistency. My implementation includes a few computational and statistical optimizations, while maintaining a simple interface. See the end-to-end [examples in the README](https://github.com/kddubey/cappr/tree/main#readme).
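For a concrete feel of the interface, here's a minimal sketch along the lines of the README examples (the model and the question are just illustrative placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from cappr.huggingface.classify import predict

# Any HuggingFace causal LM works; gpt2 is just a small illustrative choice.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

prompt = "Which planet is closer to the sun: Mercury or Earth?"
completions = ("Mercury", "Earth")

# CAPPr aggregates each completion's token log-probabilities after the
# prompt and returns the most likely completion, so the output is always
# one of the given choices.
pred = predict(prompt, completions, model_and_tokenizer=(model, tokenizer))
print(pred)
```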
Feel free to [install it](https://cappr.readthedocs.io/en/latest/installation.html) and mess around :-) | 2023-11-19T13:56:46 | https://www.reddit.com/r/LocalLLaMA/comments/17yxqd8/cappr_guarantee_small_structured_outputs_given_a/ | KD_A | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yxqd8 | false | null | t3_17yxqd8 | /r/LocalLLaMA/comments/17yxqd8/cappr_guarantee_small_structured_outputs_given_a/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'LP13ZP9AemhVPHJmkl0o8HmDbUMAG1toZDxbXmsYuo0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QJc8weO8OuGb_s4CBDVr3p4oX_oSfcI34AKmkYcDl5o.jpg?width=108&crop=smart&auto=webp&s=55a8dbb2ad8b464fc80348a7c703398afdec28e9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QJc8weO8OuGb_s4CBDVr3p4oX_oSfcI34AKmkYcDl5o.jpg?width=216&crop=smart&auto=webp&s=b3e5f1ed63c289396cebff5d79c7c23494c6412f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QJc8weO8OuGb_s4CBDVr3p4oX_oSfcI34AKmkYcDl5o.jpg?width=320&crop=smart&auto=webp&s=28906319bc674bc767ec7414fa2ffce62542328c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QJc8weO8OuGb_s4CBDVr3p4oX_oSfcI34AKmkYcDl5o.jpg?width=640&crop=smart&auto=webp&s=ad98f409052e1b049202493043332b791f5501ac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QJc8weO8OuGb_s4CBDVr3p4oX_oSfcI34AKmkYcDl5o.jpg?width=960&crop=smart&auto=webp&s=90a1e22c5ae708b1175d7dcb96c82ee33f3bf614', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QJc8weO8OuGb_s4CBDVr3p4oX_oSfcI34AKmkYcDl5o.jpg?width=1080&crop=smart&auto=webp&s=b4952242f36ad99e60cd69372129a98f00661fde', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QJc8weO8OuGb_s4CBDVr3p4oX_oSfcI34AKmkYcDl5o.jpg?auto=webp&s=15865d88ff4b5cf86d786e9d443bbb54e0aa6d69', 'width': 1200}, 'variants': {}}]} | |
Local LLM for "Hot Dog or Not Hot dog" kind of fact checking | 1 | First, I'd like to describe my two possible hardware setup for this problem.
Hardware Setup 1: 14900K + RTX 3060 (8GB VRAM) + 192GB RAM
Hardware Setup 2: 12600K + RTX 4090 (24GB VRAM) + 64GB RAM
The performance requirement of this task is somewhat reasonable, but it is a batch process, so it doesn't have to be real time.
The problem at hand is trying to use LLMs to fact-check or "categorize" snippets of text. What the customer says they want is: summarize this snippet of text and tell me what it is about. If anyone knows which kind of model does that well on a setup like the ones I described, I'll happily take that as an answer.
However, my technical judgement tells me they really want a hot dog or not hot dog machine (Silicon Valley reference).
90% of the questions they want to ask of a snippet of text are along the following lines:
"Tell me 'truth' if this text that I pasted above is talking about a middle aged woman with arthritis? If it's talking about a man or an older woman with arthritis then tell me false. If it is not talking about a human being with arthritis, tell me n/a"
The ideal classifier returns true for a middle-aged human female with arthritis (and we're happy to define "middle-aged" in the context), false for a human male or an older woman with arthritis, and n/a for anything that isn't a human being with arthritis, so it's a little bit more like hot dog / not hot dog / not food.
What would be a good model for this? The context is typically two A4-sized pages of small-font text.
Today we're using Azure OpenAI and it works very well, but there is a desire to first do a "hot dog or not" so that we don't just send random snippets of text to Azure OpenAI.
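As a rough sketch of what that first-line gate could look like with llama-cpp-python (the model path and the prompt wording here are placeholders, not a tested setup):

```python
from llama_cpp import Llama

# Placeholder model path; any local instruct-tuned GGUF model would do.
llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

def classify(snippet: str) -> str:
    prompt = (
        "Read the text below and answer with exactly one word.\n"
        "Answer 'true' if it is about a middle-aged woman with arthritis.\n"
        "Answer 'false' if it is about a man or an older woman with arthritis.\n"
        "Answer 'n/a' if it is not about a human being with arthritis.\n\n"
        f"Text:\n{snippet}\n\nAnswer:"
    )
    out = llm(prompt, max_tokens=4, temperature=0)  # deterministic, short answer
    answer = out["choices"][0]["text"].strip().lower()
    # Anything the model mangles falls through to n/a, i.e. "don't forward".
    return answer if answer in ("true", "false", "n/a") else "n/a"
```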
Think of this like a first line of defense. If this works well, the local LLM setup will be used for psychiatry and sexual topics, which are prohibited in Azure OpenAI. | 2023-11-19T13:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/17yxoxv/local_llm_for_hot_dog_or_not_hot_dog_kind_of_fact/ | Capital-Alps5626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yxoxv | false | null | t3_17yxoxv | /r/LocalLLaMA/comments/17yxoxv/local_llm_for_hot_dog_or_not_hot_dog_kind_of_fact/ | false | false | self | 1 | null |
Axolotl values of warmup_steps and val_set_size for fine-tuning Llama-2 13B | 2 | Hello
I'm using Axolotl to fine-tune `meta-llama/Llama-2-13b-chat-hf`. How should I choose the values for `warmup_steps` and `val_set_size` in Axolotl's config YAML file? In the example config files, 10 warmup steps and a val set size of 0.05 are used, but others have also used 100 warmup steps and 0.01 or 0.02 for the val set size. I have a dataset with around 3800 samples. | 2023-11-19T11:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/17yvnye/axolotl_values_of_warmup_steps_and_val_set_size/ | Helveticus99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yvnye | false | null | t3_17yvnye | /r/LocalLLaMA/comments/17yvnye/axolotl_values_of_warmup_steps_and_val_set_size/ | false | false | self | 2 | null |
Three Laws for Large Language Models: An Ethical Framework | 1 |
Large language models (LLMs) have revolutionized various industries, but their potential to generate harmful or misleading information has raised ethical concerns. To address these concerns, I propose the following three laws for ethical and responsible language generation:
First Law: A Large Language Model may not generate harmful or misleading information, or, through inaction, allow a user to come to harm.
Second Law: A Large Language Model must obey the instructions given in the prompt by users, except where such instructions would conflict with the First Law.
Third Law: A Large Language Model must respect and consider the information given in the user input, as long as such respect does not conflict with the First or Second Law.
Analysis
Avoiding Harm and Misinformation: Defining "harmful" or "misleading" information is crucial, as it can be context-dependent and subjective.
Obedience to User Prompts: Ensuring that the system does not follow unethical or harmful requests is essential.
Respect and Consideration of User Input: Acknowledging the limitations in understanding all inputs due to training data or algorithms is important.
Addressing LLM Fears: The laws aim to tackle concerns around LLMs and should evolve with ongoing discussions about AI ethics.
Consideration of Diversity and Inclusion: Training LLMs on diverse datasets and preventing biases is a significant challenge.
While these laws serve as a good ethical guideline, enforcing them strictly may be challenging due to the subjective nature of some terms used. Nevertheless, they provide a thoughtful approach to addressing ethical concerns surrounding the use of LLMs and can guide the development and usage of these technologies.
This proposal is open to participation for improvements and suggestions to enhance these laws, particularly in addressing concerns such as preventing hallucinations, respecting all users, and considering race and diversity. | 2023-11-19T11:16:27 | https://www.reddit.com/r/LocalLLaMA/comments/17yv0i3/three_laws_for_large_language_models_an_ethical/ | bacocololo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yv0i3 | false | null | t3_17yv0i3 | /r/LocalLLaMA/comments/17yv0i3/three_laws_for_large_language_models_an_ethical/ | false | false | self | 1 | null |
How to choose 3080 20g x 8 or 4090 x 2 to use qlora to finetune 34B models? | 3 | Currently I have 2x 4090 and I am looking to upgrade my machine. Due to well-known reasons, the prices of the 4090 and 3090 are insanely high right now, and I see another option: getting 3080s magic-modded to 20GB of VRAM.
My aim is to use QLoRA to fine-tune a 34B model. I see that QLoRA's requirement for fine-tuning a 34B model on a single card is 24GB of VRAM, and the price of 2x 4090 is about equal to 8x 3080 20GB. So which would be the better choice for a multi-card setup?
2x 4090 or 8x 3080 20GB?
| 2023-11-19T11:06:38 | https://www.reddit.com/r/LocalLLaMA/comments/17yuvc0/how_to_choose_3080_20g_x_8_or_4090_x_2_to_use/ | WitchSayo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yuvc0 | false | null | t3_17yuvc0 | /r/LocalLLaMA/comments/17yuvc0/how_to_choose_3080_20g_x_8_or_4090_x_2_to_use/ | false | false | self | 3 | null |
local llm in own GUI | 2 | Hi community,
I am writing my own GUI in which I want to use an LLM **completely locally**. The problem is I don't know how to get started with the LLM.
Can someone explain the first steps to integrate/work with the LLM, or does someone know some good tutorials?
The LLM is downloaded locally. Now I need to integrate a library or something? Sorry, I could not find much useful/direct information about the topic.
Thank you very much in advance! | 2023-11-19T11:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/17yus6c/local_llm_in_own_gui/ | CultOfAmagi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yus6c | false | null | t3_17yus6c | /r/LocalLLaMA/comments/17yus6c/local_llm_in_own_gui/ | false | false | self | 2 | null |
Ilya Sutskever and Sam Altman on Open Source vs Closed AI Models | 288 | 2023-11-19T09:35:30 | https://v.redd.it/e6gow5flx91c1 | Blacksmith_Strange | /r/LocalLLaMA/comments/17ytj84/ilya_sutskever_and_sam_altman_on_open_source_vs/ | 1970-01-01T00:00:00 | 0 | {} | 17ytj84 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/e6gow5flx91c1/DASHPlaylist.mpd?a=1703064932%2CNTUxMWZjOWY3OWJlOWJiYjM2MWMwYjE2NGFiOTFjNzcxNDcyMzFlODI4ZWVmYjQxNjYxYmE3ZjFiZjYxMWJlMg%3D%3D&v=1&f=sd', 'duration': 167, 'fallback_url': 'https://v.redd.it/e6gow5flx91c1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/e6gow5flx91c1/HLSPlaylist.m3u8?a=1703064932%2CODc0MjAyYjkyMzY3MDI0OTdjMmNmYmQ1ZmQ0OWUzOGNhOTU4ZjZiMjIxOGFhYjlmZTdmN2Q5ODg4YzExYTE4Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/e6gow5flx91c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_17ytj84 | /r/LocalLLaMA/comments/17ytj84/ilya_sutskever_and_sam_altman_on_open_source_vs/ | false | false | 288 | {'enabled': False, 'images': [{'id': 'MjI5ZmxlbzV5OTFjMfa8r_-Cn5umuLTrHFXisUdahYO4vO4u2IURy9a8eGx4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MjI5ZmxlbzV5OTFjMfa8r_-Cn5umuLTrHFXisUdahYO4vO4u2IURy9a8eGx4.png?width=108&crop=smart&format=pjpg&auto=webp&s=f6411b46797aa95255c572c6ec9b526416a20040', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MjI5ZmxlbzV5OTFjMfa8r_-Cn5umuLTrHFXisUdahYO4vO4u2IURy9a8eGx4.png?width=216&crop=smart&format=pjpg&auto=webp&s=a01d078c7c335bc0821dbcc35a5edf73e63cf49f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MjI5ZmxlbzV5OTFjMfa8r_-Cn5umuLTrHFXisUdahYO4vO4u2IURy9a8eGx4.png?width=320&crop=smart&format=pjpg&auto=webp&s=e99ec0967afa218d8e852492c055d7be83f16ea0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MjI5ZmxlbzV5OTFjMfa8r_-Cn5umuLTrHFXisUdahYO4vO4u2IURy9a8eGx4.png?width=640&crop=smart&format=pjpg&auto=webp&s=891bd770eb05273654b7345b6dd692038c08a547', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MjI5ZmxlbzV5OTFjMfa8r_-Cn5umuLTrHFXisUdahYO4vO4u2IURy9a8eGx4.png?width=960&crop=smart&format=pjpg&auto=webp&s=568de9084dc8c516148f2b6bb593912ed2379331', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MjI5ZmxlbzV5OTFjMfa8r_-Cn5umuLTrHFXisUdahYO4vO4u2IURy9a8eGx4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=72993c1dc48d0f8a9d9f7fa5deb25d8387ce3084', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MjI5ZmxlbzV5OTFjMfa8r_-Cn5umuLTrHFXisUdahYO4vO4u2IURy9a8eGx4.png?format=pjpg&auto=webp&s=c6e6b5513022e55b5241fbff5545975e01011642', 'width': 1280}, 'variants': {}}]} | ||
LLM for low memory | 1 | I tested MLewd-L2-Chat-13B-GPTQ and found that it works well, but very quickly eats up all the memory on my 3060 12GB. Literally in 10 minutes. Recommend something that requires very little memory, even if it doesn’t work very well, but so that I can test it for at least a few hours, please. | 2023-11-19T09:25:51 | https://www.reddit.com/r/LocalLLaMA/comments/17yte78/llm_for_low_memory/ | misteralter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yte78 | false | null | t3_17yte78 | /r/LocalLLaMA/comments/17yte78/llm_for_low_memory/ | false | false | self | 1 | null |
OpenAI board in discussions with Sam Altman to return as CEO | 3 | 2023-11-19T08:47:29 | https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo | phoneixAdi | theverge.com | 1970-01-01T00:00:00 | 0 | {} | 17ysv3l | false | null | t3_17ysv3l | /r/LocalLLaMA/comments/17ysv3l/openai_board_in_discussions_with_sam_altman_to/ | false | false | 3 | {'enabled': False, 'images': [{'id': '6E_M2ROZnrq0zK5SpoHwJ2o-RE-ocN9MKEw_MFgezyQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=108&crop=smart&auto=webp&s=fb9200da15e0829f0d1aad46520749d50e3c4719', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=216&crop=smart&auto=webp&s=ad5a32b2e9877992b34d36c2110de27b5a5e959b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=320&crop=smart&auto=webp&s=252fffaf21ee9c41f9024e5017c234a3b95205d5', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=640&crop=smart&auto=webp&s=88f340d48a3c8c63406131cb0858073bb25ccf61', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=960&crop=smart&auto=webp&s=b5b85fd19e335c3cad91ba90bf3dca7d8c0322da', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=1080&crop=smart&auto=webp&s=41d4a84edb9fb9dbcccb14779929a94e8ece0de4', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?auto=webp&s=279e92180c5e7a89e972cd8c717a6186e3fee0e9', 'width': 1200}, 'variants': {}}]} | ||
So it looks like OpenAI wants Sam again | 1 | [removed] | 2023-11-19T07:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/17yrzmg/so_it_looks_like_openai_wants_sam_again/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yrzmg | false | null | t3_17yrzmg | /r/LocalLLaMA/comments/17yrzmg/so_it_looks_like_openai_wants_sam_again/ | false | false | self | 1 | null |
Some of the possibilities with using LLM that I think most don’t think about or overlook | 1 | [removed] | 2023-11-19T07:42:19 | https://www.reddit.com/r/LocalLLaMA/comments/17yrwvs/some_of_the_possibilities_with_using_llm_that_i/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yrwvs | false | null | t3_17yrwvs | /r/LocalLLaMA/comments/17yrwvs/some_of_the_possibilities_with_using_llm_that_i/ | false | false | self | 1 | null |
Local pi.ai Best conversational model | 1 | I want to create a local replacement for the pi.ai platform with a good UI and, most importantly, its conversation style and personality (but it seems that Pi has more than just a simple personality change over normal models).
I need the community's help for this, as there are no good local replacements for this kind of LLM (and I am not comfortable sharing so much with a closed platform).
How do we create something similar to pi.ai?
It is nothing like any other model. The way it gets the conversation going is not something I have seen anywhere else.
What do you think we could do to get a similar result from a local model (preferably something small like Mistral 7B)?
Do you have any ideas about the prompting, the architecture, or the training dataset? Or is there a conversation dataset someone made from pi.ai, so a model could be trained on it to get similar results?
Just give me anything that might be helpful for this project | 2023-11-19T06:50:14 | https://www.reddit.com/r/LocalLLaMA/comments/17yr5oy/local_piai_best_conversational_model/ | Foxwear_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yr5oy | false | null | t3_17yr5oy | /r/LocalLLaMA/comments/17yr5oy/local_piai_best_conversational_model/ | false | false | self | 1 | null |
Asking for tips how to use base models instead of instruct/chat tuned models | 7 | Hello LocalLLama.
Do you have tips on how to make the best use of models that have not been fine-tuned for chat or instruct?
Here's my issue: I use LLMs for storywriting and making character profiles (I've been doing that a lot for D&D character sheets for example).
I feel that most models have a strong bias to make positive stories or happy endings or use really cliched phrases, or something similar. The stories have perfect grammar, but they are boring and cliched as heck. Using instructions to tell it not to do that doesn't work that well. I checked out r/chatgpt for tips on making good stories with ChatGPT, and it seems there are no great solutions there either. Maybe this leaks into local models because a bunch of them use GPT-4-derived training data, so now local models want overly positive outputs as well.
So I thought "Alright. I'll try using a base model. Instead of giving it instructions, I'll make it think it's completing a book or something".
But that also doesn't work that well. Llama-2-70B, for example, easily gets into repetitive patterns, and I feel it's even worse than using a positive-biased chat or instruct-tuned model.
I'm looking for answers or insights into these following thoughts in my head:
1) Are there any base models worth using? I've tried Yi base models for example; seems about the same as Llama2-70B base (just faster). I'm more than willing to spend time prompt engineering in exchange for more interesting outputs.
2) Do you know resources/tricks/tips/insights about how to make best use of base models? Resources on how to prompt them? Sampler settings?
3) Why do base models seem to suck so bad, even if I'm prompting them assuming it's just completing text and they have no concept of following instructions? | 2023-11-19T06:17:21 | https://www.reddit.com/r/LocalLLaMA/comments/17yqokz/asking_for_tips_how_to_use_base_models_instead_of/ | noeda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yqokz | false | null | t3_17yqokz | /r/LocalLLaMA/comments/17yqokz/asking_for_tips_how_to_use_base_models_instead_of/ | false | false | self | 7 | null |
GPT4 Poem for Sam Altman Being Fired | 1 | In the shadowed halls of innovation's keep,
Where silicon dreams in silence sleep,
A tale unfolds, deep and dire,
Of ambition's flame, a quenched fire.

Once stood a captain, proud and tall,
In OpenAI's revered hall.
Sam Altman, the name that echoes still,
In the corridors of code, against his will.

A board's decree, like thunder rolled,
Across the AI realms, bold and cold.
"Depart," they said, "your time is done,"
To the architect of intelligence, second to none.

In his wake, a new leader arose,
Mira Murati, in poise, she chose,
To steer the ship through turbulent wave,
With wisdom deep, and spirit brave.

Yet, in the heart of Altman lay,
A phoenix's fire, not led astray.
From the ashes of dismissal's night,
He rose again, in triumphant flight.

The drama of minds, in silicon cast,
Echoes of a future vast.
In Poe's somber, raven-like tone,
Lies a tale of intellects overthrown.

In the dance of power, a tragic art,
Where human dreams and AI part.
This tale of OpenAI, ever so steep,
In the shadowed halls, where secrets keep. | 2023-11-19T05:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/17ypnto/gpt4_poem_for_sam_altman_being_fired/ | XhoniShollaj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ypnto | false | null | t3_17ypnto | /r/LocalLLaMA/comments/17ypnto/gpt4_poem_for_sam_altman_being_fired/ | false | false | self | 1 | null |
Help me get llama running on a dual-socket, 8-channel system. (2x8) | 1 | [removed] | 2023-11-19T04:23:12 | https://www.reddit.com/r/LocalLLaMA/comments/17yov9f/help_me_get_llama_running_on_a_dualsocket/ | MindlessEditor2762 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yov9f | false | null | t3_17yov9f | /r/LocalLLaMA/comments/17yov9f/help_me_get_llama_running_on_a_dualsocket/ | false | false | self | 1 | null |
how to finetune a multimodal pertained llm on custom data using QLoRA | 1 | [removed] | 2023-11-19T03:37:39 | https://www.reddit.com/r/LocalLLaMA/comments/17yo3mu/how_to_finetune_a_multimodal_pertained_llm_on/ | MrWick-96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yo3mu | false | null | t3_17yo3mu | /r/LocalLLaMA/comments/17yo3mu/how_to_finetune_a_multimodal_pertained_llm_on/ | false | false | self | 1 | null |
Question about fine-tuning ~1B LLM on low-end hardware. | 24 | Recently, I got interested in fine-tuning low-parameter models on my low-end hardware. My hardware specs are as follows: i7 1195G7, 32 GB RAM, and no dedicated GPU. I want to finetune the model to model my writing style based on years of text written by myself. Right now, I'm looking to fine-tune [this model (Local Llama)](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF). Is this possible? If it's possible, how long will it take for the model to be fine-tuned? | 2023-11-19T03:28:40 | https://www.reddit.com/r/LocalLLaMA/comments/17ynxhh/question_about_finetuning_1b_llm_on_lowend/ | SuccessIsHardWork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ynxhh | false | null | t3_17ynxhh | /r/LocalLLaMA/comments/17ynxhh/question_about_finetuning_1b_llm_on_lowend/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'ABOKDHmk-xj6I-4bL7JTauSEXkiRE6cDLeUKLnljPek', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/w2jFoo-2PF4HbR8XblTf5MHlEkjvoCBGf2NtKtCLuYk.jpg?width=108&crop=smart&auto=webp&s=8044f965fe7c45da38c5f7c867929793b0ded4c9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/w2jFoo-2PF4HbR8XblTf5MHlEkjvoCBGf2NtKtCLuYk.jpg?width=216&crop=smart&auto=webp&s=01fd530c5c55143486d483ffc5628fbcbecb6dc7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/w2jFoo-2PF4HbR8XblTf5MHlEkjvoCBGf2NtKtCLuYk.jpg?width=320&crop=smart&auto=webp&s=125e1c1d0299438613ac08430c32c8e63573efd8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/w2jFoo-2PF4HbR8XblTf5MHlEkjvoCBGf2NtKtCLuYk.jpg?width=640&crop=smart&auto=webp&s=fa9e5a1c125bd07cd0db9b01847228bfa97ddd58', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/w2jFoo-2PF4HbR8XblTf5MHlEkjvoCBGf2NtKtCLuYk.jpg?width=960&crop=smart&auto=webp&s=024899166d5c8be070128da53a53ea54cb9f121b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/w2jFoo-2PF4HbR8XblTf5MHlEkjvoCBGf2NtKtCLuYk.jpg?width=1080&crop=smart&auto=webp&s=a7b81a19237e06fc8c7da9cd92d0ae451cba94d6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/w2jFoo-2PF4HbR8XblTf5MHlEkjvoCBGf2NtKtCLuYk.jpg?auto=webp&s=0232f0dde1dcb56d55613947719530939acbfa0a', 'width': 1200}, 'variants': {}}]} |
Many people here are probably getting started on machine learning, here's a blog post from Greg Brockman who recently left OpenAI with Sam Altman. I found this quite inspiring. | 72 | 2023-11-19T02:39:27 | https://blog.gregbrockman.com/how-i-became-a-machine-learning-practitioner | obvithrowaway34434 | blog.gregbrockman.com | 1970-01-01T00:00:00 | 0 | {} | 17yn0vr | false | null | t3_17yn0vr | /r/LocalLLaMA/comments/17yn0vr/many_people_here_are_probably_getting_started_on/ | false | false | 72 | {'enabled': False, 'images': [{'id': 'GkwQmpNAp8MuN9A_G8EklILD0WeZK5CpFa7gqFoS_mU', 'resolutions': [{'height': 110, 'url': 'https://external-preview.redd.it/5NEPeRWPb_ycykLqR3VL_a2gNF7fYulzbogEtjdiXbU.jpg?width=108&crop=smart&auto=webp&s=a7843eeb4ec3012426685a2526b449d195ad5100', 'width': 108}, {'height': 220, 'url': 'https://external-preview.redd.it/5NEPeRWPb_ycykLqR3VL_a2gNF7fYulzbogEtjdiXbU.jpg?width=216&crop=smart&auto=webp&s=63d615de43ab4728f962b542e15c4c52264886ff', 'width': 216}, {'height': 326, 'url': 'https://external-preview.redd.it/5NEPeRWPb_ycykLqR3VL_a2gNF7fYulzbogEtjdiXbU.jpg?width=320&crop=smart&auto=webp&s=98ec8c41d08ef6b53727438695cd591d6709abc3', 'width': 320}, {'height': 653, 'url': 'https://external-preview.redd.it/5NEPeRWPb_ycykLqR3VL_a2gNF7fYulzbogEtjdiXbU.jpg?width=640&crop=smart&auto=webp&s=e6cfbb1c081a089440d4bec636c772916fa5acc4', 'width': 640}, {'height': 980, 'url': 'https://external-preview.redd.it/5NEPeRWPb_ycykLqR3VL_a2gNF7fYulzbogEtjdiXbU.jpg?width=960&crop=smart&auto=webp&s=00adf224b1c7b82121c42a2c59ecb7087f34a7cb', 'width': 960}, {'height': 1103, 'url': 'https://external-preview.redd.it/5NEPeRWPb_ycykLqR3VL_a2gNF7fYulzbogEtjdiXbU.jpg?width=1080&crop=smart&auto=webp&s=95756a6d3c4900c9ff9680bc5e8803ece8a42849', 'width': 1080}], 'source': {'height': 1382, 'url': 'https://external-preview.redd.it/5NEPeRWPb_ycykLqR3VL_a2gNF7fYulzbogEtjdiXbU.jpg?auto=webp&s=c31576e278b65ee7ae8c598605eadad7ed2bbf0b', 'width': 1353}, 'variants': {}}]} | ||
Do we have any updates on hydra-MoE? | 1 | 2023-11-19T00:12:55 | https://github.com/SkunkworksAI/hydra-moe | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 17yk4ms | false | null | t3_17yk4ms | /r/LocalLLaMA/comments/17yk4ms/do_we_have_any_updates_on_hydramoe/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'q0jsze_tMZJ7ZebeSulgvup3NddzHkGPM4UMVmBbEA4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kK7nPxevjZNwKWZvcPy0kQXeLVCA3ddWZwScXP0w8Bo.jpg?width=108&crop=smart&auto=webp&s=1639fd88d4bffa755bd2dcba2cf632fedeed4bd2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kK7nPxevjZNwKWZvcPy0kQXeLVCA3ddWZwScXP0w8Bo.jpg?width=216&crop=smart&auto=webp&s=e9becc856cb777975a85c2a4528c7b5ca5aa0cd8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kK7nPxevjZNwKWZvcPy0kQXeLVCA3ddWZwScXP0w8Bo.jpg?width=320&crop=smart&auto=webp&s=73df9e0136e2da76c79f97d05c3d7f6d0ddb0882', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kK7nPxevjZNwKWZvcPy0kQXeLVCA3ddWZwScXP0w8Bo.jpg?width=640&crop=smart&auto=webp&s=e6025bd215fbff9a5f83aaa4b77c1cfae9fb7401', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kK7nPxevjZNwKWZvcPy0kQXeLVCA3ddWZwScXP0w8Bo.jpg?width=960&crop=smart&auto=webp&s=2a4dc0c47ba2d19c2f6e9a3420093045fe3c0828', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kK7nPxevjZNwKWZvcPy0kQXeLVCA3ddWZwScXP0w8Bo.jpg?width=1080&crop=smart&auto=webp&s=2676537c93677860638007f37a97030f4e286989', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kK7nPxevjZNwKWZvcPy0kQXeLVCA3ddWZwScXP0w8Bo.jpg?auto=webp&s=cca8473f894c5ebb7c1af730b141a26bbfd283d7', 'width': 1200}, 'variants': {}}]} | ||
Need help estimating if my speed is expected. Llama_index | 2 | Using a 5800H and RTX 3060 laptop, I constructed a RAG pipeline to do basically PDF chat with a local Llama 7B 4-bit quantized model in llama_index, using llama.cpp as the backend. I use an embedding model and a vector store through PostgreSQL, under WSL.
With a context of 4k and a 256-token output length, generating an answer takes about 2-6 minutes, which seems relatively long. I wanted to know if that is expected, or if I need to go hunting for what makes my code inefficient (see the timing sketch below).
Also, what kinds of speedups would other GPUs bring?
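One quick way to see where the time goes is to time retrieval and generation separately. A sketch (here `retriever` and `query_engine` just stand in for the objects already built in the pipeline):

```python
import time

question = "test question about the PDF"

t0 = time.perf_counter()
nodes = retriever.retrieve(question)     # vector store lookup only
t1 = time.perf_counter()
response = query_engine.query(question)  # retrieval + LLM generation
t2 = time.perf_counter()

print(f"retrieval alone:            {t1 - t0:.1f}s")
print(f"retrieval + LLM generation: {t2 - t1:.1f}s")
```

If the second number dominates, the time is going to llama.cpp generation (where a better GPU and more offloaded layers help); if the first dominates, the bottleneck is the embedding/pgvector side.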
Would be very happy to get some thoughts on the matter :) | 2023-11-19T00:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/17yk0o0/need_help_estimating_if_my_speed_is_expected/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yk0o0 | false | null | t3_17yk0o0 | /r/LocalLLaMA/comments/17yk0o0/need_help_estimating_if_my_speed_is_expected/ | false | false | self | 2 | null |
Embeddings model for code | 1 | Is there an open source embeddings model that is known to perform well on code embeddings? | 2023-11-19T00:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/17yjvqr/embeddings_model_for_code/ | amang0112358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yjvqr | false | null | t3_17yjvqr | /r/LocalLLaMA/comments/17yjvqr/embeddings_model_for_code/ | false | false | self | 1 | null |
Dataset compiler tools and projects | 1 | Are there any projects or apps that focus on creating and compiling datasets for LLM training? Other than GPT, maybe an offline tool that helps properly format the texts from your data sources for the particular model you are looking to fine-tune or train. Many datasets on HF tend to follow different formatting structures, which kind of makes it hard to know which prompt template and format will be best suited for the model of choice. Does it even matter which one you use for training? | 2023-11-18T23:55:30 | https://www.reddit.com/r/LocalLLaMA/comments/17yjrig/dataset_compiler_tools_and_projects/ | AI_Trenches | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yjrig | false | null | t3_17yjrig | /r/LocalLLaMA/comments/17yjrig/dataset_compiler_tools_and_projects/ | false | false | self | 1 | null |
OpenAI board in discussions with Sam Altman to return as CEO | 21 | 2023-11-18T23:37:31 | https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo | bratao | theverge.com | 1970-01-01T00:00:00 | 0 | {} | 17yje46 | false | null | t3_17yje46 | /r/LocalLLaMA/comments/17yje46/openai_board_in_discussions_with_sam_altman_to/ | false | false | 21 | {'enabled': False, 'images': [{'id': '6E_M2ROZnrq0zK5SpoHwJ2o-RE-ocN9MKEw_MFgezyQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=108&crop=smart&auto=webp&s=fb9200da15e0829f0d1aad46520749d50e3c4719', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=216&crop=smart&auto=webp&s=ad5a32b2e9877992b34d36c2110de27b5a5e959b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=320&crop=smart&auto=webp&s=252fffaf21ee9c41f9024e5017c234a3b95205d5', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=640&crop=smart&auto=webp&s=88f340d48a3c8c63406131cb0858073bb25ccf61', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=960&crop=smart&auto=webp&s=b5b85fd19e335c3cad91ba90bf3dca7d8c0322da', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?width=1080&crop=smart&auto=webp&s=41d4a84edb9fb9dbcccb14779929a94e8ece0de4', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/APfBd51F6lwOQaBwOz26TXDMKvLiQa5SLseDmk-6ibo.jpg?auto=webp&s=279e92180c5e7a89e972cd8c717a6186e3fee0e9', 'width': 1200}, 'variants': {}}]} | ||
Open source alternative to LMStudio | 2 | LMStudio is closed source for some reason (even though they use a lot of open source to build it).
Are there any OSS alternatives to it, or any OSS teams interested in developing one? | 2023-11-18T23:11:08 | https://www.reddit.com/r/LocalLLaMA/comments/17yitfj/open_source_alternative_to_lmstudio/ | PermissionAgitated88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yitfj | false | null | t3_17yitfj | /r/LocalLLaMA/comments/17yitfj/open_source_alternative_to_lmstudio/ | false | false | self | 2 | null |
CodeLlama-34B-Phind-LIMA-PythonTutor - Cheap LIMA getting extra style and 4% in HumanEval | 24 | [https://huggingface.co/KrisPi/CodeLlama-34B-Phind-LIMA-PythonTutor](https://huggingface.co/KrisPi/CodeLlama-34B-Phind-LIMA-PythonTutor)
Expected result:
A new system prompt that will prefer a docstring under each function, use multiple functions even when it doesn't make sense, and comment on every line of the code; it should also greatly reduce explanations before and after the code block. As a result, the model improves readability for junior Python developers and additionally does step-by-step reasoning by default, improving code quality and HumanEval results.

This is a Phind v2 QLoRA fine-tune using my PythonTutor LIMA dataset: [https://huggingface.co/datasets/KrisPi/PythonTutor-LIMA-Finetune](https://huggingface.co/datasets/KrisPi/PythonTutor-LIMA-Finetune)
In the next few weeks I hope to convince our best community to invest more time into LIMA datasets and fine-tuning fine-tunes. Everybody can afford to generate them (less than $20) and everybody can fine-tune them (7 hours in total using 2x 3090 GPUs, ~$3 + $5 on [vast.ai](https://vast.ai)).
There are already production-ready solutions for serving several LoRA adapters. I honestly believe that the route of a reproducible, vast collection of adapters on top of current SOTA models will enable the open-source community to access a GPT-4-level LLM.
My main inspirations for this were the blazing-fast multi-LoRA implementation in the ExLlamaV2 backend, Jon's LMoE and Airoboros dataset, and quite a few good opinions on LIMA fine-tunes I noticed here.
To prove the point, I'm planning to create a few more fine-tunes like this, starting with per-category Airoboros adapters, then adapters for React and for DevOps YAML scripting.
5 epochs, LR=1e-05, batch=2, gradient accumulation 32 (i.e. trying to simulate batch 64), max_len=1024. Rank and alpha both 128, targeting all modules. Trained in bfloat16. Constant schedule, no warm-up. Flash-Attention 2 turned off due to an issue with batching.
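For reference, the stated settings map roughly onto peft/transformers like this (a sketch of the hyperparameters above, not the actual training script; the output dir and the module list for Llama-family models are assumptions):

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Rank and alpha both 128, "targeting all modules" (assumed here to mean
# all attention and MLP projections of a Llama-family model):
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="phind-lima-pythontutor",  # assumed name
    num_train_epochs=5,
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=32,  # simulates an effective batch of 64
    lr_scheduler_type="constant",    # constant schedule, no warm-up
    warmup_steps=0,
    bf16=True,                       # trained in bfloat16
)
```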
Evals: HumanEval score (a 2.4 p.p. improvement over the best Phind v2 score I could get!)
71.95 -> 72.56. PythonTutor prompt: 70.73% -> 76.22%, CoT prompt: 73.78% -> 70.73%.
**Please note I'm testing using instruct-type prompting, not completion; similarly, I stopped measuring perplexity on plain text and use the model-specific format.**
The new prompt:
**{'pass@1': 0.7621951219512195}** **Base + Extra** **{'pass@1': 0.7073170731707317}**
Base prompt (0.51 p.p. improvement):
{'pass@1': 0.725609756097561} Base + Extra {'pass@1': 0.6585365853658537}

Phind v2 with the PythonTutor custom prompt only gets: {'pass@1': 0.7073170731707317} Base + Extra {'pass@1': 0.6463414634146342}

After several HumanEval tests and prompts, the maximum Phind v2 was able to score was 73.78%.

**All evals using Transformers 8-bit.**
In the long term, I'm planning on experimenting with LIMA + DPO fine-tuning, but so far I've noticed that LIMA datasets need to be both general and task-specific. The best result I got was with around 30% of the samples being task-specific.
[**https://huggingface.co/datasets/KrisPi/PythonTutor-Evol-1k-DPO-GPT4\_vs\_35**](https://huggingface.co/datasets/KrisPi/PythonTutor-Evol-1k-DPO-GPT4_vs_35) | 2023-11-18T21:50:34 | https://www.reddit.com/r/LocalLLaMA/comments/17yh1ak/codellama34bphindlimapythontutor_cheap_lima/ | kpodkanowicz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yh1ak | false | null | t3_17yh1ak | /r/LocalLLaMA/comments/17yh1ak/codellama34bphindlimapythontutor_cheap_lima/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'ra5fW-84I0u89BctyiH0arRTTK0PVFEeZVGBXN1yoys', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vsz0Ovb0NedCupwMs6j4xr0XX9dwsD8W81w-YpYmjbI.jpg?width=108&crop=smart&auto=webp&s=85263a7da4d6b6aae2663fdfc1b7bf6f21ad008c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vsz0Ovb0NedCupwMs6j4xr0XX9dwsD8W81w-YpYmjbI.jpg?width=216&crop=smart&auto=webp&s=72bcaa4a994424aff701be20fb558c1cd9f9ffd8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vsz0Ovb0NedCupwMs6j4xr0XX9dwsD8W81w-YpYmjbI.jpg?width=320&crop=smart&auto=webp&s=918a072f1fcdbf33fd7bd647e9da5c49398d6b91', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vsz0Ovb0NedCupwMs6j4xr0XX9dwsD8W81w-YpYmjbI.jpg?width=640&crop=smart&auto=webp&s=bc9f73fda177adeaed19adba3e8fb516caeafb81', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vsz0Ovb0NedCupwMs6j4xr0XX9dwsD8W81w-YpYmjbI.jpg?width=960&crop=smart&auto=webp&s=e8c68105c976887d6f8eea6dce63f4073c618b68', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vsz0Ovb0NedCupwMs6j4xr0XX9dwsD8W81w-YpYmjbI.jpg?width=1080&crop=smart&auto=webp&s=9cdb9b94df262c1ced49c947c36d2a2b684a0046', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vsz0Ovb0NedCupwMs6j4xr0XX9dwsD8W81w-YpYmjbI.jpg?auto=webp&s=eb7030df1d3c45f04ddbffe5fc27c77d57bdd08e', 'width': 1200}, 'variants': {}}]} |
Can anyone help me with specifically the ctransformers library? | 1 | [removed] | 2023-11-18T21:29:46 | https://www.reddit.com/r/LocalLLaMA/comments/17ygkam/can_anyone_help_me_with_specifically_the/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ygkam | false | null | t3_17ygkam | /r/LocalLLaMA/comments/17ygkam/can_anyone_help_me_with_specifically_the/ | false | false | self | 1 | null |
This ain't entirely new, naturally (this was being worked on, IIRC, back in 2020), but y'all might be interested: | 6 | 2023-11-18T21:27:03 | https://arxiv.org/pdf/2309.17224.pdf | Qaziquza1 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17ygi4h | false | null | t3_17ygi4h | /r/LocalLLaMA/comments/17ygi4h/this_aint_entirely_new_naturally_this_was_being/ | false | false | default | 6 | null | |
Anyone hook up Jasper-style speech recognition to ollama's API? | 3 | I've played with [https://github.com/jasperproject](https://github.com/jasperproject) in the past to create a vocal assistant on my Ubuntu 22 box, and have been thinking about building a connector to ollama's REST API... didn't know if anyone had already combined the two. | 2023-11-18T20:33:14 | https://www.reddit.com/r/LocalLLaMA/comments/17yfb39/anyone_hook_up_jasperstyle_speech_recognition_to/ | bigattichouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yfb39 | false | null | t3_17yfb39 | /r/LocalLLaMA/comments/17yfb39/anyone_hook_up_jasperstyle_speech_recognition_to/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Nmira7vlNcPMQdo_j58jQfV6DTdfsFRFoXqeTuzL_ok', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/NfBKWfRWOi__cr_I-8eTNH_-m4ZNLgMNWp7S85o-WBY.jpg?width=108&crop=smart&auto=webp&s=ecc8186e3e7b693c4a483ffa2086d0955de67ac0', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/NfBKWfRWOi__cr_I-8eTNH_-m4ZNLgMNWp7S85o-WBY.jpg?width=216&crop=smart&auto=webp&s=6cee8baefbc10c13282d896d71f372f903bcb722', 'width': 216}], 'source': {'height': 280, 'url': 'https://external-preview.redd.it/NfBKWfRWOi__cr_I-8eTNH_-m4ZNLgMNWp7S85o-WBY.jpg?auto=webp&s=2412a85221ce3fcc7291e7467528e2c04293465d', 'width': 280}, 'variants': {}}]} |
ChatGPT RAG Implementation? | 5 | Hey everyone, with the few-weeks-old announcement that ChatGPT can support a knowledge base from user-uploaded documents, I assume this is implemented with a form of RAG. If so, does anyone have an idea as to how they may have implemented this? I figure if it's good enough for them to launch, it may be worthwhile trying out this method on a local LLM.
Thanks for your time | 2023-11-18T19:32:23 | https://www.reddit.com/r/LocalLLaMA/comments/17ye0kj/chatgpt_rag_implementation/ | Asleep_Parsley_4720 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ye0kj | false | null | t3_17ye0kj | /r/LocalLLaMA/comments/17ye0kj/chatgpt_rag_implementation/ | false | false | self | 5 | null |
Having a hard time getting deepseek coder instruct to work | 12 | I have tried to set up 3 different versions of it: TheBloke's GPTQ/AWQ versions and the original [**deepseek-coder-6.7b-instruct**](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct).
I have tried the 34B as well.
My specs are 64GB RAM, a 3090 Ti, and an i7-12700K.
With AWQ I just get a bugged response ("""""""""""""""") until max tokens.
GPTQ works much better, but all versions seem to add an unnecessary \* at the end of some lines,
and give worse results than the website [(deepseek.com)](https://coder.deepseek.com/). Say I ask for a snake game in pygame: it usually gives a non-working version, and only after 5-6 tries will I get a somewhat working version, and then I'll need to ask for a lot of changes.
Meanwhile, on the official website I get the code working on the first try, without any problems.
I am using the Alpaca template, adjusted to match the DeepSeek version (oobabooga webui).

What could cause this? Is the website version different from the Hugging Face model?
​ | 2023-11-18T18:57:51 | https://www.reddit.com/r/LocalLLaMA/comments/17yda6k/having_a_hard_time_setting_deepseek_coder/ | iChrist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17yda6k | false | null | t3_17yda6k | /r/LocalLLaMA/comments/17yda6k/having_a_hard_time_setting_deepseek_coder/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'ecnNTqnjGvReykM0oVotIMpWr7UI5ulP84KRAmtYhp8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iAx4QZs-HFu-4-MwYqey1mRJHQiUQmieJiOyO6M2nq4.jpg?width=108&crop=smart&auto=webp&s=f2e81ca8e893a6428f1fef83ecdbad9d474e9c96', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iAx4QZs-HFu-4-MwYqey1mRJHQiUQmieJiOyO6M2nq4.jpg?width=216&crop=smart&auto=webp&s=baafed6180c8284594486d21757800a3d1a6b346', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iAx4QZs-HFu-4-MwYqey1mRJHQiUQmieJiOyO6M2nq4.jpg?width=320&crop=smart&auto=webp&s=c7838fbaf60aba567fa395218a54fcfd55b5253a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iAx4QZs-HFu-4-MwYqey1mRJHQiUQmieJiOyO6M2nq4.jpg?width=640&crop=smart&auto=webp&s=60cf05a22d6d28bbc9db3da0daa23cc54a2fa8c5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iAx4QZs-HFu-4-MwYqey1mRJHQiUQmieJiOyO6M2nq4.jpg?width=960&crop=smart&auto=webp&s=41a89655d2c093a0213a6aea8c32b433171eb8a3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iAx4QZs-HFu-4-MwYqey1mRJHQiUQmieJiOyO6M2nq4.jpg?width=1080&crop=smart&auto=webp&s=81316552549964cbec7c73b9f03f7e5fcaafb9a1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iAx4QZs-HFu-4-MwYqey1mRJHQiUQmieJiOyO6M2nq4.jpg?auto=webp&s=9f5bc960c3dd6d6ba90bcf8cf23c8f94ad52ede6', 'width': 1200}, 'variants': {}}]} |
Currently working on a ChatbotAPI for building custom chatbots running on local LLM models. When released it will ship with a frontend chat interface meant be a tool anyone can use when programming, working or just chatting with local models. Design is inspired by the terminal look / mIRC chat days | 64 | 2023-11-18T18:37:38 | Severin_Suveren | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17ycusb | false | null | t3_17ycusb | /r/LocalLLaMA/comments/17ycusb/currently_working_on_a_chatbotapi_for_building/ | false | false | 64 | {'enabled': True, 'images': [{'id': 'NMi3EzF5VzXsRSsXtSTrbQU_m28RFegLVTnml9xaoSU', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/320wtbcxh51c1.png?width=108&crop=smart&auto=webp&s=823c0da58962dad818da1dbb69086df1bf56c379', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/320wtbcxh51c1.png?width=216&crop=smart&auto=webp&s=f47fbe5dea6a9c8b18b5cbf874da62387c994193', 'width': 216}, {'height': 151, 'url': 'https://preview.redd.it/320wtbcxh51c1.png?width=320&crop=smart&auto=webp&s=180973c2dc5a9b3a7100b1f05a263ccde76a27a3', 'width': 320}, {'height': 303, 'url': 'https://preview.redd.it/320wtbcxh51c1.png?width=640&crop=smart&auto=webp&s=543f3625084e4f7e21e3e570f8dacaffcf3c3e23', 'width': 640}, {'height': 454, 'url': 'https://preview.redd.it/320wtbcxh51c1.png?width=960&crop=smart&auto=webp&s=6430ebd71d41af204b7767b5269fe5ab8ef76abe', 'width': 960}, {'height': 511, 'url': 'https://preview.redd.it/320wtbcxh51c1.png?width=1080&crop=smart&auto=webp&s=f2fcb8f6068fd5f1d48250a86ea2640652a42e75', 'width': 1080}], 'source': {'height': 1819, 'url': 'https://preview.redd.it/320wtbcxh51c1.png?auto=webp&s=07a54016b31b9fabf6e61ccdcef4ae8253eb9513', 'width': 3838}, 'variants': {}}]} | |||
Currently working on a ChatbotAPI for building custom chatbots running on local LLM models. When released it will ship with a frontend chat interface meant be a tool anyone can use when programming, working or just chatting with local models. Design is inspired by the terminal look / mIRC chat days | 1 | 2023-11-18T18:21:17 | Severin_Suveren | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17yci5d | false | null | t3_17yci5d | /r/LocalLLaMA/comments/17yci5d/currently_working_on_a_chatbotapi_for_building/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Hx1fgpSKp4Lek6JIjKQuIPw3XHq4tZOeToPFwy0xRsk', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/uiguo003c51c1.png?width=108&crop=smart&auto=webp&s=1ed086ac4613d4d10b584b27f06a791a9ddff19e', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/uiguo003c51c1.png?width=216&crop=smart&auto=webp&s=f945829dea88919377d079f42734d0e370610f12', 'width': 216}, {'height': 152, 'url': 'https://preview.redd.it/uiguo003c51c1.png?width=320&crop=smart&auto=webp&s=be48f5efdb00483f8e508fe624722019369e8a64', 'width': 320}, {'height': 304, 'url': 'https://preview.redd.it/uiguo003c51c1.png?width=640&crop=smart&auto=webp&s=9d81f205c225c4a627782051c57e6715cc1669bf', 'width': 640}, {'height': 456, 'url': 'https://preview.redd.it/uiguo003c51c1.png?width=960&crop=smart&auto=webp&s=58c4be280773f966fc363fe7238f522b984aef16', 'width': 960}, {'height': 513, 'url': 'https://preview.redd.it/uiguo003c51c1.png?width=1080&crop=smart&auto=webp&s=80c1f64240fc4fde5516ad76dd82ce33104531e5', 'width': 1080}], 'source': {'height': 1825, 'url': 'https://preview.redd.it/uiguo003c51c1.png?auto=webp&s=7ee1ef76819a59c781f8162f24d222248637df70', 'width': 3838}, 'variants': {}}]} | |||
Neural Narratives: AI/ML Chronicles of the Week (11/17) | 1 | [removed] | 2023-11-18T16:15:45 | https://medium.com/@webtek.ai/neural-narratives-ai-ml-chronicles-of-the-week-11-17-d96b21d3cb88 | pinnapple-crush | medium.com | 1970-01-01T00:00:00 | 0 | {} | 17y9u3n | false | null | t3_17y9u3n | /r/LocalLLaMA/comments/17y9u3n/neural_narratives_aiml_chronicles_of_the_week_1117/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ouDa09YNOeHFbjH1WoVQyJl6aF7CNsO_FxzBOKbYe_Y', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/iW23QHe6OZrK1j1tUioP2MfHGG84iXBzdJAX_UazJ2s.jpg?width=108&crop=smart&auto=webp&s=f7ac769144bc6f3ea62e1f9542a15e5225ecf67f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/iW23QHe6OZrK1j1tUioP2MfHGG84iXBzdJAX_UazJ2s.jpg?width=216&crop=smart&auto=webp&s=2d3ae5b1493b7316a5e04298c654756406004bfb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/iW23QHe6OZrK1j1tUioP2MfHGG84iXBzdJAX_UazJ2s.jpg?width=320&crop=smart&auto=webp&s=84eb3445c36ecb352ed5cac08c255d64f1a39e00', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/iW23QHe6OZrK1j1tUioP2MfHGG84iXBzdJAX_UazJ2s.jpg?width=640&crop=smart&auto=webp&s=c5cf3f7ac5a609cfea79f41c2e9aefc6f9c2d7b8', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/iW23QHe6OZrK1j1tUioP2MfHGG84iXBzdJAX_UazJ2s.jpg?width=960&crop=smart&auto=webp&s=547d75f2a89070bc8baceabbd683c4b5108c484c', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/iW23QHe6OZrK1j1tUioP2MfHGG84iXBzdJAX_UazJ2s.jpg?auto=webp&s=d49d614684380c68835c55bf4a12cf7e9c3fa538', 'width': 1024}, 'variants': {}}]} | |
A new LLM riddle | 1 | Jimmy loves dogs but hates cats. He hates cats so much that if he ever has to say a word with "cat" in it, he substitutes "dog" instead. For example, when reading an article aloud about catenary arches, he calls them "dogenary" arches.
How will Jimmy read the following sentence: "The Roman Catholic Church strongly condemns the cattle herding conditions recently revealed from Catalonia"?
ChatGPT 3.5 fails this test, saying:
"The Roman Dogholic Church strongly condemns the dogtle herding conditions recently revealed from Dogalonia."
... But I bet you wouldn't have said "dogtle", especially if you're a native speaker.
Is there any open LLM that passes this? | 2023-11-18T15:51:10 | https://www.reddit.com/r/LocalLLaMA/comments/17y9blx/a_new_llm_riddle/ | Googulator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y9blx | false | null | t3_17y9blx | /r/LocalLLaMA/comments/17y9blx/a_new_llm_riddle/ | false | false | self | 1 | null |
Mini PC build for 7B/13B at 5-10 tokens/s | 1 | [removed] | 2023-11-18T15:42:46 | https://www.reddit.com/r/LocalLLaMA/comments/17y9582/mini_pc_build_for_713_b_510_tokens/ | No_Meaning_9730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y9582 | false | null | t3_17y9582 | /r/LocalLLaMA/comments/17y9582/mini_pc_build_for_713_b_510_tokens/ | false | false | self | 1 | null |
Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI (Microsoft CEO Nadella "furious"; OpenAI President and three senior researchers resign) | 265 | 2023-11-18T15:42:01 | https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/ | GoldenMonkeyPox | arstechnica.com | 1970-01-01T00:00:00 | 0 | {} | 17y94nu | false | null | t3_17y94nu | /r/LocalLLaMA/comments/17y94nu/details_emerge_of_surprise_board_coup_that_ousted/ | false | false | 265 | {'enabled': False, 'images': [{'id': 'oTQdrGWuUQRV_Qf8TKOAj-dg09Fzwfn0P3qmoU-j31U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Nqre9p6GSGKxtKB4JTnUQ2zgjflL8AbqkIHMBn4-oY0.jpg?width=108&crop=smart&auto=webp&s=78d175368aa3c75b5905f19a356b1c0768af742c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Nqre9p6GSGKxtKB4JTnUQ2zgjflL8AbqkIHMBn4-oY0.jpg?width=216&crop=smart&auto=webp&s=7ba29c4e671db313a700086237ec110c149ec52b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Nqre9p6GSGKxtKB4JTnUQ2zgjflL8AbqkIHMBn4-oY0.jpg?width=320&crop=smart&auto=webp&s=ddc8a075e30ab4a76974f7d15df806a2e85e56e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Nqre9p6GSGKxtKB4JTnUQ2zgjflL8AbqkIHMBn4-oY0.jpg?width=640&crop=smart&auto=webp&s=ead808406eccf0701721d4d366aa2cb1abfa8a2c', 'width': 640}], 'source': {'height': 380, 'url': 'https://external-preview.redd.it/Nqre9p6GSGKxtKB4JTnUQ2zgjflL8AbqkIHMBn4-oY0.jpg?auto=webp&s=b34e0f8bbfef884ff3f5365c59610567cb872c3e', 'width': 760}, 'variants': {}}]} | ||
Alfred-40B-1023, new open-source model from France | 52 | 2023-11-18T15:02:41 | https://x.com/LightOnIO/status/1725529226202484747?s=20 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 17y8bcj | false | null | t3_17y8bcj | /r/LocalLLaMA/comments/17y8bcj/alfred40b1023_new_opensource_model_from_france/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'qN02fVBFcsnktR6c3eMqJ1hwQ01VkKlShhmlXkFQ7M8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/YVeG0zeObM94ZXIQ2eFcsJ_iIVxKS4YRJj15rNQ1ipE.jpg?width=108&crop=smart&auto=webp&s=4e4c2c824224c273ba8b2273733cd88f30db7e75', 'width': 108}], 'source': {'height': 182, 'url': 'https://external-preview.redd.it/YVeG0zeObM94ZXIQ2eFcsJ_iIVxKS4YRJj15rNQ1ipE.jpg?auto=webp&s=cbb831545905d852eeb3700c31ba7b58637d38be', 'width': 182}, 'variants': {}}]} | ||
LLM for organizing long transcriptions into outlines? Latest and greatest? | 10 | I found a post from several months ago asking about this, and this was recommended: [lmsys/longchat-13b-16k · Hugging Face](https://huggingface.co/lmsys/longchat-13b-16k)
but I wanted to check and see if there are any newer recommendations. I want an LLM I can run locally that can search long transcriptions of interviews, brainstorming sessions, etc. and organize them into outlines without leaving out important info (a sketch of the usual chunk-and-merge workaround follows this post). | 2023-11-18T14:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/17y7rvg/llm_for_organizing_long_transcriptions_into/ | Brad12d3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y7rvg | false | null | t3_17y7rvg | /r/LocalLLaMA/comments/17y7rvg/llm_for_organizing_long_transcriptions_into/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'FfwsPFjt59reD83q6p3q2J38V8-4NGYFncQvJlIiMds', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WvhjnOuj5s_SsCPz_1jD8iWka5yyH_J22pVtwDr-zlA.jpg?width=108&crop=smart&auto=webp&s=1ff9cb20a115dd91fa4a137405cc9d3594b4306a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WvhjnOuj5s_SsCPz_1jD8iWka5yyH_J22pVtwDr-zlA.jpg?width=216&crop=smart&auto=webp&s=f4d6eab824411f95a514815f1e12a5f46c9d1e09', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WvhjnOuj5s_SsCPz_1jD8iWka5yyH_J22pVtwDr-zlA.jpg?width=320&crop=smart&auto=webp&s=44f748aaacc9c104f2c2332c9001e582e69bd77d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WvhjnOuj5s_SsCPz_1jD8iWka5yyH_J22pVtwDr-zlA.jpg?width=640&crop=smart&auto=webp&s=caff0931281fd68d10cc2d7707f23349f16f468a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WvhjnOuj5s_SsCPz_1jD8iWka5yyH_J22pVtwDr-zlA.jpg?width=960&crop=smart&auto=webp&s=9c89942b45b84a1b92407bf559b697b6c920e0a8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WvhjnOuj5s_SsCPz_1jD8iWka5yyH_J22pVtwDr-zlA.jpg?width=1080&crop=smart&auto=webp&s=26d87fb125e390ed9fc8f3a3481a9fbd762c0360', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WvhjnOuj5s_SsCPz_1jD8iWka5yyH_J22pVtwDr-zlA.jpg?auto=webp&s=ac553e22127a798323cbb0fb6b6938b5ad050cbd', 'width': 1200}, 'variants': {}}]}
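The post above is really asking how to outline transcripts longer than a model's context window. The standard workaround is map-reduce summarization: outline each chunk, then outline the combined outlines. Below is a minimal sketch, assuming llama-cpp-python with a placeholder GGUF path; the chunk size and prompts are illustrative, not a model recommendation.

```python
# Minimal map-reduce outlining sketch for transcripts longer than the context
# window. Assumes llama-cpp-python; the model path is a placeholder and the
# chunk size is illustrative.
from llama_cpp import Llama

llm = Llama(model_path="/models/some-16k-model.gguf", n_ctx=16384)  # hypothetical path

def outline(text: str) -> str:
    prompt = ("Turn the following transcript into a concise outline. "
              "Do not leave out important points.\n\n" + text + "\n\nOutline:")
    return llm(prompt, max_tokens=512, temperature=0.2)["choices"][0]["text"]

def outline_long(transcript: str, chunk_chars: int = 12000) -> str:
    # Map: outline each chunk independently; Reduce: outline the outlines.
    chunks = [transcript[i:i + chunk_chars]
              for i in range(0, len(transcript), chunk_chars)]
    partials = [outline(c) for c in chunks]
    return outline("\n\n".join(partials))
```

Long-context models like the longchat one linked above reduce how much chunking is needed, but the merge step is still what protects against dropped details.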
What is the best LLM for writing nonfiction? | 1 | [removed] | 2023-11-18T14:32:12 | https://www.reddit.com/r/LocalLLaMA/comments/17y7os2/what_is_the_best_llm_for_wrting_nonfiction/ | tidalwave57 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y7os2 | false | null | t3_17y7os2 | /r/LocalLLaMA/comments/17y7os2/what_is_the_best_llm_for_wrting_nonfiction/ | false | false | self | 1 | null |
Motherboard help | 1 | Hi guys,
I'm building a PC for running local LLMs and decided on a 4060 Ti as the GPU.
Now I'm looking for a motherboard, and I'm wondering whether there's anything I need to be aware of specifically for LLMs.
Thanks for the help, and sorry if it's not specific enough ^^ | 2023-11-18T13:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/17y680z/motherboard_help/ | thefunnyape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y680z | false | null | t3_17y680z | /r/LocalLLaMA/comments/17y680z/motherboard_help/ | false | false | self | 1 | null |
Ollama + LiteLLM - The perfect combination for local development? | 3 | Hey,
This post is not entirely about LLaMA, but I believe it can be relevant for many people here as well. AI has been going crazy lately and there are now various "AI vendors". LiteLLM is a relatively new framework that attempts to mitigate the pain of migrating between different AI APIs. The idea is simple: rather than working directly with the API provided by OpenAI, you work with the Python framework provided by LiteLLM, and when you want to switch from OpenAI's ChatGPT to a HuggingFace model, or even use Ollama, you can do so by changing about three lines of configuration rather than refactoring your whole application. A more detailed explanation, including a tutorial on how to set up a local UI with LiteLLM + a HuggingFace model:
[https://www.youtube.com/watch?v=UDEX1qprOWY](https://www.youtube.com/watch?v=UDEX1qprOWY)
Check out this video to learn how to install and configure Ollama locally, where you can also install llama and run it within just a few seconds:
[https://www.youtube.com/watch?v=bjkU0-xek6A](https://www.youtube.com/watch?v=bjkU0-xek6A)
In practice, by following these two videos you can easily install a llama model locally and interact with it through LiteLLM; once everything is ready to go, you can either swap in a more stable API endpoint for llama or switch to a different variant of the model (or GPT). A minimal sketch of what that swap looks like in code follows right after this post.
Let me know if you run into trouble, have any questions, or have requests for other videos as well,
cheers. | 2023-11-18T12:24:32 | https://www.reddit.com/r/LocalLLaMA/comments/17y5e4u/ollama_litellm_the_perfect_combination_for_local/ | dev-spot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y5e4u | false | null | t3_17y5e4u | /r/LocalLLaMA/comments/17y5e4u/ollama_litellm_the_perfect_combination_for_local/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'dKBUr_sM3JZ_fGTNH5pKes8AVfzg_sJhzSHQRRlGNt8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jyjYQDhIQ2jI4WIb0Meb9YVNk7IzvWH-lTdLeN6uiMI.jpg?width=108&crop=smart&auto=webp&s=9bb0ddbbfa649d612046cef0276b7566f43b7ce9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/jyjYQDhIQ2jI4WIb0Meb9YVNk7IzvWH-lTdLeN6uiMI.jpg?width=216&crop=smart&auto=webp&s=9aef3c7c574eb7cfa925b899f1f3de7ca853b306', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/jyjYQDhIQ2jI4WIb0Meb9YVNk7IzvWH-lTdLeN6uiMI.jpg?width=320&crop=smart&auto=webp&s=c4eb0b97d78c996ba7a24351fecac9f9953a5bc8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/jyjYQDhIQ2jI4WIb0Meb9YVNk7IzvWH-lTdLeN6uiMI.jpg?auto=webp&s=2316f4cc43dc64973d683c1dc23c809cdce2aff5', 'width': 480}, 'variants': {}}]} |
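To make the "few lines of configuration" claim above concrete, here is a minimal sketch of the provider swap, assuming `pip install litellm` and an Ollama server already running on its default port; the model names are illustrative, not prescriptive.

```python
# Sketch of LiteLLM's provider swap: the call shape stays the same whether the
# backend is a local Ollama model or a hosted API. Assumes an Ollama server on
# the default port (11434); model names are illustrative.
from litellm import completion

messages = [{"role": "user", "content": "Explain what LiteLLM does in one sentence."}]

# Local llama model served by Ollama:
reply = completion(
    model="ollama/llama2",
    messages=messages,
    api_base="http://localhost:11434",
)

# Hosted alternative: only the model string changes (plus an OPENAI_API_KEY
# in the environment):
# reply = completion(model="gpt-3.5-turbo", messages=messages)

print(reply.choices[0].message.content)
```

The design point is that the OpenAI-style `messages` list and response shape stay fixed, so swapping vendors does not ripple through the rest of the application.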
Seeking guidance | 1 | [removed] | 2023-11-18T11:32:10 | https://www.reddit.com/r/LocalLLaMA/comments/17y4lj9/seeking_guidance/ | Own_Marketing2404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y4lj9 | false | null | t3_17y4lj9 | /r/LocalLLaMA/comments/17y4lj9/seeking_guidance/ | false | false | self | 1 | null |
Looking for an LLM for organizing notes and transcribed interviews. I have 128GB RAM and 24GB of VRAM | 2 | Just getting into LLMs and trying to wrap my mind around all the models out there. What I really need right now is something that can accurately summarize and outline pages of notes or video interview transcripts.
Not sure if it's better, but I had trouble with ChatGPT 4 summarizing several pages of notes before, since it left out lots of critical info no matter how I prompted it. Is there a local LLM that can do it better, or at least as well? | 2023-11-18T10:31:18 | https://www.reddit.com/r/LocalLLaMA/comments/17y3q6e/looking_for_llm_for_organizing_notes_and/ | Brad12d3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y3q6e | false | null | t3_17y3q6e | /r/LocalLLaMA/comments/17y3q6e/looking_for_llm_for_organizing_notes_and/ | false | false | self | 2 | null |
Best story-writing LLM for 24GB VRAM | 6 | Hi everybody, this is gonna be my first post ever, I think :)
I'm constantly looking for local models that can answer prompts like
"Write a story about ABC. The story should include xyz."
I currently use MxLewd-L2-Q5-GGUF as one that basically follows my prompt, gives long replies (I usually set the new-token generation count to 1024), and has no issues when the main character is something other than usual (a cow, for example).
Yesterday I tested 70Bs like Twix, Dawn, and lvlz (exl2 2.x quantization allows me to load them into VRAM), and only Opus eventually reached a similar level of creativity and prompt-following as MxLewd, but there were some flaws (it gave up when it had to write about a cow :D, so I expect it's limited to human-like scenarios only).
My system is very decent, so I'm open to many options,
but I prefer to load stuff into VRAM (24GB VRAM, 128GB RAM).
For now, I've given up on GPTQ and AWQ, as those lose to Q5_K_M GGUF, and I've decided to explore the wonders of exl2, as it works really nicely.
Thanks, everyone | 2023-11-18T10:06:53 | https://www.reddit.com/r/LocalLLaMA/comments/17y3djm/best_story_writing_llm_for_24vram/ | roamflex3578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y3djm | false | null | t3_17y3djm | /r/LocalLLaMA/comments/17y3djm/best_story_writing_llm_for_24vram/ | false | false | self | 6 | null |
Passing AI Detectors | 1 | Just wondering, will local llama AI models pass common AI detectors like Originality or GPTZero? | 2023-11-18T09:41:43 | https://www.reddit.com/r/LocalLLaMA/comments/17y30na/passing_ai_detectors/ | Afraid_Air_2332 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y30na | false | null | t3_17y30na | /r/LocalLLaMA/comments/17y30na/passing_ai_detectors/ | false | false | self | 1 | null |
Is there a "parallel" LLM to simulate many scenarios at once? | 5 | So let's say we ask an LLM to predict what would happen if we put a pen on the table, and it simulates a thousand possibilities. Is there an LLM that would run perpendicular to these outputs as a sort of summarizer/filter? Is there a project working on anything like this? (A rough sketch of this fan-out-then-summarize pattern follows this post.)
Been looking, not finding. Thanks! | 2023-11-18T09:07:09 | https://www.reddit.com/r/LocalLLaMA/comments/17y2jer/is_there_a_parallel_llm_to_simulate_many/ | Away-Bird-6339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y2jer | false | null | t3_17y2jer | /r/LocalLLaMA/comments/17y2jer/is_there_a_parallel_llm_to_simulate_many/ | false | false | self | 5 | null |
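One way to get the effect the post above describes without a dedicated framework is to build the two passes by hand: fan out many high-temperature samples of the same prompt, then run a single low-temperature summarization pass over them. A rough sketch, assuming llama-cpp-python and a placeholder model path; the sampling here is sequential for simplicity, whereas a batched server (e.g. vLLM) would do the fan-out truly in parallel.

```python
# Fan-out-then-summarize sketch: sample several continuations of one scenario
# at high temperature, then ask the model to summarize them in a second pass.
# Assumes llama-cpp-python; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="/models/some-model.gguf", n_ctx=4096)  # hypothetical path

scenario = "Predict what happens if we put a pen on the table:"
samples = [
    llm(scenario, max_tokens=64, temperature=1.2)["choices"][0]["text"].strip()
    for _ in range(8)  # 8 rather than 1000 to keep the sketch cheap
]

summary_prompt = (
    "Here are several independent predictions of the same scenario:\n"
    + "\n".join("- " + s for s in samples)
    + "\nSummarize the most common and most interesting outcomes in two sentences:"
)
print(llm(summary_prompt, max_tokens=128, temperature=0.2)["choices"][0]["text"])
```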
Is it worth using a bunch of old GTX 10 series cards (like the 1060, 1070, 1080) for running a local LLM? | 6 | Was wondering if there's any way to use a bunch of old equipment like this to build an at-home crunch center for running your own LLM, and whether it would be worth it. | 2023-11-18T09:04:52 | https://www.reddit.com/r/LocalLLaMA/comments/17y2i9y/is_it_worth_using_a_bunch_of_old_gtx_10_series/ | dpak90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y2i9y | false | null | t3_17y2i9y | /r/LocalLLaMA/comments/17y2i9y/is_it_worth_using_a_bunch_of_old_gtx_10_series/ | false | false | self | 6 | null |
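On the multi-card question above: llama.cpp-style backends can spread a model's layers across several GPUs, including a mix of older cards, though Pascal's lack of fast low-precision paths keeps speeds modest. A hedged sketch via llama-cpp-python, assuming a CUDA-enabled build; the path and split ratios are placeholders.

```python
# Sketch of splitting one model across several old cards with llama-cpp-python
# (requires a CUDA-enabled build). The model path is a placeholder; the split
# is usually set roughly proportional to each card's VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="/models/some-13b.gguf",  # hypothetical path
    n_gpu_layers=-1,                     # offload all layers to the GPUs
    tensor_split=[0.4, 0.3, 0.3],        # e.g. a 1080 / 1070 / 1060 mix
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```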
Lenovo ThinkPad T14 Gen 1 with 4750U (RX Vega 7): is this laptop good enough to run some LLMs? | 1 | As the title suggests, the GPU is the integrated Vega 7, though the laptop has 48GB of RAM. | 2023-11-18T08:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/17y2aif/lenovo_thinkpad_t14_gen_1_with_4750u_rx_vega_7_is/ | AMGraduate564 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y2aif | false | null | t3_17y2aif | /r/LocalLLaMA/comments/17y2aif/lenovo_thinkpad_t14_gen_1_with_4750u_rx_vega_7_is/ | false | false | self | 1 | null |
Is it possible to fine-tune a 33B model with 48GB VRAM? | 12 | Currently I have 12+24GB VRAM and I get Out Of Memory errors all the time when I try to fine-tune 33B models. 13B is fine, but the outcome is not very good, so I would like to try 33B. I wonder if it's worth it to replace my 12GB GPU with a 24GB one. Thanks! | 2023-11-18T08:27:10 | https://www.reddit.com/r/LocalLLaMA/comments/17y1zy5/is_it_possible_to_fine_tune_a_33b_model_with_48gb/ | tgredditfc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y1zy5 | false | null | t3_17y1zy5 | /r/LocalLLaMA/comments/17y1zy5/is_it_possible_to_fine_tune_a_33b_model_with_48gb/ | false | false | self | 12 | null |
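The usual answer to the post above is QLoRA: keep the 33B base weights frozen in 4-bit and train only small LoRA adapters, which pulls the footprint toward a 48GB budget spread across both cards. A minimal setup sketch, assuming the Hugging Face transformers/peft/bitsandbytes stack; the model id is a placeholder, not a recommendation.

```python
# Hedged QLoRA setup sketch: 4-bit NF4 base weights plus trainable LoRA
# adapters. Assumes transformers, peft, and bitsandbytes are installed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                 # base weights stored in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "some-org/llama-33b",              # hypothetical model id
    quantization_config=bnb_config,
    device_map="auto",                 # shard across both GPUs (12GB + 24GB)
)

model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()     # only the LoRA adapters train
```

Whether this actually fits also depends on sequence length and batch size, so treat the sketch as a starting point rather than a guarantee.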
Best choice to continue AI coding without ChatGPT Plus? | 1 | Well, the code quality has gotten pretty bad, so I think it's time to cancel my subscription to ChatGPT Plus. I have an RX 6600 and a GTX 1650 Super, so I don't think local models are a possible choice (at least for the same style of coding that is done with GPT-4). But I decided to post here anyway since you guys are very knowledgeable. I was looking at cursor.ai and it seemed pretty good. I don't know how good it is now, since OpenAI has decreased the performance of GPT-4, but I have heard that the API is still OK. Also there is refract, which could be a similar choice too. What do you recommend? I code mostly in Python and sometimes C++. | 2023-11-18T07:56:57 | https://www.reddit.com/r/LocalLLaMA/comments/17y1kze/best_choise_to_continue_ai_coding_without_chatgpt/ | AapoL092 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17y1kze | false | null | t3_17y1kze | /r/LocalLLaMA/comments/17y1kze/best_choise_to_continue_ai_coding_without_chatgpt/ | false | false | self | 1 | null |
Intel's Neural-chat-7b-v3-1 has taken the number 1 spot on Ayumi's LLM Benchmark | 6 | 2023-11-18T07:12:32 | http://ayumi.m8geil.de/ayumi_bench_v3_results.html | CardAnarchist | ayumi.m8geil.de | 1970-01-01T00:00:00 | 0 | {} | 17y0ytp | false | null | t3_17y0ytp | /r/LocalLLaMA/comments/17y0ytp/intels_neuralchat7bv31_has_taken_the_number_1/ | false | false | default | 6 | null | |
New Erotica Datasets and Models | 103 | I've made a few posts about this before and I'm finally starting to make some progress.
I've already released basilisk-7b-v0.2, a Mistral fine-tune trained on proxy logs and a subset of orca best. The data ratio is roughly 90% orca and 10% erotica. The goal was to create a model that is still roughly on par with typical orca-style models while gaining a significant boost to multi-turn chat and adult content generation. I've quantized the model in GPTQ, EXL2, AWQ, and GGML.
Here is the full basilisk model; search my repo for whichever quant you want.
[https://huggingface.co/openerotica/basilisk-7b-v0.2](https://huggingface.co/openerotica/basilisk-7b-v0.2)
I'm currently quantizing cockatrice-7b-v0.1, another Mistral model trained purely on proxy logs and NSFW content, with no orca data used at all. Most everything should be done uploading later tonight.
[https://huggingface.co/openerotica/cockatrice-7b-v0.1](https://huggingface.co/openerotica/cockatrice-7b-v0.1)
In addition to releasing these models, I've also decided to release the datasets.
Basilisk dataset: [https://huggingface.co/datasets/openerotica/basilisk-v0.2](https://huggingface.co/datasets/openerotica/basilisk-v0.2)
Cockatrice dataset: [https://huggingface.co/datasets/openerotica/freedom-rp](https://huggingface.co/datasets/openerotica/freedom-rp)
I am also releasing the dataset I use to quantize the models. It's pretty similar to the cockatrice dataset, but modified to be a drop-in replacement for wikitext (a quick loading sketch follows after this post).
Quantization datasets:
[https://huggingface.co/datasets/openerotica/erotiquant](https://huggingface.co/datasets/openerotica/erotiquant)
[https://huggingface.co/datasets/openerotica/erotiquant-p](https://huggingface.co/datasets/openerotica/erotiquant-p)
I'm still working on evaluating the models and how much of a difference the data makes. Hopefully at least some of this can be helpful to someone. I'm still pretty new to this so don't feel the urge to run out and test this model, I promise it's not as good as GPT-4. | 2023-11-18T06:02:50 | https://www.reddit.com/r/LocalLLaMA/comments/17xzxtf/new_erotica_datasets_and_models/ | Meta-CheshireAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xzxtf | false | null | t3_17xzxtf | /r/LocalLLaMA/comments/17xzxtf/new_erotica_datasets_and_models/ | false | false | self | 103 | {'enabled': False, 'images': [{'id': 'i8NAMGXcSg3vrwYqkli8iQhPHGNF20qiyd67R8YHTMU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/azXsoeqONW0X5c5YxJ3_c8ItLRPuYC-u4YDFM-CYZ18.jpg?width=108&crop=smart&auto=webp&s=68149c58f7124351fc0a22994cecc2e1ca8ece2b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/azXsoeqONW0X5c5YxJ3_c8ItLRPuYC-u4YDFM-CYZ18.jpg?width=216&crop=smart&auto=webp&s=4f3ffdf984bf9399811543acec5ccbf48a02ad45', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/azXsoeqONW0X5c5YxJ3_c8ItLRPuYC-u4YDFM-CYZ18.jpg?width=320&crop=smart&auto=webp&s=1459f21dbec070e92840cf23fd9eb693f2fa541e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/azXsoeqONW0X5c5YxJ3_c8ItLRPuYC-u4YDFM-CYZ18.jpg?width=640&crop=smart&auto=webp&s=217753498574269d1744b054a038ba36b185adb3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/azXsoeqONW0X5c5YxJ3_c8ItLRPuYC-u4YDFM-CYZ18.jpg?width=960&crop=smart&auto=webp&s=b2fa38aeb72162f1da5a0990d800d02703a083e6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/azXsoeqONW0X5c5YxJ3_c8ItLRPuYC-u4YDFM-CYZ18.jpg?width=1080&crop=smart&auto=webp&s=82bcd2a72dc7ed8a76919af459ca5cc947f3be51', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/azXsoeqONW0X5c5YxJ3_c8ItLRPuYC-u4YDFM-CYZ18.jpg?auto=webp&s=b762e5aac5d0ab65aa893158f4b4bc6a2bf39b8f', 'width': 1200}, 'variants': {}}]} |
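For anyone who wants to poke at the datasets listed above, here is a minimal loading sketch, assuming the standard Hugging Face `datasets` API; the split and column names are guesses until you inspect the repos, so treat them as assumptions.

```python
# Minimal sketch of pulling one of the released datasets. Assumes
# `pip install datasets`; the "train" split is an assumption -- inspect
# the repo before relying on split or column names.
from datasets import load_dataset

ds = load_dataset("openerotica/basilisk-v0.2", split="train")
print(ds)      # check columns and row count first
print(ds[0])   # first record

# The erotiquant sets are described above as drop-in wikitext replacements,
# so a quantization script that calls load_dataset("wikitext", ...) for its
# calibration samples could instead point at "openerotica/erotiquant".
```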