| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Viability of fine tuning for domain knowledge? | 1 | [removed] | 2023-07-27T22:07:26 | https://www.reddit.com/r/LocalLLaMA/comments/15bf2a9/viability_of_fine_tuning_for_domain_knowledge/ | keisukegoda3804 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bf2a9 | false | null | t3_15bf2a9 | /r/LocalLLaMA/comments/15bf2a9/viability_of_fine_tuning_for_domain_knowledge/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BxNvUeMFd6obe78ihkAJrzGJFpOkQpmEJ7BbVi_larY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=108&crop=smart&auto=webp&s=93162f369263d5c2a9e1768260a77dbe05103f3e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=216&crop=smart&auto=webp&s=dbc4176da2b6e2f8cc29002232fb799303e8c4d7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=320&crop=smart&auto=webp&s=0e5a8489fbfa03e8d18ba305ccd4945626d306f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=640&crop=smart&auto=webp&s=34d3e6a5c65886be595f06be7e712b5946391d72', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=960&crop=smart&auto=webp&s=073702ab34f5d22d066990279fcb8219ec0c160c', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?auto=webp&s=8a98b7aa0d64757a3ef9f7a2c7bf4c8f10b02230', 'width': 1024}, 'variants': {}}]} |
I released a new model RedPajama-INCITE-Chat-Instruct-3B-V1. | 18 | This is an experimental merge of models RedPajama-INCITE-Chat-3B-V1 and RedPajama-INCITE-Instruct-3B-V1. The prompt template can be almost anything, but this template is recommended:
HUMAN: <prompt>
ASSISTANT:
Feel free to change HUMAN and ASSISTANT to anything. It will probably not change much. | 2023-07-27T21:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/15besx4/i_released_a_new_model/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15besx4 | false | null | t3_15besx4 | /r/LocalLLaMA/comments/15besx4/i_released_a_new_model/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'gSSlR7Xpf2go6vlcxedbARooYjRJQpio2XhRevp6kjg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=108&crop=smart&auto=webp&s=694035262e86f8a5f9fab3df9b84aff97efb92b4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=216&crop=smart&auto=webp&s=966a670022a344c056aa2db0d02d8157ec88aea1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=320&crop=smart&auto=webp&s=894a1a7dfab2317918daf310fd3e93ec312a7fbc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=640&crop=smart&auto=webp&s=54817df5ca5daf2e9f9358cbe13e21a93a27b0b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=960&crop=smart&auto=webp&s=92cd3134e404ecd00f3db64155c11fa1c9eaffab', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?width=1080&crop=smart&auto=webp&s=98a877d7e97f587a31e3683db541b5b3e6850fa3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1f_MC65OQ9uO0bOgtECrVJJSODuuAvJEvJNsEYbE-rI.jpg?auto=webp&s=81635bc76347220be741ba6a702e9887e2fc4b7d', 'width': 1200}, 'variants': {}}]} |
GPU with really good inference speed? | 16 | I am looking for a GPU with really good inference speed. Right now I am using the 3090 which has the same or similar inference speed as the A100.
Are there any GPUs that can beat these on inference speed? | 2023-07-27T21:12:52 | https://www.reddit.com/r/LocalLLaMA/comments/15bdovj/gpu_with_really_good_inference_speed/ | ll_Teto_ll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bdovj | false | null | t3_15bdovj | /r/LocalLLaMA/comments/15bdovj/gpu_with_really_good_inference_speed/ | false | false | self | 16 | null |
Is there an uncensored version yet | 11 | The filter on Meta's 70B version 2 model is horrible and rejects anything remotely controversial. I ask it what something is and it says it would be offensive to talk about it. Is there a version yet that answers anything with no pushback? | 2023-07-27T20:40:35 | https://www.reddit.com/r/LocalLLaMA/comments/15bcuff/is_there_a_uncensored_version_yet/ | Avocado_Express | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bcuff | false | null | t3_15bcuff | /r/LocalLLaMA/comments/15bcuff/is_there_a_uncensored_version_yet/ | false | false | self | 11 | null |
Which model to use and which fine tuning to chat with a mail archive of 15 years? | 18 | I've been mesmerized by (and lost track of) all the model variations and different fine-tuning methods being released every other day.
With $5k to $10k to spend, which model would be my starting point?
Do I need to fine-tune it? I guess the answer is yes, because such a huge mail archive would probably not fit into any practical context size?
What kind of hardware do I need, and roughly how long would it take to fine tune or retrain?
I hope some of you can help me get my bearings, to gauge if that project is reasonable. Thanks! | 2023-07-27T20:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/15bc011/which_model_to_use_and_which_fine_tuning_to_chat/ | armaver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bc011 | false | null | t3_15bc011 | /r/LocalLLaMA/comments/15bc011/which_model_to_use_and_which_fine_tuning_to_chat/ | false | false | self | 18 | null |
Can we fine tune llama on a whole book raw text? | 1 | Mostly what i look is that we create prompt and response such as in [guanaco](https://huggingface.co/datasets/guanaco/guanaco) dataset or create instructions, input, and output as in [alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset. But what to do if we have only raw text of book, no prompt and no response. How to tune model in such case | 2023-07-27T19:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/15bapgt/can_we_fine_tune_llama_on_a_whole_book_raw_text/ | mrtac96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15bapgt | false | null | t3_15bapgt | /r/LocalLLaMA/comments/15bapgt/can_we_fine_tune_llama_on_a_whole_book_raw_text/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'P9UuiNgHUxz4r65WHpQR6QN1dPYWIZW6azlxOGds2zA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=108&crop=smart&auto=webp&s=c971c2a73009bd86d8d80ec95e145f903bf14375', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=216&crop=smart&auto=webp&s=2d887ff98a57b6a6a307879603f4aaf05b6cf975', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=320&crop=smart&auto=webp&s=521f80ba92193e8ce56c8503307da8cad1325f7f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=640&crop=smart&auto=webp&s=f110f14f4c91baa049db377cb51b8b09cbc5db13', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=960&crop=smart&auto=webp&s=06b0a49843ac769f8719fe4f1a0fbbe6976c4404', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?width=1080&crop=smart&auto=webp&s=c794b0a4e18fc7c46269f7667e939df083e652f4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U-gBDtJzkeijFl3DSd18JeyCd46c7m1WKUokSnKUAew.jpg?auto=webp&s=da65d0b8e37e9240a0801dde0ef352a478444ca8', 'width': 1200}, 'variants': {}}]} |
I made a 'web search' addon for llama.cpp , it will append your prompt with relevant google search results and will summarize them .. | 29 | 2023-07-27T19:03:52 | https://gist.github.com/staberas/5b83d479ab057dedde3844c419527cc6 | staberas | gist.github.com | 1970-01-01T00:00:00 | 0 | {} | 15bac9d | false | null | t3_15bac9d | /r/LocalLLaMA/comments/15bac9d/i_made_a_web_search_addon_for_llamacpp_it_will/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=108&crop=smart&auto=webp&s=d5811c5bda5fece1040636a6af8702ba790f0fd4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=216&crop=smart&auto=webp&s=eee576fd4da7535eb53ceb88dd8b52f073048441', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=320&crop=smart&auto=webp&s=72872d880460efa723918c000adca0ed259cf775', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=640&crop=smart&auto=webp&s=f3545b9335d763c9da9c16bf7bf9a3f907dbd6f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=960&crop=smart&auto=webp&s=2d241ace0f1c07088fac3f8469dbad3b05d2d419', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=1080&crop=smart&auto=webp&s=9055f11bdc00beb0b3589e1cae5817d6070d83bc', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?auto=webp&s=079a7260ec149880c73263d64811698adb22760a', 'width': 1280}, 'variants': {}}]} | ||
Is the only way to get a detailed step-by-step guide to just keep asking it questions as a fractal? | 4 | When I ask an LLM for a detailed guide it gives me generic step-by-steps. I would like deeper, non-cliche advice from it. The only thing I've found that kind of helps is drilling into substeps on each main step; then it finally starts giving non-generic advice.
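For what it's worth, that drill-down can be automated. Here is a minimal sketch against any OpenAI-compatible endpoint, using the 2023-era `openai` 0.x client; the base URL, model name, and topic string are all placeholders:

```python
# Sketch: "fractal" prompting - ask for steps, then expand each step.
# Placeholders: api_base, model name, and the topic string.
import openai

openai.api_base = "http://127.0.0.1:8081/v1"  # any OpenAI-compatible server
openai.api_key = "none"

def ask(prompt):
    resp = openai.ChatCompletion.create(
        model="local-model",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

guide = ask("Give me a step-by-step guide to brewing beer. Number each step.")
for line in guide.splitlines():
    if line.strip():
        # Drill one level deeper on each main step.
        print(ask(f"Expand this step into specific, non-generic substeps: {line}"))
```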
Any advice would be helpful, thank you. | 2023-07-27T18:44:10 | https://www.reddit.com/r/LocalLLaMA/comments/15b9u6i/is_the_only_way_to_get_a_detailed_guide/ | ArmoredBattalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b9u6i | false | null | t3_15b9u6i | /r/LocalLLaMA/comments/15b9u6i/is_the_only_way_to_get_a_detailed_guide/ | false | false | self | 4 | null |
Llama2 Successes, Challenges, and Potential? | 9 | It's been a little over a week since the Llama2 release and looking through this sub I'm seeing a lot of summarizers, chatbots, etc popping up using the model. I'm really interested in the future of open source in this space and excited to see how things progress, as I'm sure most of you are.
So my questions for those of you who have been tinkering:
* What successes have you had with the model?
* What does Llama2 do well? Anything that surprised you? Or that previous models just couldn't handle?
* What challenges have you faced so far?
* Have there been any major roadblocks you've encountered with Llama2? Any strategies or tips for other builders to avoid similar issues?
* Where do you see the future of Llama2 and open source models?
* I know it's still very early for this so it might include speculation, but are there any potential breakout projects you've heard of or are working on?
Thanks to anyone that shares their input, interested to see how everyone is getting along with Llama2 as I start tinkering with it for the first time. | 2023-07-27T18:14:35 | https://www.reddit.com/r/LocalLLaMA/comments/15b93et/llama2_successes_challenges_and_potential/ | DAVEALLCAPS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b93et | false | null | t3_15b93et | /r/LocalLLaMA/comments/15b93et/llama2_successes_challenges_and_potential/ | false | false | self | 9 | null |
Llama 70b is available on Poe.com | 1 | Just noticed it. | 2023-07-27T18:07:04 | https://www.reddit.com/r/LocalLLaMA/comments/15b8wot/llama_70b_is_available_on_poecom/ | throwaway275912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b8wot | false | null | t3_15b8wot | /r/LocalLLaMA/comments/15b8wot/llama_70b_is_available_on_poecom/ | false | false | self | 1 | null |
Training LLaMA on Sci-Hub (I am not a fed, I swear guys) | 39 | Hey, has someone tried training LLaMA on the massive amount of data you find on Sci-Hub? 25 million research articles, mostly medical and healthcare related... the result would be impressive, wouldn't it?
Would be pretty bad getting caught tho | 2023-07-27T17:47:12 | https://www.reddit.com/r/LocalLLaMA/comments/15b8ec7/training_llama_on_scihub_i_am_not_a_fed_i_swear/ | 753sopho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b8ec7 | false | null | t3_15b8ec7 | /r/LocalLLaMA/comments/15b8ec7/training_llama_on_scihub_i_am_not_a_fed_i_swear/ | false | false | self | 39 | null |
Tesla P40 with GGML higher context models - ggml_new_tensor_impl: not enough space in the scratch memory pool | 7 | So I'm trying to use 8k GGML models with my P40, since that isn't feasible with SuperHOT GPTQ models. They load with the llama loader just fine, but error out with `ggml_new_tensor_impl: not enough space in the scratch memory pool`.
Tried a few different models, TheBloke's Chronos Hermes 13B 8K superhot GGML and OpenAssistant-Llama2-13B-Orca-8K-3319-GGML. Both send back the above error. Latest Ooga. Works totally fine with lower context sizes. I'm giving about 5k context, should be well within limits, but it won't have it.
Any thoughts? | 2023-07-27T17:45:47 | https://www.reddit.com/r/LocalLLaMA/comments/15b8d35/tesla_p40_with_ggml_higher_context_models_ggml/ | CasimirsBlake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b8d35 | false | null | t3_15b8d35 | /r/LocalLLaMA/comments/15b8d35/tesla_p40_with_ggml_higher_context_models_ggml/ | false | false | self | 7 | null |
What is the best uncensored LLM model for ERP with the ability to train on your own chat messages? | 13 | I would like to create an erotic roleplay bot trained on my own chat histories. The model should be uncensored and horny, and there should be the possibility to train it on my own chat data, e.g. from WhatsApp, to give the bot my own voice. | 2023-07-27T17:28:34 | https://www.reddit.com/r/LocalLLaMA/comments/15b7x1l/what_is_the_best_uncensored_llm_model_for_erp/ | Special_Neat4619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b7x1l | false | null | t3_15b7x1l | /r/LocalLLaMA/comments/15b7x1l/what_is_the_best_uncensored_llm_model_for_erp/ | false | false | self | 13 | null |
Looking for help with creating an NSFW model on Replicate | 0 | I've been experimenting with creating a chatbot companion and have made it as far as a UI and an interface to public models hosted on Replicate, but these are all very bland and not good for a personal experience. Looking for some help in creating the best customizable model for a companion: RP, SFW and NSFW.
Any pointers or help are greatly appreciated. | 2023-07-27T16:54:45 | https://www.reddit.com/r/LocalLLaMA/comments/15b71jr/looking_for_help_with_creating_a_nsfw_model_on/ | RMACNJ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b71jr | false | null | t3_15b71jr | /r/LocalLLaMA/comments/15b71jr/looking_for_help_with_creating_a_nsfw_model_on/ | false | false | nsfw | 0 | null |
Best local personal assistant frameworks w/ UI? (gradio???) | 6 | There must be many projects pursuing this sort of thing but I never see them posted. Is this a case of me not looking in the right places/right terms, or are these well-kept secrets? I feel like all the building blocks are there, and before bothering to hack them together for myself I'm curious if there are any projects that are accessible to someone who (as an example) isn't comfortable with langchain yet. | 2023-07-27T16:29:16 | https://www.reddit.com/r/LocalLLaMA/comments/15b6epp/best_local_personal_assistant_frameworks_w_ui/ | ciaguyforeal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b6epp | false | null | t3_15b6epp | /r/LocalLLaMA/comments/15b6epp/best_local_personal_assistant_frameworks_w_ui/ | false | false | self | 6 | null |
Understanding how the huggingface cache works | 0 | I have what I thought was a gargantuan ssd, at 2TB. I load models from the huggingface hub using pipelines and the model name is meta-llama/Llama-2-13b-chat-hf. This usually only downloads once then I can use it, but every few days it seems to re-download the model and not delete the old one. Why is this? Is the model changing or something? Did I set something up wrong? I've been playing on this computer for only a few weeks and due to this multiple download thing (and, in fairness, testing out a bunch of different models) I've eaten up 1TB of my space. Can I safely delete the cache folder? I think I'm done trying new models for a bit so I'd like to just delete everything and download Llama-2-13B-chat exactly one more time and use it for a while.
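For anyone else hitting this: a model re-downloads when the repo publishes a new revision, and the old revision stays in the cache until deleted. Deleting the whole cache folder is safe (models just re-download on next use), and `huggingface_hub` has helpers to inspect and prune it more surgically (there is also a `huggingface-cli scan-cache` command). A minimal sketch; the revision hash is a placeholder:

```python
# Inspect the HF cache and delete stale revisions to reclaim disk space.
from huggingface_hub import scan_cache_dir

report = scan_cache_dir()
print("Total cache size (bytes):", report.size_on_disk)
for repo in report.repos:
    for rev in repo.revisions:
        print(repo.repo_id, rev.commit_hash[:12], rev.size_on_disk)

# Delete specific old revisions (hash below is illustrative - print yours first).
strategy = report.delete_revisions("replace-with-a-real-revision-hash")
print("Would free (bytes):", strategy.expected_freed_size)
strategy.execute()
```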
| 2023-07-27T16:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/15b6dib/understanding_how_the_huggingface_cache_works/ | crono760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b6dib | false | null | t3_15b6dib | /r/LocalLLaMA/comments/15b6dib/understanding_how_the_huggingface_cache_works/ | false | false | self | 0 | null |
Best OSS Coding Assistant for VS Code | 35 | The title says it all. Any recommendation is welcome. I could imagine running a smaller local model on my MacBook Pro M1 16GB, or a self-hosted model that I would spin up for a coding session and then spin down again, e.g. on RunPod, Colab, or Hugging Face Spaces. Is there any VS Code plugin you can recommend that can be wired up with a local/self-hosted model? | 2023-07-27T15:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/15b565t/best_oss_coding_assistant_for_vs_code/ | krazzmann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b565t | false | null | t3_15b565t | /r/LocalLLaMA/comments/15b565t/best_oss_coding_assistant_for_vs_code/ | false | false | self | 35 | null |
Most useful LLaMA's request and a ELI5 guide to understanding basics of the different aspects | 1 | [removed] | 2023-07-27T15:33:49 | https://www.reddit.com/r/LocalLLaMA/comments/15b500c/most_useful_llamas_request_and_a_eli5_guide_to/ | Seronkseronk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b500c | false | null | t3_15b500c | /r/LocalLLaMA/comments/15b500c/most_useful_llamas_request_and_a_eli5_guide_to/ | false | false | self | 1 | null |
I hacked together a llama and vicuna chatbot that runs in-browser using wasm and webgpu. Any other models I should add? | 5 | 2023-07-27T15:30:43 | https://chat.palapa.ai | swordsman1 | chat.palapa.ai | 1970-01-01T00:00:00 | 0 | {} | 15b4x76 | false | null | t3_15b4x76 | /r/LocalLLaMA/comments/15b4x76/i_hacked_together_a_llama_and_vicuna_chatbot_that/ | false | false | default | 5 | null | |
Need Assistance with Configuring a LLM Workstation/Server | 8 | I'm working on a company project to acquire a server or workstation for running local LLMs. My hope was to get something that can run at least a 4-bit quantized version of LLAMA 2 70B.
The specs I've come up with so far include:
- 12-24 core contemporary Xeon or EPYC processor (preferably a generation that supports PCIe 5.0)
- 32-64 GB (2-4 RDIMMs respectively) of system memory (Does the model need to load into system memory before loading into the GPU? Is system memory important if you plan to run the models exclusively on the GPU(s)?)
- Single GPU with at least 48 GB of VRAM (NVIDIA L40, RTX A6000, and RTX 6000 'Ada' are top choices) (Do multi-GPU configurations pose any significant disadvantages over single-GPU configurations? What advantages do enterprise cards like the RTX 6000 have over 'consumer' cards like the RTX 4090, besides total VRAM?)
- At least 1 TB of solid-state storage (Would 2 TB be better if we want to store more models?)
- Linux operating system (Is Linux the preferred operating system for this application? If so, which Linux distro would be preferred? Ubuntu? RHEL? Debian?)
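On the VRAM question, a rough back-of-the-envelope for the 4-bit 70B target (real quant formats add per-group scale overhead, and the KV cache grows with context on top of this):

```python
# Rough VRAM sizing for a 4-bit quantized 70B model (illustrative only).
params = 70e9
bits_per_weight = 4.5  # ~4 bits plus quantization scale overhead
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.0f} GB for weights alone")  # ~39 GB
# A single 48 GB card is therefore tight but workable; system RAM matters
# mostly for the initial load and for any layers offloaded off the GPU.
```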
| 2023-07-27T15:30:11 | https://www.reddit.com/r/LocalLLaMA/comments/15b4wkm/need_assistance_with_configuring_a_llm/ | ForsakenMC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b4wkm | false | null | t3_15b4wkm | /r/LocalLLaMA/comments/15b4wkm/need_assistance_with_configuring_a_llm/ | false | false | self | 8 | null |
I hacked together a llama and vicuna chatbot that runs in-browser. What do you all think? Any other models I should add? | 1 | 2023-07-27T15:23:52 | https://chat.palapa.ai/ | swordsman1 | chat.palapa.ai | 1970-01-01T00:00:00 | 0 | {} | 15b4qvm | false | null | t3_15b4qvm | /r/LocalLLaMA/comments/15b4qvm/i_hacked_together_a_llama_and_vicuna_chatbot_that/ | false | false | default | 1 | null | |
Question about configuring a workstation/server for LLM testing/experimentation | 1 | [removed] | 2023-07-27T15:23:12 | https://www.reddit.com/r/LocalLLaMA/comments/15b4qav/question_about_configuring_a_workstationserver/ | LLMEnthusiast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b4qav | false | null | t3_15b4qav | /r/LocalLLaMA/comments/15b4qav/question_about_configuring_a_workstationserver/ | false | false | self | 1 | null |
Embracing the Open Source Ethos: OpenAI Must Be Open Source | 25 |
The open-source movement embodies a promise for the future – a world driven by collaboration, shared knowledge, and the collective pursuit of better solutions. As the momentum behind open source grows, it's clear that this approach, fueled by a dedicated community and rooted in transparency, has the potential to catalyze tremendous innovation.
LLMs stand at the forefront of technological advancement. As we navigate their complexities, the importance of fostering a culture of collective effort becomes paramount. OpenAI, with its groundbreaking models and initiatives, has taken significant steps by discussing the potential release of an open-source model.
This is still not enough. The broader call from the community is for a deeper immersion in the open-source ethos. It is for true participation in an open way. Open AI!
All of us know that in open-source endeavors, together, we flourish. This is my invitation to OpenAI: Let's come together even more intimately inside the open-source community. In doing so, we champion a brighter, more inclusive future for technology and, by extension, for humanity.
This isn't about us anymore. Let's share the good technology. | 2023-07-27T14:12:00 | https://www.reddit.com/r/LocalLLaMA/comments/15b2ye1/embracing_the_open_source_ethos_openai_must_be/ | hanjoyoutaku | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b2ye1 | false | null | t3_15b2ye1 | /r/LocalLLaMA/comments/15b2ye1/embracing_the_open_source_ethos_openai_must_be/ | false | false | self | 25 | null |
Hardware requirements | 0 | I am a system admin / developer and I wanted to ask about the hardware requirements.
How does the GPU affect using a model, besides the lower token generation speed? | 2023-07-27T13:57:52 | https://www.reddit.com/r/LocalLLaMA/comments/15b2le1/hardware_requirements/ | AcanthisittaBig2910 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b2le1 | false | null | t3_15b2le1 | /r/LocalLLaMA/comments/15b2le1/hardware_requirements/ | false | false | self | 0 | null |
text gen webui llama.cpp metal inference? | 1 | Can the llama.cpp loader run with Metal inference for Apple Silicon? It gives amazing performance on the llama.cpp command line, but I'm hoping to use it with this UI. I have researched this and tried to tinker myself... I may just be incompetent. | 2023-07-27T13:07:04 | https://www.reddit.com/r/LocalLLaMA/comments/15b1fus/text_gen_webui_llamacpp_metal_inference/ | Ok_Bid_8789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b1fus | true | null | t3_15b1fus | /r/LocalLLaMA/comments/15b1fus/text_gen_webui_llamacpp_metal_inference/ | false | false | self | 1 | null |
MiniGPT-4.cpp with other models (LLaMA and LLaMA-2) | 43 | Recently tried [MiniGPT-4.cpp](https://github.com/Maknee/minigpt4.cpp) (for those who don't know, it's Vicuna that can see images and this is its ggml version). It's amazingly simple to install and works pretty well.
[Using the recommended Vicuna-13b-v0 model.](https://preview.redd.it/nom9ffwc1ieb1.png?width=1920&format=png&auto=webp&s=b2bae3c9d3b6d675fd4dcde590aea5ddfe6fa50f)
But then, curious (and a bit fed up with Vicuna claiming it can't see things), I decided to switch and just try the next model lying around on my computer: [this](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GGML).
Surprisingly, it worked! Although this one would go on tangents afterwards, probably because of its increased context length. But it gave pretty sensible results.
[wizard-vicuna-13b-uncensored-superhot-8k](https://preview.redd.it/f7mf3xeg1ieb1.png?width=1920&format=png&auto=webp&s=80fe512b19a2d6c0e01a5ccb58e076e0e8ec015c)
[Maybe it shouldn't be surprising though, given they're based off the same LLaMA 13b model](https://preview.redd.it/p2efo1q32ieb1.png?width=1920&format=png&auto=webp&s=efc8b6513f7ae98fe10e783b070f51012093e646)
Now curious, I tried the other 13b model on hand at the time: [this](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML).
[Nous-Hermes-Llama2-13b](https://preview.redd.it/x8pxrluk2ieb1.png?width=1920&format=png&auto=webp&s=2851bf4c3317f105988dc9cb6154bee5147327b0)
It was able to load, although this time the results were useless:
​ | 2023-07-27T12:30:06 | https://www.reddit.com/r/LocalLLaMA/comments/15b0mu4/minigpt4cpp_with_other_models_llama_and_llama2/ | VisitingCookies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b0mu4 | false | null | t3_15b0mu4 | /r/LocalLLaMA/comments/15b0mu4/minigpt4cpp_with_other_models_llama_and_llama2/ | false | false | 43 | {'enabled': False, 'images': [{'id': 'isukIG5hDzQiQKKWa3Tg-f1E8gybM0hXiaykIrwG36M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=108&crop=smart&auto=webp&s=f2e4db5e479efbc9fd741943f731a4a8ad591fe0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=216&crop=smart&auto=webp&s=08ffdd8bcc9e62b4e901980875fef2ce6bf9aa9b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=320&crop=smart&auto=webp&s=6a2fa592d0dd4e5fed06a777f4cf66919a56f7c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=640&crop=smart&auto=webp&s=5a9869419f73b3af178c54a7f86dda308379b062', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=960&crop=smart&auto=webp&s=02084bd7a401e9fc0c4dcb50d5013397ffb68e8d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?width=1080&crop=smart&auto=webp&s=c7ecabe67765496c5d6d5583e9ed82746723b0fb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_SY-E5GUK8OYtL2IkT9k340lvfSq7eSl7msHKwaHvAM.jpg?auto=webp&s=303b2c5a7cb7a5dc001e7ad97a4b55ba489fa601', 'width': 1200}, 'variants': {}}]} | |
Trying to train LoRA, hitting a wall | 1 | Hi everyone,
I'm trying to train my first LoRA on a Llama v1 model ([guanaco 13B 8-bit](https://huggingface.co/TheBloke/guanaco-13B-GGML)), but I keep getting this error that I can't figure out:
[https://imgur.com/a/gI8gPWx](https://imgur.com/a/gI8gPWx)
Using the latest Oobabooga on Windows, text generation works fine. `--load-in-8bit` is set in the CMD flags.
Here are my parameters:
- Override existing files: yes
- Save every n steps: 0
- Micro-batch size: 3
- Batch size: 128
- Epochs: 3
- Learning rate: 3e-4
- LR Scheduler: Linear
- LoRA rank: 32
- LoRA alpha: 64
- Cutoff length: 256
- Hard cut string: \n\n\n
- Ignore small blocks: 0
- Overlap length: 128
- Prefer Newline Cut Length: 128
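For reference, outside the webui these settings map roughly onto a `peft` config like the sketch below; the dropout and target modules are my assumptions (typical for LLaMA-family models), not necessarily what the webui uses internally:

```python
# Rough PEFT equivalent of the LoRA settings above; illustrative only.
from peft import LoraConfig

config = LoraConfig(
    r=32,                 # LoRA rank
    lora_alpha=64,        # LoRA alpha
    lora_dropout=0.05,    # not exposed above; a common default
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # typical choice for LLaMA models
)
```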
My dataset is a raw text file with 31,000 words, though it has some JSON fragments in it.
Does anyone know how I can get this Lora to train? Thank you!!!! | 2023-07-27T12:25:12 | https://www.reddit.com/r/LocalLLaMA/comments/15b0j0t/trying_to_train_lora_hitting_a_wall/ | tenplusacres | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b0j0t | false | null | t3_15b0j0t | /r/LocalLLaMA/comments/15b0j0t/trying_to_train_lora_hitting_a_wall/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5VCuKo6Xt9VvC0AS6srcLpaWs4jm4WjHs0hN-LxjdfY', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=108&crop=smart&auto=webp&s=460973af181c45dadac1fbfccec016c343adc851', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=216&crop=smart&auto=webp&s=b12e5ae22a4ed3cae60a4a1fe02e9c3a4ec30908', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=320&crop=smart&auto=webp&s=c979ac46fc28d7a4b4bc08c089a34b5827087d79', 'width': 320}, {'height': 331, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=640&crop=smart&auto=webp&s=249717057bbd800ebc8eabb77ddcc087cc06718e', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=960&crop=smart&auto=webp&s=a037596755d7e7e679028f055a589ccfc5f2a4b8', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?width=1080&crop=smart&auto=webp&s=c4409c05ffa402b9392f2690776724f8b9df45c4', 'width': 1080}], 'source': {'height': 677, 'url': 'https://external-preview.redd.it/nIdtmPbUTm9EBlv_rJkb_cmX9wxfr4lgKEPQR36Ixco.jpg?auto=webp&s=f5c275c847fc08fd0fb5c827f2c84b8778c78567', 'width': 1308}, 'variants': {}}]} |
What methods do you use to get the base llama 2 (not llama 2 chat) to do what you want? | 1 | Whether it is writing stories, roleplaying, following instructions, writing code, etc, what methods do you use to keep base llama 2 on track and doing what you want?
[Here's one example of advice.](https://www.reddit.com/r/LocalLLaMA/comments/15a8ppj/unveiling_the_latent_potentials_of_large_language/jtjb678/) | 2023-07-27T12:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/15b0554/what_methods_do_you_use_to_get_the_base_llama_2/ | SoylentMithril | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15b0554 | false | null | t3_15b0554 | /r/LocalLLaMA/comments/15b0554/what_methods_do_you_use_to_get_the_base_llama_2/ | false | false | self | 1 | null |
Yet another LLaMA2 finetune | 37 | Today I release 3 finetunes for LLaMA 2 and GPT-J: Kimiko.
Kimiko is trained on a small dataset of 3,000 instructions + roleplay data. I avoid using GPT data as much as possible, but adding around 1,000 instruction samples from airoboros makes the model smarter for some reason?
In roleplay the model is slightly biased toward NSFW while giving really long responses. Both the 6B and 7B are trained as LoRA, while the 13B is QLoRA.
[https://huggingface.co/nRuaif/Kimiko_J](https://huggingface.co/nRuaif/Kimiko_J)
[https://huggingface.co/nRuaif/Kimiko_7B](https://huggingface.co/nRuaif/Kimiko_7B)
[https://huggingface.co/nRuaif/Kimiko_13B](https://huggingface.co/nRuaif/Kimiko_13B)
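Assuming the LoRA repos ship standard `peft` adapter files (check the model cards; the repo layout and base model name are my assumptions), loading the 7B one should look roughly like this:

```python
# Sketch: apply a LoRA adapter on top of its base model with peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "nRuaif/Kimiko_7B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```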
Enjoy
​ | 2023-07-27T11:59:36 | https://www.reddit.com/r/LocalLLaMA/comments/15azzgl/yet_another_llama2_finetune/ | Tight-Juggernaut138 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15azzgl | false | null | t3_15azzgl | /r/LocalLLaMA/comments/15azzgl/yet_another_llama2_finetune/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'OohmSsOcoNO4LSXXW9GcPBNlOqynzu-GTLDqFehvQwM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=108&crop=smart&auto=webp&s=881ea529d2d05a142787cc6410b183711bd33f8f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=216&crop=smart&auto=webp&s=7aa850716e7dde3d3289ce9ac9243efc877a6199', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=320&crop=smart&auto=webp&s=3a54a4194cfc0dcf96a6276a433dea9b7b844391', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=640&crop=smart&auto=webp&s=2c6caba7c0bb19468897a25872fac6598c9b4f6d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=960&crop=smart&auto=webp&s=1ec11ecc5834ef255823755375b478fbc29e731f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?width=1080&crop=smart&auto=webp&s=434b49b54ef9eb8a7c830dec7f475b9ffe3e82fa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/j0azsXu54w4tYTN9PDdc9FLeefjOXNjo3VAEsHGzfAg.jpg?auto=webp&s=603c8a480b372380589164056a176c30e59c84d4', 'width': 1200}, 'variants': {}}]} |
What can we achieve with small models? | 20 | How can we use models smaller than 1B to do something useful? | 2023-07-27T11:08:30 | https://www.reddit.com/r/LocalLLaMA/comments/15ayz6l/what_can_we_achieve_with_small_models/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ayz6l | false | null | t3_15ayz6l | /r/LocalLLaMA/comments/15ayz6l/what_can_we_achieve_with_small_models/ | false | false | self | 20 | null |
Large LLMs for summarization and topic extraction/modeling | 8 | Hey Everyone,
I am sorry if this is a frequently asked question or something obvious!
I have a number of Excel documents that contain market research information about some products. Each row contains a product and each column has information that describes it, such as the Aim of the product, the steps to use the product, if you need additional materials, etc.
I want to combine this information into a product description. I have already tried using 13B models that place high on the Hugging Face LLM leaderboard, such as "NousResearch/Nous-Hermes-Llama2-13b".
I am not sure if my approach is correct. I understand that these models are instruction-tuned either to carry out various tasks or to act like a chatbot, but I am having trouble differentiating between the two.
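One thing that may help with the differentiating: Nous-Hermes-style models are tuned on an Alpaca-like template, so the prompt format often matters more than the chat-vs-task framing. A sketch of building such a prompt from one spreadsheet row; the column names here are made up:

```python
# Build an Alpaca-style instruction prompt from one product row.
# The row fields are placeholders for your actual Excel columns.
row = {"product": "Widget", "aim": "...", "steps": "...", "materials": "..."}

prompt = (
    "### Instruction:\n"
    "Write a concise product description combining the details below.\n\n"
    f"Product: {row['product']}\nAim: {row['aim']}\n"
    f"Steps: {row['steps']}\nMaterials: {row['materials']}\n\n"
    "### Response:\n"
)
```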
Is this a good approach? Using Instruction-tuned out of the box LLMs or should I try to use something else?
Sorry if this is something that is frequently asked!
Edit: Forgot to add my topic modeling/extraction part, sorry.
I was wondering if topic extraction and modeling with an LLM provides better results than with TopicBERT or LDA. Each document usually ranges from 300 to 6-7 thousand words. | 2023-07-27T10:35:58 | https://www.reddit.com/r/LocalLLaMA/comments/15aycxs/large_llms_for_summarization_and_topic/ | Laskas123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15aycxs | false | null | t3_15aycxs | /r/LocalLLaMA/comments/15aycxs/large_llms_for_summarization_and_topic/ | false | false | self | 8 | null |
Seeking advice on hardware and LLM choices for an internal document query application | 13 | Hello everyone,
I am working on a project to implement an internal application in our company that will use a Large Language Model (LLM) for document queries. The application will be used by a team of 20 people simultaneously during working hours.
I am seeking advice on the necessary hardware infrastructure and the choice of LLM to support this application. Here are the options we are considering:
Hardware:
- Dell server?
- NVIDIA A100 graphics card
- NVIDIA RTX 3090 graphics card
- CPU?
- RAM?
LLM:
- GPTQ models
- GGML models
We would like to get a better idea of the tokens per second (tk/s) speed we can expect with these setups, especially when used with multiple users simultaneously.
Does anyone have experience with similar setups and can provide some guidance? Are there any other hardware considerations or LLM choices we should keep in mind? How can we optimize our infrastructure to achieve the best possible tokens per second speed?
Any advice or suggestions would be greatly appreciated. Thank you in advance! | 2023-07-27T09:27:36 | https://www.reddit.com/r/LocalLLaMA/comments/15ax45f/seeking_advice_on_hardware_and_llm_choices_for_an/ | zasp2300 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ax45f | false | null | t3_15ax45f | /r/LocalLLaMA/comments/15ax45f/seeking_advice_on_hardware_and_llm_choices_for_an/ | false | false | self | 13 | null |
Why doesn't Google's USM paper specify which languages it was trained on? | 5 | All AI developers claim to train their models on "hundreds of languages" but I rarely find any source with greater detail (which languages, dataset size per language, performance per language).
[Even Google's paper](https://arxiv.org/pdf/2303.01037.pdf) fails to mention any details on those 300+ languages. Am I looking in the wrong places, or is it a trade secret? | 2023-07-27T09:26:30 | https://www.reddit.com/r/LocalLLaMA/comments/15ax3gd/why_doesnt_googles_usm_paper_specify_which/ | BigBootyBear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ax3gd | false | null | t3_15ax3gd | /r/LocalLLaMA/comments/15ax3gd/why_doesnt_googles_usm_paper_specify_which/ | false | false | self | 5 | null |
Can you convert your own checkpoint to GGML? | 1 | I've been looking around at all the new stuff, and it seems that GGML is the way to go for inference. The thing is, I'd first like to fine-tune a Llama-2 version (which already has a GGML implementation), but it seems that I can't do that directly. I need to first fine-tune the original version and then convert that into GGML/GPTQ or whatever...
Any ideas on how this may be achieved? Also, let's say this particular version of the model has GGML/GPTQ/LoRA/QLoRA checkpoints available. Which one do I use for fine-tuning? Do I use the original base model and then attach whatever I want onto it (e.g. a new QLoRA which is fine-tuned)? I'm a little confused with all the new stuff coming out.
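For the common case (a LoRA/QLoRA fine-tune of an HF-format base), one workable path is to merge the adapter into the base model and then run llama.cpp's converter on the merged folder. A sketch with placeholder paths and names:

```python
# Merge a (Q)LoRA adapter into its base model, save in HF format,
# then convert/quantize with llama.cpp. Paths and names are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
merged = PeftModel.from_pretrained(base, "./my-qlora-adapter").merge_and_unload()
merged.save_pretrained("./merged-model")
AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf").save_pretrained("./merged-model")
# Then, from a llama.cpp checkout (shell):
#   python convert.py ./merged-model --outtype f16
#   ./quantize ./merged-model/ggml-model-f16.bin ./merged-model/ggml-model-q4_0.bin q4_0
```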
Thank you! | 2023-07-27T09:23:58 | https://www.reddit.com/r/LocalLLaMA/comments/15ax1sg/can_you_convert_your_own_checkpoint_to_ggml/ | Ok_Coyote_8904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ax1sg | false | null | t3_15ax1sg | /r/LocalLLaMA/comments/15ax1sg/can_you_convert_your_own_checkpoint_to_ggml/ | false | false | default | 1 | null |
LLM Boxing - Llama 70b-chat vs GPT3.5 blind test | 188 | 2023-07-27T09:09:11 | https://llmboxing.com/ | andreasjansson | llmboxing.com | 1970-01-01T00:00:00 | 0 | {} | 15awsel | false | null | t3_15awsel | /r/LocalLLaMA/comments/15awsel/llm_boxing_llama_70bchat_vs_gpt35_blind_test/ | false | false | default | 188 | null | |
How to analyse http requests for malicious strings | 1 | Hi, I am looking for someone to point me in the right direction.
The task is to classify the HTTP requests that come to a honeypot as "crawler" or "malicious". For example, if I can detect a Log4j exploit inside one of the headers, I can say that that request is malicious.
The problem is, this exploit could be inside any of numerous headers. It can be at the beginning or at the end. And this is just one exploit; there are many different exploits with their own unique strings, and I don't know them all, nor do I have a "regex" for each one of them. The malicious string could also not be inside the headers, but inside the URL as a query parameter. Or the request could be made to something like www/IP.com/phpadmin/.env (or something like this).
My current thought process is to take some open-source LLM, because it has some basic knowledge of how language works, and somehow add this cybersecurity domain knowledge to it: further train it on the CVE database, example scripts that showcase each CVE, etc.
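Before committing to training, it is cheap to sanity-check whether an off-the-shelf instruction-tuned model can already label requests zero-shot. A minimal sketch; the model is just an example (any local instruct model can be swapped in), and the sample request is made up:

```python
# Zero-shot triage of an HTTP request with an instruction-tuned model.
from transformers import pipeline

pipe = pipeline("text-generation", model="NousResearch/Nous-Hermes-Llama2-13b")

request = "GET /phpadmin/.env HTTP/1.1\nUser-Agent: ${jndi:ldap://evil/a}"
prompt = (
    "### Instruction:\nClassify the following HTTP request as 'crawler' or "
    "'malicious', and name the suspected exploit if any.\n\n"
    f"{request}\n\n### Response:\n"
)
print(pipe(prompt, max_new_tokens=64)[0]["generated_text"])
```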
Am I barking up the right tree here? Or should I maybe train a language model from scratch, so that the embeddings etc. are specialized to the cybersec space (because there is a lot of programming code here)? Or maybe I should use some other way to analyse text?
I would be grateful if someone could point me in the right direction (links to blogs, articles, or other educational material).
Thanks | 2023-07-27T07:48:04 | https://www.reddit.com/r/LocalLLaMA/comments/15ave9l/how_to_analyse_http_requests_for_malicious_strings/ | PopayMcGuffin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ave9l | false | null | t3_15ave9l | /r/LocalLLaMA/comments/15ave9l/how_to_analyse_http_requests_for_malicious_strings/ | false | false | self | 1 | null |
An INSANE(!!) output from Llama2 (7-B Chat) on very custom runtime settings and prompt interjection/interruption | 13 | Poem about: A woman in her 40s
everybody has heard of midlife crisis, but what exactly is it ?. By definition a 'mid- life crises' occur during the late middle adulthood stage and generally happen between ages Sixty to seventy years old . Midè exists across gender culture economic status ,educational level surprisigly women are particularly prone Autumn’ Brown Within these generations many transformations mark this significant phase on Life journey The first being physical changes such as weight gain health issues thyroid problems difficulty losing height etcetera As our body slow down tempéramental process increase ( joint pain back strains ) we may struggle adjusting emotional scale( Depression Anxiety stress) reflect actual circumstances or perceived deficits when dealing social relatoinships work satisfaction family dynamics inner dialogue May result intense searching for self identity exploring new interests hobbies even complete transformation seeking validation from outside sources do yes? Is This A Crises Or Extended Exploration Women In Generally More Prune To Identity Trying Ageism Rampant thoughtful reflection that begins at early thirt iest Slumber deep within each soul—an unseen war waged an internal one where femininity battles strive daily against societal expect - For them its fight just keep breathng with shaking knee cap while navigating menace waters. But don`t worry dear struggling sister You Aren t alone amidst torment Your kind have lived through countless wars over millennia endure almost every hardship known mankind eke out perilous existence subjugate their inners selves imposed impositions; dress code modesty pious postures dictated obligatory chastity et al Yet here lyes secret weapon Ageless beauty resilience lies hidden behind mask agitated performed demanded by fear society ` s standards thus empower your voice release yourself Finding oneself involves embracement answer questions longstanding curiosity surrounding Self identification purpose unique gifts particular flavor signature contribution desired community acceptance appraisals feedback loops formidable network support nurturing mutual validatiuhnn which transformative healing restores brilliance inherent you!! | 2023-07-27T06:38:53 | https://www.reddit.com/gallery/15au6tc | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15au6tc | false | null | t3_15au6tc | /r/LocalLLaMA/comments/15au6tc/an_insane_output_from_llama2_7b_chat_on_very/ | false | false | 13 | null | |
Instruction tuning is not alignment. Calling it that supports regulatory capture we do not want. | 40 | tl;dr I'm noticing that 'alignment' is becoming shorthand for instruction tuning (SFT, RLHF, RLAIF, etc), or the subset of instruction tuning used for censorship, here and on twitter, and suggesting we use language that is not being used for regulatory capture.
Long version:
OpenAI orchestrated this confusion for their benefit when they deceptively claimed an 'AI alignment' win for RLHF ([https://openai.com/research/instruction-following](https://openai.com/research/instruction-following)), taking 'AI alignment' concerns and rounding them off to a problem they could address with techniques they knew, thereby presenting themselves as safety-conscious responsible stewards. They've used this spin while lobbying for regulations on LLMs that would cement their lead by suppressing competition (inclusive of open source and open access LLMs).
Since this sub is generally opposed to regulatory capture by OpenAI and Anthropic, I suggest using more descriptive shorthand instead, like 'instruction tuning' (when referring to SFT/RLHF/RLAIF generally), and 'censorship tuning' (or 'HR alignment') when that is the referent.
AI Alignment is actually a field dedicated to building ASI that supports human thriving. It's a big topic, and I'm not enough of an alignment nerd to do it justice, but the wikipedia article does ([https://en.wikipedia.org/wiki/AI\_alignment](https://en.wikipedia.org/wiki/AI_alignment)), and the very short version is that instruction tuning is a technique for pursuing a very superficial case of 'outer alignment,' arguably at the cost of 'inner alignment' (cf the Jungian "[Waluigi Effect](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post)"), which is what real AI safety researchers are more centrally concerned about. | 2023-07-27T06:00:44 | https://www.reddit.com/r/LocalLLaMA/comments/15ati2t/instruction_tuning_is_not_alignment_calling_it/ | georgejrjrjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ati2t | false | null | t3_15ati2t | /r/LocalLLaMA/comments/15ati2t/instruction_tuning_is_not_alignment_calling_it/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': 'WoqTJZZTlAsamfZf6P0oB2kH0Huv63sngOj_f2okCrI', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/9cYRsLLckyYmZrmQDM5KLw9SNIVY_kmRfX8vPTwUEAA.jpg?width=108&crop=smart&auto=webp&s=1f18aa3e092399f5b23b3d8e915b3d0a4b8f7aa8', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/9cYRsLLckyYmZrmQDM5KLw9SNIVY_kmRfX8vPTwUEAA.jpg?width=216&crop=smart&auto=webp&s=50fff20068f81447034d9f42f8440e60fff4847e', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/9cYRsLLckyYmZrmQDM5KLw9SNIVY_kmRfX8vPTwUEAA.jpg?width=320&crop=smart&auto=webp&s=8b4198ac915e1780370b1f975e8da2d570ea9e8c', 'width': 320}, {'height': 425, 'url': 'https://external-preview.redd.it/9cYRsLLckyYmZrmQDM5KLw9SNIVY_kmRfX8vPTwUEAA.jpg?width=640&crop=smart&auto=webp&s=ceb1be51bb91f4c24563b9b3f5839e4871246950', 'width': 640}, {'height': 638, 'url': 'https://external-preview.redd.it/9cYRsLLckyYmZrmQDM5KLw9SNIVY_kmRfX8vPTwUEAA.jpg?width=960&crop=smart&auto=webp&s=73b27e44cab69edb06ed4692641792b0b6634015', 'width': 960}, {'height': 718, 'url': 'https://external-preview.redd.it/9cYRsLLckyYmZrmQDM5KLw9SNIVY_kmRfX8vPTwUEAA.jpg?width=1080&crop=smart&auto=webp&s=1ccb59a0d0622d061b87450c5be340d5b2a86cf8', 'width': 1080}], 'source': {'height': 1373, 'url': 'https://external-preview.redd.it/9cYRsLLckyYmZrmQDM5KLw9SNIVY_kmRfX8vPTwUEAA.jpg?auto=webp&s=43b981a0dcce9b7dbfce9f0e28d7e154d47167c7', 'width': 2064}, 'variants': {}}]} |
How to install gpt4-x-alpaca on a laptop GPU? | 1 | Hi,
I'm new to open source AI models and want to try them out. I heard that gpt4-x-alpaca is good. However, I don't have the best hardware. Is it possible to install gpt4-x-alpaca on an RTX 3060 laptop GPU with a Ryzen 5 5600H and 16GB of ram? What can I do to optimize it? | 2023-07-27T04:24:47 | https://www.reddit.com/r/LocalLLaMA/comments/15arq9z/how_to_install_gpt4xalpaca_on_a_laptop_gpu/ | throawayalt9989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15arq9z | true | null | t3_15arq9z | /r/LocalLLaMA/comments/15arq9z/how_to_install_gpt4xalpaca_on_a_laptop_gpu/ | false | false | self | 1 | null |
Where can I find the relative performance of proprietary models (e.g. ChatGPT) compared to open source models? | 5 | I know there is an Open LLM Leaderboard on Hugging Face, but I can't find any proprietary models there.
Am I missing something? | 2023-07-27T03:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/15aqnu1/where_can_i_find_the_relative_performance_of/ | regunakyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15aqnu1 | false | null | t3_15aqnu1 | /r/LocalLLaMA/comments/15aqnu1/where_can_i_find_the_relative_performance_of/ | false | false | self | 5 | null |
Roleplay with models locally | 17 | Hey all, lately I've been reading a lot of posts here about roleplaying with models, both SFW and NSFW.
I want to ask for further information and maybe some tutorials about how to get this going and an explanation about what kind of roleplaying you guys do. Thanks! | 2023-07-27T02:56:46 | https://www.reddit.com/r/LocalLLaMA/comments/15apyah/roleplay_with_models_locally/ | ZCAY6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15apyah | false | null | t3_15apyah | /r/LocalLLaMA/comments/15apyah/roleplay_with_models_locally/ | false | false | self | 17 | null |
Llama2 (7B-Chat) renders crazy story after simple interactive prompt: “A Short story about: a dog” | 11 | Run command: `./main -m ./models/7B/ggml-model-q4_0.bin --interactive-first --color -i --prompt "A Short story about: "`
___
The only input I gave to the prompt was the two words “a dog”
___
A Short story about: a dog
Unterscheidung between a real and a fake dog in the street.
As I was walking down the street, I saw something that made my heart race. There, in front of me, was a dog! Or so I thought. The thing is, it didn't quite look like any dog I had ever seen before. It was tall and lanky, with glowing green eyes that seemed to be looking right through me. And its fur was an odd shade of purple.
I stopped in my tracks, unsure of what to do next. The "dog" sat there placidly, not moving a muscle. I took a few steps closer, and it suddenly stood up on its hind legs. My heart skipped a beat as the creature looked directly at me with an unnerving intensity.
I couldn't help but wonder if this was some kind of hallucination or dream. But then the "dog" spoke to me in a low, rumbling voice.
"Hello there," it said. "My name is Baxter. And you are?"
I stared at Baxter in shock, unsure of how to respond. Finally, I managed to stammer out a greeting. "Uh, hi...Baxter? Are you a dog?"
Baxter chuckled, the sound rumbling through his body like thunder. "Oh, no," he said. "I am not a dog. I'm a creature from another world."
My mind reeled as I tried to process this revelation. A creature from another world? How was that possible? But before I could ask any questions, Baxter spoke again.
"Don't worry," he said. "I mean no harm. My planet is at war with yours, and I have come to infiltrate your society and gather intelligence."
I couldn't believe what I was hearing. An alien creature had just revealed itself to me on the street! But as I looked into Baxter's eyes, I saw something else there - a deep sadness and longing that seemed almost human.
"But why?" I asked, my voice barely above a whisper. "Why would you do such a thing?"
Baxter sighed, his body sagging under the weight of his secrets. "War has been raging on our planet for centuries," he said. "We have lost many good people, and our society is struggling to survive. We need any advantage we can get - even if it means infiltrating another world."
I felt a surge of compassion for this strange creature, who was so desperate that he would risk everything to gain the upper hand in his planet's war. "I see," I said softly. "But...what now?"
Baxter looked at me with a hint of hope in his eyes. "Now? Now we talk."
And so we did. We talked for hours, exchanging stories and secrets and learning about each other's worlds. Baxter told me about his planet, about the wars that had ravaged it for centuries, and about the desperation that had driven him to come here. I told him about my own world, about the mundane routines of my daily life, and about the strange occurrences that had brought us together.
As we talked, something shifted between us. It was as if a weight had been lifted from Baxter's shoulders, and he seemed to relax for the first time since I had met him. I felt a sense of connection with this alien creature, and I knew that our meeting had changed me forever.
And so we continued to talk, long after the sun had set and the stars had come out. We talked about the universe and its mysteries, and about the infinite possibilities that lay beyond our tiny corner of it. We talked about hope and despair, and about the resilience of the human spirit.
As I listened to Baxter's stories and shared my own, I realized that we were not so different after all. We were both searching for something - for a sense of belonging, for meaning and purpose, for a connection to something greater than ourselves. And in that moment, I knew that I would always cherish the memory of this strange and wonderful encounter with an alien from a distant world. | 2023-07-27T02:55:34 | https://www.reddit.com/gallery/15apxdv | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15apxdv | false | null | t3_15apxdv | /r/LocalLLaMA/comments/15apxdv/llama2_7bchat_renders_crazy_story_after_simple/ | false | false | 11 | null | |
! POE is providing free hosting to fine-tuned version of Llama. | 0 | Just discovered this from Quora POE's X:
>We are also interested in helping host fine-tuned versions of Llama so that developers don’t have to manage a server or pay large costs if their bot gets popular. Please reach out to us at developers@poe.com if you are interested in participating in this program.
The original X: [https://twitter.com/poe_platform/status/1684362717459812352](https://twitter.com/poe_platform/status/1684362717459812352)
I guess the WizardLM team can submit their fine-tunes so that we can all try and use them more easily? For those who just want to use an uncensored model but don't want to host it locally, it may actually be good news?
Not to mention, the current demo from
[https://www.reddit.com/r/LocalLLaMA/comments/159bl45/official_wizardlm13bv12_released_trained_from/](https://www.reddit.com/r/LocalLLaMA/comments/159bl45/official_wizardlm13bv12_released_trained_from/)
is giving out "502 Bad Gateway" when I try to visit. I can't even try the new version.
| 2023-07-27T02:19:01 | https://www.reddit.com/r/LocalLLaMA/comments/15ap60z/poe_is_providing_free_hosting_to_finetuned/ | NickCanCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ap60z | false | null | t3_15ap60z | /r/LocalLLaMA/comments/15ap60z/poe_is_providing_free_hosting_to_finetuned/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'xb3MWzBvWf72_Fnm-kl1NF-yteH1_XhqJKxbK74vNEE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/1hTGG4EpFOQLIij-piXLuYdKR64pyoAbNiGPdJ8MwKc.jpg?width=108&crop=smart&auto=webp&s=1c3eb6af29a1cdca2cc461637126a5aa116cc6a6', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/1hTGG4EpFOQLIij-piXLuYdKR64pyoAbNiGPdJ8MwKc.jpg?auto=webp&s=179d598c2417f7674e4567c17a4eee35f4bc3675', 'width': 140}, 'variants': {}}]} |
Hosted WizardCoder | 6 | Hi
Is there any website that provides access to hosted open source models like WizardCoder or the new WizardLM?
Something similar to how poe.com provides access to proprietary LLMs like chatgpt and palm.
I tried h2o gpt but they don't have WizardCoder or the newer Llama2 fine tunes.
Thanks | 2023-07-26T23:52:14 | https://www.reddit.com/r/LocalLLaMA/comments/15alxzk/hosted_wizardcoder/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15alxzk | false | null | t3_15alxzk | /r/LocalLLaMA/comments/15alxzk/hosted_wizardcoder/ | false | false | self | 6 | null |
What is the limit if you have infinite money (PC build for roleplay)? | 57 | Let's say you are a millionaire and decide to build a PC to do roleplay in SillyTavern.
So, you build a PC with the best possible processor, 64GB RAM, and when it comes to the GPUs, *four RTX 6000s*, meaning a total of 192GB of VRAM.
My questions are:
What would be the absolute best roleplaying LLM for this machine?
How many context tokens would I have? Would long term memory be possible?
Would it be possible to use an LLM while running Stable Diffusion and an advanced text-to-speech AI at the same time?
Would I be able to ever leave my house again? | 2023-07-26T23:19:44 | https://www.reddit.com/r/LocalLLaMA/comments/15al63i/what_is_the_limit_if_you_have_infinite_money_pc/ | allmightyloser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15al63i | false | null | t3_15al63i | /r/LocalLLaMA/comments/15al63i/what_is_the_limit_if_you_have_infinite_money_pc/ | false | false | self | 57 | null |
Is there any real difference between a 13b, 30b, or 60b LLM when it comes to roleplay? | 31 | Is there any real difference between a 13b, 30b, or 60b LLM when it comes to roleplay?
Honestly, aside from some bugs and lore mistakes here and there (like characters confusing names or misinterpreting some things), a good 13b LLM seems to be really, really solid, creative and fun. Can you go any higher with a 30b or 60b LLM?
I never used something above 13b, since my PC can't handle it and bla bla bla. | 2023-07-26T23:04:36 | https://www.reddit.com/r/LocalLLaMA/comments/15aksvx/is_there_any_real_difference_between_a_13b_30b_or/ | allmightyloser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15aksvx | false | null | t3_15aksvx | /r/LocalLLaMA/comments/15aksvx/is_there_any_real_difference_between_a_13b_30b_or/ | false | false | self | 31 | null |
Short guide to hosting your own llama.cpp openAI compatible web-server | 156 | ## llama.cpp-based drop-in replacement for GPT-3.5
Hey all, I had a goal today to set-up wizard-2-13b (the llama-2 based one) as my primary assistant for my daily coding tasks. I finished the set-up after some googling.
llama.cpp added a server component; this server is compiled when you run make as usual. This guide is written with Linux in mind, but for Windows it should be mostly the same, apart from the build step.
1. Get the latest llama.cpp release.
2. Build as usual. I used `LLAMA_CUBLAS=1 make -j`
3. Run the server `./server -m models/wizard-2-13b/ggml-model-q4_1.bin`
4. There's a bug with the openAI api unfortunately, you need the `api_like_OAI.py` file from this branch: https://github.com/ggerganov/llama.cpp/pull/2383, this is it as raw txt: https://raw.githubusercontent.com/ggerganov/llama.cpp/d8a8d0e536cfdaca0135f22d43fda80dc5e47cd8/examples/server/api_like_OAI.py. You can also point to this pull request if you're familiar enough with git instead.
- So download the file from the link above
- Replace the `examples/server/api_like_OAI.py` with the downloaded file
5. Install python dependencies `pip install flask requests`
6. Run the openai compatibility server, `cd examples/server` and `python api_like_OAI.py`
With this set-up, you have two servers running.
1. The ./server one with default host=localhost port=8080
2. The openAI API translation server, host=localhost port=8081.
You can access llama's built-in web server by going to localhost:8080 (port from `./server`)
Any plugins, web UIs, applications, etc. that can connect to an OpenAI-compatible API will need `http://localhost:8081` configured as the server.
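To sanity-check the whole chain end to end, something like the following should work with the official `openai` Python package (0.x API). This is a sketch under a couple of assumptions: the API key is a dummy the shim never checks, the model name is a placeholder (the server answers with whatever model it loaded), and depending on your copy of `api_like_OAI.py` the routes may be served with or without the `/v1` prefix, so adjust `api_base` to match.

```python
# Minimal sketch: talk to the local translation server with the openai client.
import openai

openai.api_key = "sk-local-placeholder"        # dummy; the local shim ignores auth
openai.api_base = "http://localhost:8081/v1"   # drop /v1 if your shim serves bare routes

response = openai.ChatCompletion.create(
    model="wizardlm-13b",  # placeholder name; the server uses its loaded model anyway
    messages=[{"role": "user", "content": "Write a regex that matches ISO-8601 dates."}],
)
print(response["choices"][0]["message"]["content"])
```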
I now have a local-first, completely private drop-in replacement that is roughly equivalent to GPT-3.5.
---
## The model
You can download the wizardlm model from thebloke as usual https://huggingface.co/TheBloke/WizardLM-13B-V1.2-GGML
There are other models worth trying.
- WizardCoder
- LLaMa2-13b-chat
- ?
---
## My experience so far
It's great. I have a ryzen 7900x with 64GB of ram and a 1080ti. I offload about 30 layers to the gpu `./server -m models/bla -ngl 30` and the performance is amazing with the 4-bit quantized version. I still have plenty VRAM left.
I haven't evaluated the model itself thoroughly yet, but so far it seems very capable. I've had it write some regexes, write a story about a hard-to-solve bug (which was coherent, believable and interesting), explain some JS code from work and it was even able to point out real issues with the code like I expect from a model like GPT-4.
The best thing about the model so far is also that it supports 8k token context! This is no pushover model, it's the first one that really feels like it can be an alternative to GPT-4 as a coding assistant. Yes, output quality is a bit worse but the added privacy benefit is huge. Also, it's fun. If I ever get my hands on a better GPU who knows how great a 70b would be :)
We're getting there :D | 2023-07-26T22:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/ | Combinatorilliance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ak5k4 | false | null | t3_15ak5k4 | /r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/ | false | false | self | 156 | {'enabled': False, 'images': [{'id': 'YifVHGQmCuZu-0lS61KgDS5up-7BOuh98WKW6HPD854', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VjcLJL_ylpanjoFazP_OLgNrdFudDPUNwk64veJ5zi0.jpg?width=108&crop=smart&auto=webp&s=e7198be8cbd82614d78a8e8aaac13caa6120e61d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VjcLJL_ylpanjoFazP_OLgNrdFudDPUNwk64veJ5zi0.jpg?width=216&crop=smart&auto=webp&s=4c7af0a0bc03a2e246df1e9c81fef65fe183ace2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VjcLJL_ylpanjoFazP_OLgNrdFudDPUNwk64veJ5zi0.jpg?width=320&crop=smart&auto=webp&s=1c16f2aef42627e7593befacc3c2c115a58e893a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VjcLJL_ylpanjoFazP_OLgNrdFudDPUNwk64veJ5zi0.jpg?width=640&crop=smart&auto=webp&s=13f7ce1653472c756ae96d13f3d3a27d30f0ba66', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VjcLJL_ylpanjoFazP_OLgNrdFudDPUNwk64veJ5zi0.jpg?width=960&crop=smart&auto=webp&s=b261d58e33a146e600c872ef92532dd4d6f983ec', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VjcLJL_ylpanjoFazP_OLgNrdFudDPUNwk64veJ5zi0.jpg?width=1080&crop=smart&auto=webp&s=8d2fbf6b2305ee447ff0702971e35d3c9a0b2a20', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VjcLJL_ylpanjoFazP_OLgNrdFudDPUNwk64veJ5zi0.jpg?auto=webp&s=9aff0b337a18ed4ee47c8a9f50fef84abc9f8e46', 'width': 1200}, 'variants': {}}]} |
Converting BERT Embeddings to Readable Text | 5 | Hey everyone! Given models like TABERT for understanding tabular data and MathBERT for understanding mathematical equations, how would you convert the resultant contextualized embeddings into understandable text? I believe it can be done by adding a fresh decoder, right? Are they are known examples of converting resulting BERT representations into readable text? | 2023-07-26T22:37:48 | https://www.reddit.com/r/LocalLLaMA/comments/15ak5e2/converting_bert_embeddings_to_readable_text/ | psj_2908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ak5e2 | false | null | t3_15ak5e2 | /r/LocalLLaMA/comments/15ak5e2/converting_bert_embeddings_to_readable_text/ | false | false | self | 5 | null |
Unreliable Open source LLMs | 4 | I consistently face this problem
For a given task and the same prompt, sometimes things work well and sometimes the model gives a poor reply that doesn't follow what I asked. I have seen this across all QLoRA LLMs I have trained. What might be the reason? Is it due to too little data, or a small batch size making training unstable? Have you all seen this? What are some tricks that can be used to reduce such behaviour? Reliability becomes the central and most important concern when it comes to deployment. Thanks | 2023-07-26T22:22:47 | https://www.reddit.com/r/LocalLLaMA/comments/15ajrx2/unreliable_open_source_llms/ | Longjumping_Essay498 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ajrx2 | false | null | t3_15ajrx2 | /r/LocalLLaMA/comments/15ajrx2/unreliable_open_source_llms/ | false | false | self | 4 | null
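One sanity check worth running here, sketched below as an assumption-labeled example (the model path and prompt template are placeholders): decode greedily instead of sampling. If the greedy output is consistently on-task, the run-to-run flakiness is coming from sampling temperature rather than from the QLoRA training itself.

```python
# Sketch: compare against deterministic (greedy) decoding to isolate sampling noise.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./my-qlora-merged-model"  # placeholder checkpoint path
tok = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16, device_map="auto")

inputs = tok("### Instruction:\nSummarize the text below.\n\n### Response:\n", return_tensors="pt").to(model.device)
out = model.generate(**inputs, do_sample=False, num_beams=1, max_new_tokens=256)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```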
Well, so much for agent-ising llama2... | 7 | 2023-07-26T21:48:08 | staviq | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 15aivpy | false | null | t3_15aivpy | /r/LocalLLaMA/comments/15aivpy/well_so_much_for_agentising_llama2/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'dhy00QwPfUZomMioNWx_Va3iXrAp3xIeGZy2lYGh4JI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/TMVoqTmyJqX5SCgXCURTzcCbfQnfrJ7suDv6xSUjs1w.png?width=108&crop=smart&auto=webp&s=bfbd3ee91552e7d625606e45ff605a3b2b80ef34', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/TMVoqTmyJqX5SCgXCURTzcCbfQnfrJ7suDv6xSUjs1w.png?width=216&crop=smart&auto=webp&s=5c068724de1dc1b005252423b7c07cfa7f34369d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/TMVoqTmyJqX5SCgXCURTzcCbfQnfrJ7suDv6xSUjs1w.png?width=320&crop=smart&auto=webp&s=a0cf2571834adb0a51f1722bacb69c18ca39cf05', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/TMVoqTmyJqX5SCgXCURTzcCbfQnfrJ7suDv6xSUjs1w.png?width=640&crop=smart&auto=webp&s=c8af9219a02cf2954ac4a3afdd03a49ba043a84c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/TMVoqTmyJqX5SCgXCURTzcCbfQnfrJ7suDv6xSUjs1w.png?width=960&crop=smart&auto=webp&s=a3bbf47bf88706ad89d0754b898def1ba0d56036', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/TMVoqTmyJqX5SCgXCURTzcCbfQnfrJ7suDv6xSUjs1w.png?width=1080&crop=smart&auto=webp&s=d8cd0e520401ab432bbf04c5e4b1672b976e084b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/TMVoqTmyJqX5SCgXCURTzcCbfQnfrJ7suDv6xSUjs1w.png?auto=webp&s=c503bdb4de3c29d6efc1b9e0d875df994af772b5', 'width': 1920}, 'variants': {}}]} | |||
Intelligence Compared...... | 2 | I have seen a few posts recently with fine-tuned models trained on a tiny set of data performing better on tasks than GPT-4. This is biased towards whatever you prompt right after training: you are in 'logic' / 'math' / 'literature' mode when testing after your small-data fine-tuning.
If we were to compare the brain of, say, a human (pretend: GPT-4) and an orangutan (pretend: Llama 2), we would see that there are many tasks any given human can do that an orangutan can also do without understanding the context. Take the baboon from the 1800s who worked at a railway station and even carried on working after his 'owner' / work boss had died:
[https://en.wikipedia.org/wiki/Jack\_(baboon)](https://en.wikipedia.org/wiki/Jack_(baboon))
Here is also the video that appeared on r/nextfuckinglevel today that showed an orangutan not only driving around a path but slowing down and making decisions:
[https://www.reddit.com/r/nextfuckinglevel/comments/rsre82/orangutan\_drives\_a\_golf\_car/](https://www.reddit.com/r/nextfuckinglevel/comments/rsre82/orangutan_drives_a_golf_car/)
What is displayed in these videos is bias: yes, it looks like these animals completely understand the system and how it works intricately, but the crucial part that is missing is an understanding of the system. They only work when the system works, so to speak. If certain bells rang in a different order, or were missing a ding, it might confuse the monkey and we could end up with a catastrophe. Luckily that never happened, but it was always a bigger probability than a human making the same error.
These curated 7B models are essentially like teaching a spider to create webs and being amazed that it can make new webs. You need to test whether the spider (fine-tuned model) can maybe fly, code, or do math and logic. Only then can you say it has emergent properties beyond LLMs so large they are not achievable by anything smaller than the three biggest companies in the world.
P.S. While looking for the orangutan video I saw this video showing a real neural pathway being made:
[https://www.reddit.com/r/nextfuckinglevel/comments/15a1lrd/real\_footage\_of\_brain\_cells\_forming\_connections/](https://www.reddit.com/r/nextfuckinglevel/comments/15a1lrd/real_footage_of_brain_cells_forming_connections/)
You can see the connections stretching out and being reinforced over time. This is the process each hidden-layer node goes through with all the other nodes: a major spaghetti junction of thin and thick connections. Carrying on this analogy, you only want to know when the first bit of traffic makes it through each tunnel; chain those together and now you have a coherent thought. | 2023-07-26T21:25:26 | https://www.reddit.com/r/LocalLLaMA/comments/15aiacg/intelligence_compared/ | randomrealname | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15aiacg | false | null | t3_15aiacg | /r/LocalLLaMA/comments/15aiacg/intelligence_compared/ | false | false | self | 2 | null
Seeing as llama-2 guanaco 70b is worse for creative writing, I'd really like to see a llama-1 65b guanaco tuned for 8k (or 16k!) context. Anyone else with me? | 8 | Seeing as I use local LLM's largely for creative writing (and can't stand censorship), guanaco 65b has been my go-to model since I discovered it.
Initially I was EXTREMELY excited when llama-2 was released, assuming that finetunes would further improve its abilities, but as [this post](https://www.reddit.com/r/LocalLLaMA/comments/159064y/llama_2_based_guanaco_and_airoboros_70b_are_a/) correctly points out, llama-2 finetunes of guanaco and airoboros are less capable in the creative fiction department, not more, in various aspects (see previously mentioned post for the deets).
I know there's a 65b 8k context tune of airoboros, but I personally prefer guanaco, and am honestly surprised that someone hasn't done an equivalent expanded context tune of that model yet.
Is anyone in the same boat as me? Just trying to judge interest out there, FWIW. If I had the abilities I would do it myself but this kind of thing is way beyond my technical skills, AFAIK. Would love to hear others' thoughts on the subject! Thanks. | 2023-07-26T21:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/15ahtaw/seeing_as_llama2_guanaco_70b_is_worse_for/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ahtaw | false | null | t3_15ahtaw | /r/LocalLLaMA/comments/15ahtaw/seeing_as_llama2_guanaco_70b_is_worse_for/ | false | false | self | 8 | null |
Someone made a partial orca 70b. | 17 | 2023-07-26T20:40:26 | https://huggingface.co/dfurman/llama-2-70b-dolphin-peft | a_beautiful_rhind | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15ah44m | false | null | t3_15ah44m | /r/LocalLLaMA/comments/15ah44m/someone_made_a_partial_orca_70b/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'EBsF30cXWdZedqxkETvroWk3qg0QWQKpj_3JRGXupvY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IUr2P4hWIbnQf51U0dX9-DnYbs3iHnbahM3o9_6lWrk.jpg?width=108&crop=smart&auto=webp&s=e03b5580b33cec1a03026fcd108318852b77d2f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IUr2P4hWIbnQf51U0dX9-DnYbs3iHnbahM3o9_6lWrk.jpg?width=216&crop=smart&auto=webp&s=087c48ff5fed96168794569bf6f3c53ca80d9ae3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IUr2P4hWIbnQf51U0dX9-DnYbs3iHnbahM3o9_6lWrk.jpg?width=320&crop=smart&auto=webp&s=c389e99958c8d1247f8ca3bdbbb1d7dce408da40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IUr2P4hWIbnQf51U0dX9-DnYbs3iHnbahM3o9_6lWrk.jpg?width=640&crop=smart&auto=webp&s=d745237183257db7ed405e4f90dab3afa45a2769', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IUr2P4hWIbnQf51U0dX9-DnYbs3iHnbahM3o9_6lWrk.jpg?width=960&crop=smart&auto=webp&s=b16f73290c2f87fa93e88ceeba934fb8bcda8e93', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IUr2P4hWIbnQf51U0dX9-DnYbs3iHnbahM3o9_6lWrk.jpg?width=1080&crop=smart&auto=webp&s=ea78ef038e28b5c2c2e7f982a83334f2c55040e1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IUr2P4hWIbnQf51U0dX9-DnYbs3iHnbahM3o9_6lWrk.jpg?auto=webp&s=0e6c8552e45c32ea4e96e21411b32e8609f966f8', 'width': 1200}, 'variants': {}}]} | ||
Can some kind soul help a completely clueless newbie? | 8 | I'm brand new to all this and am trying to install LLama 2, specifically TheBloke\_Llama-2-13B-chat-GPTQ\_gptq-4bit-32g-actorder\_True. I would think my computer is acceptable to run it (Ryzen 9 7900X, 32 GB RAM, 3080Ti GPU).
Usually, it doesn't load the model (gives some Traceback error). I've got it to load in GPTQ-for-LLaMa, but the only output it shows is formatted as "ttotoodddottttoddd" etc. Only twice now, it loaded in ExLlama and seemed to work. However, I have to get that Traceback error 30-40 times before it'll work.
Here's my installation process:
1. Downloaded and extracted oobabooga\_windows.zip
2. Ran start\_windows.bat and let it successfully install everything.
3. Entered "TheBloke/Llama-2-13B-chat-GPTQ:gptq-4bit-32g-actorder\_True" in Model (tab) > Download Custom Model or LoRa > Enter and waited for it to successfully download.
4. Selected the model from the Model drop-down menu (after refreshing).
5. Selected the Model Loader from the drop-down menu.
6. Click Load.
7. If it loaded (and that's a huge IF), I then go to the Text Generation tab and enter a prompt and press Enter.
I must be missing some important steps. Any ideas? | 2023-07-26T20:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/15agvi9/can_some_kind_soul_help_a_completely_clueless/ | 0260n4s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15agvi9 | false | null | t3_15agvi9 | /r/LocalLLaMA/comments/15agvi9/can_some_kind_soul_help_a_completely_clueless/ | false | false | self | 8 | null |
Sharing Generative Media Lab Project Proposal | 2 | Dear colleagues,
I wanted to share a project proposal I have drafted for establishing a
Media Lab focused on empowering cultural and creative industries in
the Global South with local generative AI capabilities.
While I do not currently have the means to implement this lab myself,
I hope this proposal document may serve as a useful starting point or
source of ideas for anyone interested in establishing similar
initiatives oriented towards empowering creative communities
with generative AI.
The proposal covers key aspects like objectives, methodology,
timeline, estimated budget in euros, etc. It is available here:
https://www.datoeneltejado.com/generative_artificial_intelligence_media_lab
I openly share this proposal under a Creative Commons CC0 1.0
Universal (CC0 1.0) Public Domain Dedication license so you are
welcome to reuse or adapt any portion that may be helpful in your own
projects. Although this is just a conceptual document, I'm happy to
provide any additional context or answer questions to the best of my
ability.
My goal is to contribute ideas and momentum towards seeing more
accessible medialabs emerge across the Global South to explore
generative AI. If this proposal helps inspire or inform real projects
in any way, I would consider that a valuable outcome. | 2023-07-26T20:17:33 | https://www.reddit.com/r/LocalLLaMA/comments/15agimy/sharing_generative_media_lab_project_proposal/ | Scared-Virus-3463 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15agimy | false | null | t3_15agimy | /r/LocalLLaMA/comments/15agimy/sharing_generative_media_lab_project_proposal/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '-hTx_DcB2f4K0ibMdsuoLB060o4TX7G7F6Z9SSSDr3w', 'resolutions': [{'height': 34, 'url': 'https://external-preview.redd.it/b6EqPXU9SFlED8M__GTxjH7R97M3rAfDDMg_yb1rHks.jpg?width=108&crop=smart&auto=webp&s=181ad57c0777aa5b7689a650fef22375da304438', 'width': 108}, {'height': 69, 'url': 'https://external-preview.redd.it/b6EqPXU9SFlED8M__GTxjH7R97M3rAfDDMg_yb1rHks.jpg?width=216&crop=smart&auto=webp&s=88b5c196a0e7f25b7e0697158d878427d2c622a9', 'width': 216}, {'height': 102, 'url': 'https://external-preview.redd.it/b6EqPXU9SFlED8M__GTxjH7R97M3rAfDDMg_yb1rHks.jpg?width=320&crop=smart&auto=webp&s=00a79f7da1b52206ea3701f51b746479ae7fafc5', 'width': 320}, {'height': 205, 'url': 'https://external-preview.redd.it/b6EqPXU9SFlED8M__GTxjH7R97M3rAfDDMg_yb1rHks.jpg?width=640&crop=smart&auto=webp&s=39368a90b0887cd67a404af0b0381c3570cc3d17', 'width': 640}, {'height': 307, 'url': 'https://external-preview.redd.it/b6EqPXU9SFlED8M__GTxjH7R97M3rAfDDMg_yb1rHks.jpg?width=960&crop=smart&auto=webp&s=5a4068699b34c3008d61318f5f54abc78606aab8', 'width': 960}, {'height': 346, 'url': 'https://external-preview.redd.it/b6EqPXU9SFlED8M__GTxjH7R97M3rAfDDMg_yb1rHks.jpg?width=1080&crop=smart&auto=webp&s=195376cefb14fd468aa95885f4499d2fa8899c77', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/b6EqPXU9SFlED8M__GTxjH7R97M3rAfDDMg_yb1rHks.jpg?auto=webp&s=28ce9a41af5f4c6676bcd59bb141b08a9c30a0b1', 'width': 1872}, 'variants': {}}]} |
Question: Settings for extending context | 4 | Hey! Rookie question from a lowly enthusiast. What are the settings supposed or theorized to be for rope frequency base and rope frequency scale? I'm assuming the desired values of these settings are dependent on multiple factors; the actual size of the input prompted to a model, the defined context size of i.e. 4k or 8k, but also whether a model already has a native context of 4k or 8k. What would you the experts out there consider using for these settings, when, why and what would/could the differences be? Is there a rule of thumb that can help the simpleminded such as me? I apologize if this question has wrongful assumptions. I hope it makes sense. | 2023-07-26T20:04:10 | https://www.reddit.com/r/LocalLLaMA/comments/15ag60j/question_settings_for_extending_context/ | Sir_Mammut | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ag60j | false | null | t3_15ag60j | /r/LocalLLaMA/comments/15ag60j/question_settings_for_extending_context/ | false | false | self | 4 | null |
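For what it's worth, the commonly cited rule of thumb for the linear ("position interpolation") method can be written down directly; the sketch below assumes llama.cpp-style flags (`--rope-freq-base`, default 10000, and `--rope-freq-scale`, default 1.0) and placeholder context sizes. NTK-style scaling instead raises the frequency base and leaves the scale at 1.0, with good base values mostly found empirically.

```python
# Rule-of-thumb sketch for linear RoPE scaling (position interpolation).
native_ctx = 4096   # context the model was trained with (e.g. 4k for Llama 2)
target_ctx = 8192   # context you want to run at

rope_freq_scale = native_ctx / target_ctx  # 0.5 means positions are squeezed 2x
print(f"--rope-freq-base 10000 --rope-freq-scale {rope_freq_scale}")
```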
What is the best model to extract data from a plain text? | 1 | With llama2, personal data like name of persons, phone numbers, etc? | 2023-07-26T20:04:03 | https://www.reddit.com/r/LocalLLaMA/comments/15ag5vw/what_is_the_best_model_to_extract_data_from_a/ | kontostamas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ag5vw | false | null | t3_15ag5vw | /r/LocalLLaMA/comments/15ag5vw/what_is_the_best_model_to_extract_data_from_a/ | false | false | self | 1 | null |
What's the matter with GGML models? | 39 | I'm pretty new to running Llama locally on my 'mere' 8GB NVIDIA card using ooba/webui. I'm using GPTQ models like Luna 7B 4Bit and others, and they run decently at 30tk/sec using ExLlama. It's fun and all, but...
Since some of you told me that GGML models are far superior to even same-bit GPTQ models, I tried running some GGML models and offloading layers onto the GPU via the loader options, but it is still extremely slow. The token generation sits at 1-2tk/sec, and on top of that it takes more than a minute before generation even starts. I couldn't get **ANY** GGML model to run as fast as the GPTQ models.
With that being said, what's the hype behind GGML models if they run like crap? Or maybe I'm just using the wrong options?
Appreciate the help! | 2023-07-26T20:01:54 | https://www.reddit.com/r/LocalLLaMA/comments/15ag3sh/whats_the_matter_with_ggml_models/ | Fusseldieb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ag3sh | false | null | t3_15ag3sh | /r/LocalLLaMA/comments/15ag3sh/whats_the_matter_with_ggml_models/ | false | false | self | 39 | null |
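For reference, here is roughly what GGML offloading looks like outside the webui; a sketch with llama-cpp-python, assuming it was installed with cuBLAS enabled (`CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python`). The model path is a placeholder and the layer count is something to tune down until it fits in 8GB of VRAM; if the wheel was built without GPU support, `n_gpu_layers` has no effect and everything runs on the CPU, which would match the symptoms described above.

```python
# Sketch: GGML inference with partial GPU offload via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b-chat.ggmlv3.q4_1.bin",  # placeholder path
    n_gpu_layers=32,  # layers pushed to VRAM; reduce if you hit out-of-memory
    n_ctx=2048,
    n_threads=8,      # CPU threads for whatever layers stay on the CPU
)
out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```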
AGI local | 1 | [removed] | 2023-07-26T19:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/15afrze/agi_local/ | ComparisonTotal1016 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15afrze | false | null | t3_15afrze | /r/LocalLLaMA/comments/15afrze/agi_local/ | false | false | self | 1 | null |
Can I run the 4 or 8 bit TheBloke/llama-2-70b-Guanaco-QLoRA-fp16 on a 3090? | 1 | [removed] | 2023-07-26T19:41:12 | https://www.reddit.com/r/LocalLLaMA/comments/15afk7v/can_i_run_the_4_or_8_bit/ | trv893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15afk7v | false | null | t3_15afk7v | /r/LocalLLaMA/comments/15afk7v/can_i_run_the_4_or_8_bit/ | false | false | self | 1 | null |
Is it possible to run Llama 2 without a GPU? | 50 | I have access to a grid of machines, some very powerful with up to 80 CPUs and >1TB of RAM. None has a GPU however. Is it possible to run Llama 2 in this setup? Either high threads or distributed.
I'd like to build some coding tools. Simple things like reformatting to our coding style, generating #includes, etc. So doesn't have to be super fast but also not super slow. | 2023-07-26T19:20:42 | https://www.reddit.com/r/LocalLLaMA/comments/15af0la/is_it_possible_to_run_llama_2_without_a_gpu/ | patery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15af0la | false | null | t3_15af0la | /r/LocalLLaMA/comments/15af0la/is_it_possible_to_run_llama_2_without_a_gpu/ | false | false | self | 50 | null |
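As a sketch of what CPU-only inference looks like in practice (llama-cpp-python with a quantized GGML model; the path and numbers are placeholders): RAM is rarely the constraint on machines like these, and throughput stops scaling much beyond the physical core count, so 80 CPUs won't give an 80x speedup.

```python
# Sketch: pure-CPU inference on a quantized GGML model; no GPU involved.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-13b.ggmlv3.q4_0.bin",  # placeholder path
    n_threads=32,  # roughly match physical cores; oversubscribing rarely helps
    n_ctx=2048,
)
prompt = "Reformat this C++ snippet to use 4-space indentation:\nint main(){return 0;}\n"
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```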
Found this gem in a new dataset I am working on | 85 | 2023-07-26T19:12:40 | pokeuser61 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15aet79 | false | null | t3_15aet79 | /r/LocalLLaMA/comments/15aet79/found_this_gem_in_a_new_dataset_i_am_working_on/ | false | false | 85 | {'enabled': True, 'images': [{'id': 'FjyJj4ZLQNjo-FyxJ2Ra6SfdiAz4ihQXi_rSechXA9A', 'resolutions': [{'height': 20, 'url': 'https://preview.redd.it/tmwekja9zceb1.png?width=108&crop=smart&auto=webp&s=23dbe2fe846f4041cf44585b15e606f0690bf1c5', 'width': 108}, {'height': 40, 'url': 'https://preview.redd.it/tmwekja9zceb1.png?width=216&crop=smart&auto=webp&s=828c980ef86a6b1aa9dc2ad3dea8ac771ffb9889', 'width': 216}, {'height': 59, 'url': 'https://preview.redd.it/tmwekja9zceb1.png?width=320&crop=smart&auto=webp&s=d59431cfa5768813635381572c2cb89816d38bf7', 'width': 320}, {'height': 119, 'url': 'https://preview.redd.it/tmwekja9zceb1.png?width=640&crop=smart&auto=webp&s=9bafe028df17d29e49af9caf93b30591d3aa32b3', 'width': 640}, {'height': 179, 'url': 'https://preview.redd.it/tmwekja9zceb1.png?width=960&crop=smart&auto=webp&s=435f8b89814c0313504853f2bb791f57b597806d', 'width': 960}, {'height': 201, 'url': 'https://preview.redd.it/tmwekja9zceb1.png?width=1080&crop=smart&auto=webp&s=88b1e4b9469b37f18df4dac2dd9746af51ea18d2', 'width': 1080}], 'source': {'height': 251, 'url': 'https://preview.redd.it/tmwekja9zceb1.png?auto=webp&s=fcffa8be70dae3c8d5245e652f1552148726e44e', 'width': 1344}, 'variants': {}}]} | |||
How to Develop LLMs in a Responsible Way | 1 | [removed] | 2023-07-26T18:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/15adm7j/how_to_develop_llms_in_a_responsible_way/ | Ashishpatel26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15adm7j | false | null | t3_15adm7j | /r/LocalLLaMA/comments/15adm7j/how_to_develop_llms_in_a_responsible_way/ | false | false | self | 1 | null |
AI Policy @🤗: Open ML Considerations in the EU AI Act | 40 | 2023-07-26T18:21:44 | https://huggingface.co/blog/eu-ai-act-oss | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15adh7x | false | null | t3_15adh7x | /r/LocalLLaMA/comments/15adh7x/ai_policy_open_ml_considerations_in_the_eu_ai_act/ | false | false | 40 | {'enabled': False, 'images': [{'id': 'ApNanJQCBDXikXt8Myy3R4e4bvvWDeIYlhBLnQIgZ18', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/i0FQ1-IBm4WYLQ2pC2AfNDY4PTGuXSOW3elSAt-PBnc.jpg?width=108&crop=smart&auto=webp&s=6f9194b85c22a953b39e196f444dcde8c101389b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/i0FQ1-IBm4WYLQ2pC2AfNDY4PTGuXSOW3elSAt-PBnc.jpg?width=216&crop=smart&auto=webp&s=3e9875091731a9dd50054a47812d787eefe54a24', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/i0FQ1-IBm4WYLQ2pC2AfNDY4PTGuXSOW3elSAt-PBnc.jpg?width=320&crop=smart&auto=webp&s=420414dc6ab78f702e161f393179e178c7bb9d66', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/i0FQ1-IBm4WYLQ2pC2AfNDY4PTGuXSOW3elSAt-PBnc.jpg?width=640&crop=smart&auto=webp&s=a47c90edbbc3f0bfe220c24b178e769b7d6044b4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/i0FQ1-IBm4WYLQ2pC2AfNDY4PTGuXSOW3elSAt-PBnc.jpg?width=960&crop=smart&auto=webp&s=ef1e72c3ff4bfd82309be3a07d2e5e5906578d82', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/i0FQ1-IBm4WYLQ2pC2AfNDY4PTGuXSOW3elSAt-PBnc.jpg?auto=webp&s=0bf8c49121e5b887785f534419f738e960e268d0', 'width': 960}, 'variants': {}}]} | ||
Skyrim has been modded to have LLM powered NPC conversations. | 1 | 2023-07-26T18:20:53 | https://www.reddit.com/r/skyrimvr/comments/157d5u6/nazeems_just_been_misunderstood_fus_mantella_mod/ | fallingdowndizzyvr | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15adgdo | false | null | t3_15adgdo | /r/LocalLLaMA/comments/15adgdo/skyrim_has_been_modded_to_have_llm_powered_npc/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'rlAeftl_lR05tdr5X_lv-tmkKrEZk-Db6xIJo4rATWg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dP2qBEkG2OSH6Nsz6V_IoDp8OwHu0xpcu9lWl-PGr3E.png?width=108&crop=smart&auto=webp&s=3bd3e7d4add6b10b0aa36bc9c62df1c9e10548b8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dP2qBEkG2OSH6Nsz6V_IoDp8OwHu0xpcu9lWl-PGr3E.png?width=216&crop=smart&auto=webp&s=9025131beedf3bbd6860ca869c42616a31b865f5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dP2qBEkG2OSH6Nsz6V_IoDp8OwHu0xpcu9lWl-PGr3E.png?width=320&crop=smart&auto=webp&s=aeedf4d11220cc2c6737b1f5c8a8e688d68321a6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dP2qBEkG2OSH6Nsz6V_IoDp8OwHu0xpcu9lWl-PGr3E.png?width=640&crop=smart&auto=webp&s=596238811127a7ce5aed46dd92bcfe272f7ea57c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dP2qBEkG2OSH6Nsz6V_IoDp8OwHu0xpcu9lWl-PGr3E.png?width=960&crop=smart&auto=webp&s=d5842a2a642f8fa4d1f69a7e1953ba8f7393eee3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dP2qBEkG2OSH6Nsz6V_IoDp8OwHu0xpcu9lWl-PGr3E.png?width=1080&crop=smart&auto=webp&s=db6cab1566987f4c8012cc5d3725eaaa3720bd73', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/dP2qBEkG2OSH6Nsz6V_IoDp8OwHu0xpcu9lWl-PGr3E.png?auto=webp&s=7e772efb09560a8f13575a9889a4f1064d870c1c', 'width': 1280}, 'variants': {}}]} | ||
Extended Guide: Instruction-tune Llama 2 | 37 | [https://www.philschmid.de/instruction-tune-llama-2](https://www.philschmid.de/instruction-tune-llama-2) | 2023-07-26T18:11:30 | https://www.reddit.com/r/LocalLLaMA/comments/15ad7ct/extended_guide_instructiontune_llama_2/ | MuffinB0y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ad7ct | false | null | t3_15ad7ct | /r/LocalLLaMA/comments/15ad7ct/extended_guide_instructiontune_llama_2/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'NYy7vS_DCF7ziYozZI5NewU4mrQpjLxWwJIEeoOeoTE', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=108&crop=smart&auto=webp&s=4768a7f3ce8e98b65ec2928dd27be69d13817653', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=216&crop=smart&auto=webp&s=f597cbd4fbbce7835de2c3ddf57bea4be32791f5', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=320&crop=smart&auto=webp&s=63abbf41f12bdd3f3a744092849dea63858626f3', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=640&crop=smart&auto=webp&s=8c350290c3032da07ffd1380750949fe1a6eddec', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=960&crop=smart&auto=webp&s=eb6f8491e988e2a9cbc7ff3ab2a8f7d3c829b09f', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=1080&crop=smart&auto=webp&s=c63d9cb2ef67160c0d0c200ae7b5a4b86e3e4148', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?auto=webp&s=b98be99841a14dfc0937f46c8910ea6847ab32b0', 'width': 2400}, 'variants': {}}]} |
Langchain custom functions with llama | 0 | Hey folks, turns out if you wanna make a list of tools to be used by LLM in langchain - those can only be used with OpenAI. What a bummer! Do you use local llm with langchain to execute on big tasks, what’s your experience? | 2023-07-26T18:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/15ad1zy/langchain_custom_functions_with_llama/ | Sacksha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ad1zy | false | null | t3_15ad1zy | /r/LocalLLaMA/comments/15ad1zy/langchain_custom_functions_with_llama/ | false | false | default | 0 | null |
Looking for suggestions on detecting user intent. Trying to help users complete varying tasks. | 3 | I've set up very rote chatbot and help systems in the past based on large decision trees. But it always ended up as a game of catch-up where we kept finding new things to throw into the tree.
We currently have a huge knowledge base full of mildly structured lessons learned. The platform we're using for our 'help' is being decommissioned for various reasons, so we are starting fresh. Thought it would be a good opportunity to investigate LLM solutions.
The end goal is to have a clippy like recommendations on helping users complete tasks.
I'm currently on a decent-sized dev team, but this is a bit out of our wheelhouse. Sorry if these questions have already been answered; we're still getting our bearings on this and all the new terms. | 2023-07-26T17:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/15abj3v/looking_for_suggestions_on_detecting_user_intent/ | chris480 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15abj3v | false | null | t3_15abj3v | /r/LocalLLaMA/comments/15abj3v/looking_for_suggestions_on_detecting_user_intent/ | false | false | self | 3 | null
Is training inside ooba with load-in-4bit the same as QLoRA ? | 8 | I know the training tab uses LoRA, but is LoRA + Accelerate 4-bit all I need to do QLoRA or am I missing something? | 2023-07-26T16:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/15ab54p/is_training_inside_ooba_with_loadin4bit_the_same/ | hurrytewer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ab54p | false | null | t3_15ab54p | /r/LocalLLaMA/comments/15ab54p/is_training_inside_ooba_with_loadin4bit_the_same/ | false | false | self | 8 | null |
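For comparison, here is what the QLoRA recipe specifically adds on top of "LoRA + 4-bit loading", sketched with transformers/peft/bitsandbytes (the model name is a placeholder): the NF4 quantization type, double quantization, and a bf16 compute dtype; the paper also uses paged optimizers. Whether ooba's training tab sets all of these under the hood is the real question.

```python
# Sketch: QLoRA-style setup - 4-bit NF4 base model with LoRA adapters on top.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # QLoRA's NF4 type, not plain fp4/int4
    bnb_4bit_use_double_quant=True,     # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",         # placeholder base model
    quantization_config=bnb, device_map="auto",
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))
```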
Beginner Oobabooga Chatbot Question - Verbosity goes off the rails. | 1 | [removed] | 2023-07-26T16:41:08 | https://www.reddit.com/r/LocalLLaMA/comments/15aasem/beginner_oobabooga_chatbot_question_verbosity/ | decker12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15aasem | false | null | t3_15aasem | /r/LocalLLaMA/comments/15aasem/beginner_oobabooga_chatbot_question_verbosity/ | false | false | self | 1 | null |
eGpu to train /fine tune models | 4 | Just wondering if anyone is using eGPU to fine tune models. What's your setup | 2023-07-26T16:18:26 | https://www.reddit.com/r/LocalLLaMA/comments/15aa724/egpu_to_train_fine_tune_models/ | paulus_aurellius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15aa724 | false | null | t3_15aa724 | /r/LocalLLaMA/comments/15aa724/egpu_to_train_fine_tune_models/ | false | false | self | 4 | null |
Is there any model or any tool (for free) that can read financial statements of multiple companies in pdf and create a comparison of their financial performance | 1 | I've been trying to extract text from PDFs using PyPDF2, but it seems like it's not getting 100% accuracy.
Is there any model or any Python code that already does this? | 2023-07-26T15:57:52 | https://www.reddit.com/r/LocalLLaMA/comments/15a9nl0/is_there_any_model_or_any_tool_for_free_that_can/ | paulus_aurellius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a9nl0 | false | null | t3_15a9nl0 | /r/LocalLLaMA/comments/15a9nl0/is_there_any_model_or_any_tool_for_free_that_can/ | false | false | self | 1 | null
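For context, the extraction step described above boils down to something like this PyPDF2 sketch (file name hypothetical). The accuracy ceiling usually comes from the PDF itself: `extract_text` recovers a flat text stream, so the row/column structure of financial tables is often lost before any model sees it.

```python
# Sketch: plain-text extraction from a statement PDF with PyPDF2.
from PyPDF2 import PdfReader

reader = PdfReader("acme_annual_report_2022.pdf")  # placeholder file
text = "\n".join((page.extract_text() or "") for page in reader.pages)
print(text[:500])  # table layout is typically flattened here
```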
Unveiling the Latent Potentials of Large Language Models (LLMs) | 65 | I've spent considerable time examining the capabilities of LLMs like GPT-4, and my findings can be summarized as:
1. **Latent Semantics in LLMs:** Hidden layers in LLMs carry a depth of meaning that has yet to be fully explored.
2. **Interpretable Representations:** By visualizing each hidden layer of LLMs as distinct vector spaces, we can employ SVMs and clustering methods to derive profound semantic properties.
3. **Power of Prompt Engineering:** Contrary to common practice, a single well-engineered prompt can drastically transform a GPT-4 model's performance. I’ve seen firsthand its ability to guide LLMs towards desired outputs.
Machine Learning, especially within NLP, has achieved significant milestones, thanks to LLMs. These models house vast hidden layers which, if tapped into effectively, can offer us unparalleled insights into the essence of language.
My PhD research delved into how vector spaces can model semantic relationships. I posit that within advanced LLMs lie constructs fundamental to human language. By deriving structured representations from LLMs using unsupervised learning techniques, we're essentially unearthing these core linguistic constructs.
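As a toy illustration of that pipeline (a sketch only; the small stand-in model and sentences are chosen purely for exposition, not what I used in my research): pool one hidden layer into per-sentence vectors, then apply an unsupervised method to the resulting vector space.

```python
# Toy sketch: treat one hidden layer as a vector space and cluster sentences in it.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

tok = AutoTokenizer.from_pretrained("gpt2")  # small stand-in for a real LLM
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

sentences = ["The cat sat on the mat.", "My dog loves the park.",
             "Stocks fell sharply today.", "The central bank raised rates."]
vecs = []
for s in sentences:
    with torch.no_grad():
        out = model(**tok(s, return_tensors="pt"))
    vecs.append(out.hidden_states[-1].mean(dim=1).squeeze(0))  # mean-pool last layer

labels = KMeans(n_clusters=2, n_init=10).fit_predict(torch.stack(vecs).numpy())
print(labels)  # animal sentences and finance sentences tend to separate
```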
In my experiments, I've witnessed the rich semantic landscape LLMs possess, often overshadowing other ML techniques. **From a standpoint of explainability:** I envision a system where each vector space dimension denotes a semantic attribute, transcending linguistic boundaries. Though still in nascent stages, I foresee a co-creative AI development environment, with humans and LLMs iterating and refining models in real-time.
While fine-tuning has its merits (I've personally fine-tuned instances and collaborated with the OpenOrca team), I've found immense value in prompt engineering. Properly designed prompts can redefine the scope of LLMs, making them apt for a variety of tasks. The potential applications of this approach are extensive.
I present these ideas in the hope that the community sees their value and potential. | 2023-07-26T15:21:29 | https://www.reddit.com/r/LocalLLaMA/comments/15a8ppj/unveiling_the_latent_potentials_of_large_language/ | hanjoyoutaku | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a8ppj | false | null | t3_15a8ppj | /r/LocalLLaMA/comments/15a8ppj/unveiling_the_latent_potentials_of_large_language/ | false | false | self | 65 | null |
Llama-2 7B-hf repeats context of question directly from input prompt, cuts off with newlines | 1 | [removed] | 2023-07-26T15:01:38 | https://www.reddit.com/r/LocalLLaMA/comments/15a878a/llama2_7bhf_repeats_context_of_question_directly/ | k-ga | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a878a | false | null | t3_15a878a | /r/LocalLLaMA/comments/15a878a/llama2_7bhf_repeats_context_of_question_directly/ | false | false | self | 1 | null |
Anyone running dual 3090? | 18 | What is the hardware setup? Do you use SLI? | 2023-07-26T14:58:37 | https://www.reddit.com/r/LocalLLaMA/comments/15a84b9/anyone_running_dual_3090/ | Remarkable_Ad4470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a84b9 | false | null | t3_15a84b9 | /r/LocalLLaMA/comments/15a84b9/anyone_running_dual_3090/ | false | false | self | 18 | null |
New 3B model with 8K context | 1 | “Cerebras and Opentensor are pleased to announce BTLM-3B-8K (Bittensor Language Model), a new state-of-the-art 3 billion parameter open-source language model that achieves breakthrough accuracy across a dozen AI benchmarks.
BTLM-3B-8K Highlights:
- 7B level model performance in a 3B model
- State of the art 3B parameter model
- Optimized for long sequence length inference 8K or more
- First model trained on the SlimPajama, the largest fully deduplicated open dataset
- Runs on devices with as little as 3GB of memory when quantized to 4-bit
- Apache 2.0 license for commercial use
BTLM was commissioned by the OpenTensor foundation for use on the Bittensor network. Bittensor is a blockchain based network that lets anyone contribute AI models for inference, providing a decentralized alternative to centralized model providers like OpenAI and Google. Bittensor serves over 4,000 AI models with more than 10 trillion model parameters across the network.” | 2023-07-26T13:03:47 | https://twitter.com/cerebrassystems/status/1683556415330213888?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | Acrobatic-Site2065 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15a5bjz | false | null | t3_15a5bjz | /r/LocalLLaMA/comments/15a5bjz/new_3b_model_with_8k_context/ | false | false | default | 1 | null |
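For anyone who wants to poke at it, loading should look roughly like this; a sketch assuming the Hub id is `cerebras/btlm-3b-8k-base` and noting the model ships custom code, hence `trust_remote_code`:

```python
# Sketch: loading BTLM-3B-8K from the Hugging Face Hub.
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "cerebras/btlm-3b-8k-base"  # assumed repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
```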
Are there any webui runpod templates that work "out of the box" for llama-2 GPTQs yet? | 4 | Or do you still need to fiddle manually, installing or setting things up?
I saw that u/TheBloke's templates were updated June 28th, before Llama 2, and his Llama 2 pages on Hugging Face say the latest version of Transformers / ExLlama is required.
( or maybe I just need to "git pull / pip install " a new version on the pod instance and it works ? )
( I'm interested in the 70B versions, that I guess won't fit in my local 24Gb )
Thanks | 2023-07-26T13:02:40 | https://www.reddit.com/r/LocalLLaMA/comments/15a5al2/is_there_any_webui_runpod_templates_that_works/ | knoodrake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a5al2 | false | null | t3_15a5al2 | /r/LocalLLaMA/comments/15a5al2/is_there_any_webui_runpod_templates_that_works/ | false | false | self | 4 | null |
Embeddings or Instructor Embeddings? | 9 | Hello,
I was wondering whether I should use plain Embeddings or Instructor Embeddings from HuggingFace with LangChain for my document Q&A chatbot.
​
Like this: [https://python.langchain.com/docs/integrations/text\_embedding/instruct\_embeddings](https://python.langchain.com/docs/integrations/text_embedding/instruct_embeddings)
or that: [https://python.langchain.com/docs/integrations/text\_embedding/huggingfacehub](https://python.langchain.com/docs/integrations/text_embedding/huggingfacehub)
​
Do you guys have any experience wich of these works the best for my specific use case? | 2023-07-26T12:04:04 | https://www.reddit.com/r/LocalLLaMA/comments/15a3yef/embeddings_or_instructor_embeddings/ | jnk_str | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a3yef | false | null | t3_15a3yef | /r/LocalLLaMA/comments/15a3yef/embeddings_or_instructor_embeddings/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'C1O5S5WQ2zql4CQHBQC5FMwveJdPtaJ9r_xGWbzu48o', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=108&crop=smart&auto=webp&s=2684aa31208d728f65279640de17c8d8f9039e79', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=216&crop=smart&auto=webp&s=d50c278029cd238c11dc42e60a8b08d7d1f28bc3', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=320&crop=smart&auto=webp&s=1642eda69cd46554b563bc6d931ff7565bf15d55', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?width=640&crop=smart&auto=webp&s=fbdcb89f2e77b07ef0f74faf07f62774da8993e6', 'width': 640}], 'source': {'height': 436, 'url': 'https://external-preview.redd.it/yj1VFfepLr812JLTSNnU5MJG1lU-9ZqkUURZj9T-PA0.jpg?auto=webp&s=a6f2697c0bbf3ffa9fd7a65e9e0e8d57c392d56a', 'width': 794}, 'variants': {}}]} |
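For reference, the two options differ mainly in that the Instructor wrapper conditions the embedding on a task instruction; a sketch with LangChain's wrappers (model names are common defaults, not requirements, and the Instructor path needs `pip install sentence_transformers InstructorEmbedding`):

```python
# Sketch: plain HF embeddings vs. Instructor embeddings in LangChain.
from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings

plain = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

instruct = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-large",
    embed_instruction="Represent the document for retrieval:",
    query_instruction="Represent the question for retrieving supporting documents:",
)

query = "What does the warranty cover?"
v_plain = plain.embed_query(query)      # generic sentence embedding
v_instr = instruct.embed_query(query)   # instruction-conditioned embedding
```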
Can finetuning LLM beat Roberta/Deberta in traditional tasks? | 5 | For example, there is a classification dataset. If I fine-tune LLaMA with this dataset, will it perform better than fine-tuning a BERT model? | 2023-07-26T09:36:15 | https://www.reddit.com/r/LocalLLaMA/comments/15a0xxx/can_finetuning_llm_beat_robertadeberta_in/ | Financial_Stranger52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a0xxx | false | null | t3_15a0xxx | /r/LocalLLaMA/comments/15a0xxx/can_finetuning_llm_beat_robertadeberta_in/ | false | false | self | 5 | null
Short guide to hosting your own llama.cpp openAI compatible web-server | 1 | [removed] | 2023-07-26T09:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/15a0s1y/short_guide_to_hosting_your_own_llamacpp_openai/ | Combinatorilliance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a0s1y | false | null | t3_15a0s1y | /r/LocalLLaMA/comments/15a0s1y/short_guide_to_hosting_your_own_llamacpp_openai/ | false | false | self | 1 | null |
OpenAI is still exploring an open source LLM release, currently codenamed G3PO, and views Llama 2's rapid adoption as a threat | 309 | This news comes from *The Information*, the same business publication that previously leaked the imminent release of Llama 2. The [full article](https://www.theinformation.com/articles/pressure-grows-on-openai-to-respond-to-metas-challenge) is paywalled but here's a quick summary of the situation:
* [Last time](https://www.reuters.com/technology/openai-readies-new-open-source-ai-model-information-2023-05-15/) this was reported two months ago, OpenAI was reportedly preparing for an immediate release. Now, they're still exploring the idea of releasing an open source model but haven't confirmed a timeline yet.
* OpenAI is feeling pressured by Meta's release of Llama 2. Their model, named G3PO internally, is unlikely to be competitive with GPT-3.5 or GPT-4. The G3PO name could be a hint to its capabilities.
* According to the author, they're delaying the release because they want to focus on launching an app store and creating a personalized ChatGPT assistant. Their app store would be a marketplace offering another way of forming developer lock-in.
* Even with the delay and changing focus, OpenAI will likely move forward with an open source model for the same reasons Meta released Llama 2. They reportedly believe in a process of developing advanced models to generate revenue while releasing less advanced open source models to keep developers on their side.
I wouldn't be surprised if they also delayed the release because they need more time to push their advanced models ahead. It'd be interesting to see a GPT-3.5-Turbo open sourced once something like GPT-4.5 exists. | 2023-07-26T09:14:31 | https://www.reddit.com/r/LocalLLaMA/comments/15a0jxk/openai_is_still_exploring_an_open_source_llm/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a0jxk | false | null | t3_15a0jxk | /r/LocalLLaMA/comments/15a0jxk/openai_is_still_exploring_an_open_source_llm/ | false | false | self | 309 | {'enabled': False, 'images': [{'id': 'k_gM2lyBQ__LBnOY7SNXNiEqwPQYQ0BnT6iIuwGISzo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=108&crop=smart&auto=webp&s=b598c5aa52503f4a817a1e2b9f5d043015a8dc6f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=216&crop=smart&auto=webp&s=d5174573690f9ddef512b1df282117f29be94c75', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=320&crop=smart&auto=webp&s=4df98d447072def2e9bd8657191dd53cba632db6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=640&crop=smart&auto=webp&s=8464b37961c2df31ac5fe4c71f8fb2dd4665b102', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=960&crop=smart&auto=webp&s=3d6d423fc043d85e7db5d58c139c34b3550cdf97', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?width=1080&crop=smart&auto=webp&s=93c1b5c6caab88c16fabfe9bfce2cd2bc5e1605d', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/cUJWoEQagjFbxhZ4LXmKg-2b99VuIdhg2bu0LHJxNcs.jpg?auto=webp&s=f537cba9de238588be26dd3c0aeaca33b0a87e1b', 'width': 1200}, 'variants': {}}]} |
WizardLM 13B V1.2 Roleplay Sampler | 1 | [removed] | 2023-07-26T09:00:25 | https://www.reddit.com/r/LocalLLaMA/comments/15a0az7/wizardlm_13b_v12_roleplay_sampler/ | Aeon-Black | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a0az7 | false | null | t3_15a0az7 | /r/LocalLLaMA/comments/15a0az7/wizardlm_13b_v12_roleplay_sampler/ | false | false | nsfw | 1 | null |
Malicious prompt injection | 42 | A subtle vector for spamming/phishing where the user just sees images/audio but there's nastiness behind the scenes.
From the Twitter thread:
"...it only works on open-source models (i.e. model weights are public) because these are adversarial inputs and finding them requires access to gradients...
I'd hoped that open source models would be particularly appropriate for personal assistants because they can be run locally and avoid sending personal data to LLM providers but this puts a bit of a damper on that."
[https://twitter.com/random\_walker/status/1683833600196714497](https://twitter.com/random_walker/status/1683833600196714497)
Paper:
[https://arxiv.org/abs/2307.10490](https://arxiv.org/abs/2307.10490) | 2023-07-26T08:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/15a02le/malicious_prompt_injection/ | danja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a02le | false | null | t3_15a02le | /r/LocalLLaMA/comments/15a02le/malicious_prompt_injection/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'SXJnLOEjbW5FqerOHQrTs4fE3usWKuzASiViNabAJsE', 'resolutions': [{'height': 137, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=108&crop=smart&auto=webp&s=6d77c61972835cc18f1568dc51626ec9836e388d', 'width': 108}, {'height': 274, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=216&crop=smart&auto=webp&s=5a591a8bca5b8bd5749e46301640d63c872136e3', 'width': 216}, {'height': 406, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=320&crop=smart&auto=webp&s=2368cdcf1a261a485d266d2524ae42de22313761', 'width': 320}, {'height': 812, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=640&crop=smart&auto=webp&s=dd02b4fa36b827850027112e284878477d4cf8e3', 'width': 640}, {'height': 1218, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?width=960&crop=smart&auto=webp&s=c068c9a17f125dee94bd1b0549ef0d1abe16f630', 'width': 960}], 'source': {'height': 1246, 'url': 'https://external-preview.redd.it/cvCJQRIs7f7wZFQvZqkTXg2gKlYgcMkH-l-quk-cReY.jpg?auto=webp&s=834539c07bc382fbd3c5fe45bcda252180980d0a', 'width': 982}, 'variants': {}}]} |
Which LLama2 can I run? | 2 | I have 2 3090 and 32 GB of RAM. Which Llama2 Unquantized can I run? | 2023-07-26T08:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/15a02do/which_llama2_can_i_run/ | rajanghimire534 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15a02do | false | null | t3_15a02do | /r/LocalLLaMA/comments/15a02do/which_llama2_can_i_run/ | false | false | self | 2 | null |
I'm relatively new to LLM's but I find it odd that a supposedly "superior" GPT-4 LLM is flat out not doing what I need it to, but Llama 2 does it very well! | 37 | In a nutshell, I'm totally new to all this AI stuff, but I've managed to set up oobabooga with llama-2-13b-chat.ggmlv3.q4\_1, and let me tell you, it's been an awesome experience so far. The only thing is, I kinda wish it were a bit more open-minded when it comes to morals and ethics. I think it would be even funnier and more helpful that way. Nevertheless, it's been performing exceptionally well with the tasks I throw at it, unlike those sneaky GPT-3.5 and GPT-4 versions that used to deceive and gaslight me constantly and needed constant re-prompting. But hey, I'm genuinely excited to see where the Llama LLM will go in the future!
[GPT-3.5](https://preview.redd.it/zkz41rjzn9eb1.png?width=662&format=png&auto=webp&s=98feddd144cd0de03341d8b06ae8655ea5f92173)
[GPT-3.5](https://preview.redd.it/wqxney3wr9eb1.png?width=683&format=png&auto=webp&s=f62ae91641232722ad191cc533b7cba565602f58)
[llama-2-13b-chat.ggmlv3.q4\_1](https://preview.redd.it/4dya5ujzn9eb1.png?width=783&format=png&auto=webp&s=c4ef0494dbe8f403ff74358737f7f7ee3b451e49) | 2023-07-26T08:26:42 | https://www.reddit.com/r/LocalLLaMA/comments/159zq6y/im_relatively_new_to_llms_but_i_find_it_odd_that/ | chrisebryan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159zq6y | false | null | t3_159zq6y | /r/LocalLLaMA/comments/159zq6y/im_relatively_new_to_llms_but_i_find_it_odd_that/ | false | false | 37 | null | |
Llama-2-7b-chat producing gibberish output | 4 | I'm using the llama-2-7b-chat model through vllm, and it is giving me a gibberish output. I've checked the config.json file and I've ensured that tokenizer is the one from meta's GitHub. I don't know what I'm doing wrong and how to fix it. | 2023-07-26T08:08:00 | jacks751 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 159zejn | false | null | t3_159zejn | /r/LocalLLaMA/comments/159zejn/llama27bchat_producing_gibberish_output/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'JFueTY5GodsZFWG8oxN4z7mrzUrOnunb7jblnE3TjwI', 'resolutions': [{'height': 29, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=108&crop=smart&auto=webp&s=7631ead19286a42a71ee9418e3cd085f08cdef27', 'width': 108}, {'height': 58, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=216&crop=smart&auto=webp&s=02f5bc052fb74d1b0d8152c584ab33104306712d', 'width': 216}, {'height': 87, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=320&crop=smart&auto=webp&s=8caea6e9614619f43855389066bcf8c935634c08', 'width': 320}, {'height': 174, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=640&crop=smart&auto=webp&s=3afeca9e101a6c51ce84a142dd704fcdee57ebb1', 'width': 640}, {'height': 261, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=960&crop=smart&auto=webp&s=8114067cd3395be52af990d6d82d39496297920d', 'width': 960}, {'height': 294, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?width=1080&crop=smart&auto=webp&s=591dcb6f9936a7d3ab2d15a7faa085a747cde80d', 'width': 1080}], 'source': {'height': 436, 'url': 'https://preview.redd.it/2pej8fuso9eb1.jpg?auto=webp&s=ec2741eb13fffe081f082461325326f850579cde', 'width': 1600}, 'variants': {}}]} | ||
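For comparison, a known-good shape for this call; a sketch that assumes the Hugging Face-format chat weights (`meta-llama/Llama-2-7b-chat-hf`) and Meta's `[INST]` prompt convention. Gibberish like the screenshot often traces back to loading the raw Meta checkpoint or skipping the chat template entirely.

```python
# Sketch: vLLM with HF-format Llama-2 chat weights and the chat prompt template.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")  # HF format, not the raw Meta dump
prompt = ("[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\n\n"
          "Name three facts about llamas. [/INST]")
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256))
print(outputs[0].outputs[0].text)
```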
My workplace is looking into building PC for Local LLM models to run | 1 | We are not having issues with budget. What would be the best configuration to build? Right now I'm looking at a GeForce RTX 4090 for the graphics card; that's all I know so far. What else does everything need to be? 128GB of RAM? Which CPU would be better? | 2023-07-26T07:27:12 | https://www.reddit.com/r/LocalLLaMA/comments/159yp7c/my_workplace_is_looking_into_building_pc_for/ | SupremeConscious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159yp7c | false | null | t3_159yp7c | /r/LocalLLaMA/comments/159yp7c/my_workplace_is_looking_into_building_pc_for/ | false | false | self | 1 | null
'NoneType' object is not iterable Error when loading llama-2-7b-chat through vLLM | 3 | I downloaded the models from meta directly yesterday and I've put them on gdrive so I can access them through colab.
I tried loading the model through vllm and faced the following 2 errors first:
- config.json not found
- For this I renamed the params.json to config.json file in the model path.
- Tokenizer not found
- I uploaded tokenizer.model in the model path.
Then, when I try to load the model, it takes 5-10 minutes to load the tokenizer and then gives me a 'NoneType' object is not iterable error.
I have attached a picture of the error, but is there something wrong I am doing? Is there a different config.json file I should be using? | 2023-07-26T07:09:54 | jacks751 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 159ye11 | false | null | t3_159ye11 | /r/LocalLLaMA/comments/159ye11/nonetype_object_is_not_iterable_error_when/ | false | false | 3 | {'enabled': True, 'images': [{'id': '0uKkKWUWE26YYZheSlD_FXnCZ-RkiU2vhVbstGdphqQ', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=108&crop=smart&auto=webp&s=d27bed5de4437005cdaa37d9b837a403fa8ab06f', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=216&crop=smart&auto=webp&s=3b25767ea740f7c83404b844f3c9e21e7c75768e', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=320&crop=smart&auto=webp&s=14783379d8e77ec69b23c6be91c17740df0a3336', 'width': 320}, {'height': 454, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=640&crop=smart&auto=webp&s=b24a2bf3a04051f02572c1aac0cbdd9cfd7cc0ae', 'width': 640}, {'height': 682, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=960&crop=smart&auto=webp&s=5160217f064c96a3298618a49c538629190418eb', 'width': 960}, {'height': 767, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?width=1080&crop=smart&auto=webp&s=91014a18a1e1d4dbe4e3e977ca2420838b19fda8', 'width': 1080}], 'source': {'height': 1137, 'url': 'https://preview.redd.it/1n8cvcwfe9eb1.jpg?auto=webp&s=1365ea511b631fcc71f90980a88d5e1950ca16e1', 'width': 1600}, 'variants': {}}]} | ||
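A hedged note on the likely root cause: vLLM expects weights in Hugging Face format, and renaming `params.json` to `config.json` does not perform that conversion. The usual route is the converter script bundled with transformers, sketched below with placeholder paths:

```python
# Sketch: convert the raw Meta checkpoint to HF format, then point vLLM at it.
# One-time conversion (placeholder paths):
#   python -m transformers.models.llama.convert_llama_weights_to_hf \
#       --input_dir ./llama --model_size 7B --output_dir ./llama-2-7b-chat-hf
from vllm import LLM

llm = LLM(model="./llama-2-7b-chat-hf")  # converted dir with config.json + tokenizer
```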
'NoneType' object is not iterable Error when loading llama-2-7b-chat through vLLM | 1 | [removed] | 2023-07-26T07:05:17 | https://www.reddit.com/r/LocalLLaMA/comments/159yb4m/nonetype_object_is_not_iterable_error_when/ | comical_cow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159yb4m | false | null | t3_159yb4m | /r/LocalLLaMA/comments/159yb4m/nonetype_object_is_not_iterable_error_when/ | false | false | default | 1 | null |
Quantization and CPU only inference | 1 | I want to play with some models locally (CPU only) and I am a bit lost:
- Are GGML models my best bet for CPU-only inference?
- Which quantization? TheBloke's models have a huge list of quantization flavors for each model; which is the best for CPU only?
Thanks! | 2023-07-26T06:50:34 | https://www.reddit.com/r/LocalLLaMA/comments/159y1nd/quantization_and_cpu_only_inference/ | Low-Woodpecker-4522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159y1nd | false | null | t3_159y1nd | /r/LocalLLaMA/comments/159y1nd/quantization_and_cpu_only_inference/ | false | false | self | 1 | null |
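For the record, GGML is indeed the usual route for CPU-only inference, and the q4_K_M flavor is a common speed/quality middle ground. A minimal sketch with the llama-cpp-python bindings; the model filename is a placeholder:

```python
from llama_cpp import Llama

# Pure CPU inference: leave GPU offload off and pin threads to physical cores.
llm = Llama(
    model_path="./llama-2-7b-chat.ggmlv3.q4_K_M.bin",  # placeholder GGML file
    n_ctx=2048,
    n_threads=8,
)
out = llm("Q: What does quantization trade away? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```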
OpenAI’s Andrej Karpathy Launches Baby Llama 2 | 11 | 2023-07-26T06:25:05 | https://www.theinsaneapp.com/2023/07/openai-karpathy-launches-baby-llama-2.html | vadhavaniyafaijan | theinsaneapp.com | 1970-01-01T00:00:00 | 0 | {} | 159xlup | false | null | t3_159xlup | /r/LocalLLaMA/comments/159xlup/openais_andrej_karpathy_launches_baby_llama_2/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'F6A7MMhrmtgO4VDrfl_-k2zwZbbD-TRvLZLMXcwhV5k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=108&crop=smart&auto=webp&s=5f90afb7b473341e9d8d9fe144974bf3a3e8e23f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=216&crop=smart&auto=webp&s=53c86e7e110ba874b41c0138bd9b0fd3d8e956de', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=320&crop=smart&auto=webp&s=8384269ac2383184d7fb26439cffa43dd7af38d1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=640&crop=smart&auto=webp&s=477e03f133736f2dcdd329f6ecfd1f819342daee', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=960&crop=smart&auto=webp&s=0b2e818611cc6fac165695137971b61c4404ddc9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?width=1080&crop=smart&auto=webp&s=5cbb23c819b4f49e16a9aafedd365222d9dc8688', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/iQAFux1hz68xt3XiewMdxdNAfNIIyyrNOYWVxLU4bj8.jpg?auto=webp&s=05f731ee9dcd4f60649e398bf719e9cfd74d9d5f', 'width': 1280}, 'variants': {}}]} | ||
Does a GPU llama model have to be GGML for it to be offloaded to CPU? | 1 | I'm new to all of this: I have a model I want to use, but it is not GGML. Does that mean I can only run it on the GPU, or can I still offload some of it to RAM? Thanks | 2023-07-26T05:57:56 | https://www.reddit.com/r/LocalLLaMA/comments/159x44k/does_a_gpu_llama_model_have_to_be_ggml_for_it_to/ | PurdSurv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159x44k | false | null | t3_159x44k | /r/LocalLLaMA/comments/159x44k/does_a_gpu_llama_model_have_to_be_ggml_for_it_to/ | false | false | self | 1 | null |
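For anyone landing here: at the moment it is effectively the reverse. GGML models run on the CPU by default and can push some layers to the GPU, while GPTQ models load entirely into VRAM. With llama-cpp-python the split is one parameter; this sketch assumes a CUDA-enabled build, and the layer count is illustrative:

```python
from llama_cpp import Llama

# n_gpu_layers moves that many transformer layers into VRAM;
# the remaining layers stay in system RAM and run on the CPU.
llm = Llama(
    model_path="./llama-2-13b.ggmlv3.q4_K_M.bin",  # placeholder
    n_gpu_layers=20,  # tune down until it fits your VRAM
    n_ctx=2048,
)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```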
currently aiming to fine-tune a 7b parameter model to beat 30b/40b need difficult Benchmark questions | 20 | I'm currently training my own model that, in my opinion, rivals the responses from the top 40B models. Any questions you always seem to get bad answers to can help me benchmark and further improve the LLM, so please reply to this post with any prompts that may help. I do, of course, plan on open-sourcing the finished model. My overall reasoning for targeting 7B is the community's reliance on expensive hardware or rented cloud services; so far, the push to stretch lower-parameter models seems to stop at 13B at best. | 2023-07-26T04:40:50 | https://www.reddit.com/r/LocalLLaMA/comments/159vo8c/currently_aiming_to_finetune_a_7b_parameter_model/ | bralynn2222 | self.LocalLLaMA | 2023-07-26T04:48:10 | 0 | {} | 159vo8c | false | null | t3_159vo8c | /r/LocalLLaMA/comments/159vo8c/currently_aiming_to_finetune_a_7b_parameter_model/ | false | false | self | 20 | null |
How to stop getting these kinds of comments during roleplay? | 15 | I'm using Chronos-Hermes 13B with SillyTavern; sometimes I get these walls of text at the end of a reply (I'm censoring the character's name, my name, and the conversation):
[Shogun.](https://preview.redd.it/sye7l9xnk8eb1.png?width=605&format=png&auto=webp&s=6f502c739aa63e3d56750b9bb0b6261040a019dc)
[Censored because is NSFW.](https://preview.redd.it/5govzs4pj8eb1.png?width=615&format=png&auto=webp&s=9cbffa8ebc829e5e125e8df979a2cbb63bd36845)
There is no character note in use with KoboldAI and Chronos-Hermes 13B, so how can I instruct the AI to reply only as the character it is roleplaying? I tried changing the temperature but nothing happened; it keeps adding "reviews" of the story or entire character descriptions for some reason. | 2023-07-26T04:27:49 | https://www.reddit.com/r/LocalLLaMA/comments/159vf66/how_to_stop_getting_these_kind_of_comments_during/ | allmightyloser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159vf66 | false | null | t3_159vf66 | /r/LocalLLaMA/comments/159vf66/how_to_stop_getting_these_kind_of_comments_during/ | false | false | 15 | null |
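A common fix is registering the meta-commentary prefixes as stopping strings so generation halts before the "review" starts; SillyTavern exposes this as Custom Stopping Strings in its settings. Below is a rough sketch of the same idea against a local KoboldAI endpoint; the `stop_sequence` field name is my assumption about the United API, so verify it against your build:

```python
import requests

payload = {
    "prompt": "### Roleplay so far...\nShogun:",
    "max_length": 200,
    # Cut generation off before the model drifts into reviews or
    # character sheets; field name assumed, check your API docs.
    "stop_sequence": ["\nReview", "\nCharacter description", "\n###"],
}
r = requests.post("http://127.0.0.1:5000/api/v1/generate", json=payload)
print(r.json()["results"][0]["text"])
```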
Hacked away an abysmally simple Code Interpreter over the weekend using locally hosted Llama-2 based models. | 83 | First off, I want to thank ChatGPT for making all this possible!
---
UPDATE: Repo is up: [🦙 Baby Code](https://github.com/itsPreto/baby-code)
---
Feel free to give as much feedback as possible. I have a full-time (very busy) day job, which I'm late to as I type this-- but I'll be looking to maintain this as much as possible in my downtime!
In the last 7 months I went from knowing only Kotlin (for my Android dev job) to now basically being a full-stack engineer-- and I know I'm probably doing a bunch of things wrong, but just WOW have I already come a long way.
So I figured I'd post here first to get some initial feedback before contemplating a push to GitHub (this would be my first open source project :)
Anyways-- I've got loads of ideas for features and improvements but I would, first and foremost, like to hear from the community.
Was also learning a bit of video editing lol :3
[my goofy code interpreter](https://reddit.com/link/159v7re/video/20gg9zogg8eb1/player)
`In a nutshell`
- `Backend`: **Python Flask** (CORS for serving both the API and the HTML).
- `Frontend`: **HTML/JS/CSS** (I designed the UI 100% to my own liking, but I'd be open to changing it).
- `Engine`: **Llama.cpp** *(inference library for Llama/GGML models).*
- `Model`: **Llama-2** *(only models compatible with Llama.cpp).*
- `Arbiter`: **LangChain** (for gluing all of this together).
- `Wrapper`: **LlamaCpp** *(LangChain's wrapper around Llama.cpp for loading the models).*
(`visualized`):
    Frontend (HTML, CSS, JS)
        ^
        |  HTTP: /generate, /run
        v
    Backend (Flask)
        ^
        |  LlamaCpp wrapper
        v
    Llama.cpp (models stored locally)

    LangChain <----------------------> glues all of the above together
---
GPT-4's breakdown:
1. ***Frontend***: The user interacts with the frontend of the application, typically by entering a Python-related question or prompt into the text area and clicking on a button to send this question to the server. The request sent from the frontend to the backend is an HTTP POST request, containing the user's question in the body. If the response contains a valid Python code snippet, it is displayed in the output text area. If the response contains an error message, the error message is displayed instead.
2. ***Backend***: The backend receives the HTTP POST request from the frontend. This triggers the `/generate` endpoint. The question from the request is passed to the LlamaCpp model to generate a Python code snippet that could answer the user's question. The Python code is extracted from the model's response using a regular expression that matches Python code enclosed in triple backticks. (A minimal sketch of this endpoint follows after this list.)
* A user interacts with the frontend, triggering an HTTP POST request to the backend's /generate endpoint.
* The backend processes the request, which involves calling the LlamaCpp model to generate Python code in response to a given prompt.
* The backend extracts the Python code from the model's response and runs it using the /run endpoint.
* The backend sends an HTTP response back to the frontend with the results of the code execution, which updates the user interface accordingly. | 2023-07-26T04:16:58 | https://www.reddit.com/r/LocalLLaMA/comments/159v7re/hacked_away_an_abysmally_simple_code_interpreter/ | LyPreto | self.LocalLLaMA | 2023-07-27T00:00:17 | 0 | {} | 159v7re | false | null | t3_159v7re | /r/LocalLLaMA/comments/159v7re/hacked_away_an_abysmally_simple_code_interpreter/ | false | false | 83 | {'enabled': False, 'images': [{'id': '_ypYR2bezMLYEsU_ln7OwfOIF-wB-yXAAC9-SCtVMiM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=108&crop=smart&auto=webp&s=fefdd6224ba1f6842de3cea2a9ebc1be1c0da190', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=216&crop=smart&auto=webp&s=f5bef2cb840decb757256e251f8bd91a35905563', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=320&crop=smart&auto=webp&s=714e91d656a0d754fb2e4524583df3b45e1d78b5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=640&crop=smart&auto=webp&s=96a6382979da810bf4f006909a61ed98bfd81a8d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=960&crop=smart&auto=webp&s=d510769b86fa03dac20c0c187f1b6700fbbda0e7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?width=1080&crop=smart&auto=webp&s=6d749009e1a1c4bb768fc01db1e429ec682c60cf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vv0xL817m_NGth_qIAj8M877nYe4zcadu3LNFYBsP7A.jpg?auto=webp&s=85963ab5ffccf85ea9527e564a9788b0c697d793', 'width': 1200}, 'variants': {}}]} | |
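As promised above, here is a heavily simplified sketch of the `/generate` endpoint described in the breakdown, reconstructed from the write-up rather than taken from the repo; the model path and prompt wording are placeholders:

```python
import re
from flask import Flask, jsonify, request
from langchain.llms import LlamaCpp

app = Flask(__name__)
llm = LlamaCpp(model_path="./models/llama-2-7b.ggmlv3.q4_0.bin")  # placeholder

# Matches Python code fenced in triple backticks in the model's reply.
CODE_RE = re.compile(r"`{3}(?:python)?\n(.*?)`{3}", re.DOTALL)

@app.route("/generate", methods=["POST"])
def generate():
    question = request.json["question"]
    answer = llm(f"Write Python code that answers this question:\n{question}")
    match = CODE_RE.search(answer)
    if not match:
        return jsonify({"error": "no code block in model response"}), 422
    return jsonify({"code": match.group(1)})

if __name__ == "__main__":
    app.run(port=5000)
```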
Way to fill in blank information in structured text besides LMQL? | 5 |
https://preview.redd.it/t4zxc2zka8eb1.png?width=1914&format=png&auto=webp&s=3f3cbb47705329d955e94960d252bbf19b4886c4
I would like to create something that fills in gaps in structured text instead of having the LLM generate everything. Is LMQL my only option, or is there a simpler way?
I am asking because I want to use the Kobold API with a locally hosted instance, the way I can with LangChain through a wrapper for the API.
Is there any way I can either use the Kobold API in LMQL or achieve this result some other way?
I'm going to guess that unless I want to do a lot of writing, my only good option is to switch to LlamaCpp.
(Fine-tuning is probably out of the equation for quite some time because of money constraints, and prompts on their own, on the other hand, can be fragile and random.)
Would I be better off looking for a " symbol and typing examples for the AI in the system prompt?
I would appreciate any suggestions. Cheers. | 2023-07-26T03:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/159u7a1/way_to_fill_in_blank_information_in_structured/ | paryska99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159u7a1 | false | null | t3_159u7a1 | /r/LocalLLaMA/comments/159u7a1/way_to_fill_in_blank_information_in_structured/ | false | false | 5 | null | |
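One LMQL-free way to get that behavior is to split the template at each blank and generate only the blanks, using the next literal chunk as a stop string. A rough sketch with llama-cpp-python, since stop strings are the only feature it needs; the [BLANK] marker and model path are just illustrations:

```python
from llama_cpp import Llama

llm = Llama(model_path="./model.ggmlv3.q4_K_M.bin")  # placeholder

def fill_template(template: str) -> str:
    # The template alternates literal text with [BLANK] markers;
    # the model only ever writes the blanks.
    parts = template.split("[BLANK]")
    text = parts[0]
    for literal in parts[1:]:
        # Stop as soon as the model starts reproducing the upcoming literal.
        stop = [literal[:10]] if literal.strip() else []
        out = llm(text, max_tokens=48, stop=stop)
        text += out["choices"][0]["text"] + literal
    return text

print(fill_template('{"name": "[BLANK]", "class": "[BLANK]"}'))
```

The same loop works over any HTTP backend that supports stop sequences, so it could in principle sit behind the Kobold API as well.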
Llama2 (7B-Chat) surprisingly outputs nice essay when started with "random prompt" | 11 | Run command: **./main -m ./models/7B/ggml-model-q4_0.bin -n 1024 --repeat-penalty 1.0 --color -i --interactive-first -r "User:" --random-prompt**
Llama2 opened with the random word “If” and I filled it in
___
If I was a successful application developer who just released a successful monetizable concept utilizing Docker - that concept explained in all its details would look like:
prüm
===================================================================
Title: Docker-based Monetization of a Successful Application
===================================================================
Abstract:
This document outlines a strategy for leveraging Docker to monetize a successful application. The concept involves creating a containerized version of the application, offering it as a service to customers, and generating revenue through subscription fees or pay-per-use models. The document provides a detailed explanation of the concept, its potential benefits, and the steps required to implement it.
Introduction:
Monetizing a successful application is essential for any business to remain competitive and sustainable in today's fast-paced digital landscape. One innovative approach is to leverage Docker, an open-source platform for containerization. By packaging the application into a container, businesses can offer it as a service to customers, generating revenue through various pricing models. In this document, we will explore the Docker-based monetization concept in detail, including its potential benefits, the steps required to implement it, and potential challenges.
Potential Benefits:
1. Increased Revenue: By packaging the application into a container, businesses can offer it as a service to customers, generating revenue through subscription fees or pay-per-use models.
2. Scalability: Docker containers can be easily scaled to meet the needs of a growing customer base, without requiring significant investment in infrastructure.
3. Flexibility: The containerized application can be deployed on various platforms, such as cloud, on-premise, or hybrid environments, providing customers with flexibility in how they consume the service.
4. Ease of Deployment: With Docker, the application can be deployed quickly and easily, reducing deployment times and minimizing the risk of errors during the deployment process.
5. Improved Security: Docker containers provide an additional layer of security, as each container is isolated from the rest of the system, reducing the risk of security breaches.
6. Better Performance: Docker containers can improve application performance by providing faster start-up times, improved resource utilization, and reduced overhead.
Steps Required to Implement:
1. Identify the Target Audience: Determine the target audience for the application, including their needs, preferences, and expectations.
2. Define the Requirements: Define the requirements for the application, including the features and functionalities needed to meet the target audience's needs.
3. Develop the Application: Develop the application using Docker, incorporating the required features and functionalities.
4. Test and Validate: Test and validate the application to ensure it meets the requirements and works as expected.
5. Package and Deploy: Package the application into a container, and deploy it to the target environment.
6. Monitor and Maintain: Monitor and maintain the application, ensuring it continues to meet the needs of the target audience.
Challenges:
1. Learning Curve: There may be a learning curve for developers and IT teams who are not familiar with Docker and containerization technologies.
2. Security Concerns: There may be concerns about the security of the containerized application, particularly if the container is not properly configured or secured.
3. Integration: Integrating the containerized application with existing systems and infrastructure may present challenges, particularly if the systems are not compatible with Docker.
4. Limited Resources: Containerization may not be feasible for applications that require extensive resources, such as large databases or complex simulations.
5. Lack of Standardization: There may be a lack of standardization in the use of Docker and containerization technologies, which can make it difficult to ensure consistency and reproducibility across different environments and teams.
Best Practices:
1. Plan and Design: Plan and design the application and its containerization strategy, taking into account the target audience, requirements, and constraints.
2. Use Official Images: Use official Docker images to ensure consistency and reproducibility across different environments and teams.
3. Use Secure Containers: Use secure containers to protect the application and its data, and to meet security requirements.
4. Monitor and Log: Monitor and log the application to ensure it is working as expected and to identify any issues or errors.
5. Test and Validate: Test and validate the application to ensure it meets the requirements and works as expected.
6. Document and Share: Document and share the containerization strategy and results, to ensure consistency and reproducibility across different environments and teams.
Future Developments:
1. Improved Security: Containers will continue to improve security features, such as better isolation, to meet the increasing security demands of enterprises.
2. Better Performance: Containers will continue to improve performance, such as faster startup times, to meet the needs of modern applications.
3. Simplified Management: Containers will continue to simplify management, such as automated scaling and failover, to make it easier for developers and ops teams to work together.
4. Hybrid and Multi-Cloud: Containers will continue to support hybrid and multi-cloud strategies, to give enterprises the flexibility to deploy applications across different environments.
5. AI and Machine Learning: Containers will continue to incorporate AI and machine learning to improve automation, security, and performance.
6. Collaboration and Interoperability: Containers will continue to improve collaboration and interoperability, to make it easier for developers and ops teams to work together and integrate with other technologies.
Conclusion:
Containerization has become a crucial aspect of modern software development and deployment. By using containers, developers can package their applications and their dependencies into a single container that can be run consistently across different environments, such as development, testing, staging, and production. This consistency and portability make it easier for developers and ops teams to work together, reduce the risk of errors, and improve the overall efficiency of the development and deployment process. Containers will continue to evolve and improve in the coming years, incorporating new technologies and addressing emerging challenges, such as security, performance, and management. As containers continue to advance, they will remain an essential tool for software development and deployment. | 2023-07-26T02:15:06 | https://www.reddit.com/gallery/159soaj | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 159soaj | false | null | t3_159soaj | /r/LocalLLaMA/comments/159soaj/llama2_7bchat_surprisingly_outputs_nice_essay/ | false | false | 11 | null | |
Unfiltered version of open-assistant/guanaco dataset | 9 | Should we prepare an unfiltered version of this dataset? | 2023-07-26T01:21:41 | https://www.reddit.com/r/LocalLLaMA/comments/159rilb/unfiltered_version_of_openassistantguanaco_dataset/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159rilb | false | null | t3_159rilb | /r/LocalLLaMA/comments/159rilb/unfiltered_version_of_openassistantguanaco_dataset/ | false | false | self | 9 | null |
QLora training, better to use base Llama 2 or an already fine-tuned version (Nous Hermes, WizardLM, etc)? Best way to format the dataset for training? | 3 | Hi, I help out in a fan translation community for Chinese games, usually wuxia or xianxia themed. Sadly, this genre of games does not translate well with Google, DeepL, etc. We have had good success using ChatGPT as an aid, but it would not be feasible to feed it a spreadsheet with thousands of lines to translate.
One of us has already curated a 46-thousand-line key-value-pair (untranslated and translated) dataset from previous translation mods we've made. They had mixed results with training the M2M100 480m model. I thought I would give making a QLoRA a try using their dataset; I remember that when I tested [Chinese Nous Hermes](https://huggingface.co/coyude/Nous-Hermes-13b-Chinese-GGML) I got some pretty promising results. This brings me to my question: would it be better to use an already fine-tuned model like WizardLM 1.2 or Nous Hermes Llama 2, or should I just use the base Llama 2 for training on this dataset?
I will be trying to train this using QLoRA on a Google Colab notebook unless someone can suggest a better method (my local machine is a laptop with an 8GB 3070). This is my first time trying to train any sort of AI, and I'm not super familiar with Python, so any suggestions or help will be appreciated.
My last question is: how should the dataset be formatted for optimal results with QLoRA training? I was initially going to do something like this:
[
    {
        "instruction": "Translate this text from Chinese to English for a Wuxia themed game.",
        "input": "对了{junior},在阁中修业迈入了第二年,接下来应当会有更多阁外派遣吧?",
        "output": "By the way, {junior}, after entering the second year of training in the Pavilion, there should be more dispatches outside the Pavilion?"
    },
    ...
]
The AI would need to retain those curly brackets if possible, etc. There are sometimes things like that, as well as markup for things like font color, mixed in. | 2023-07-26T00:58:16 | https://www.reddit.com/r/LocalLLaMA/comments/159qzgs/qlora_training_better_to_use_base_llama_2_or_an/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 159qzgs | false | null | t3_159qzgs | /r/LocalLLaMA/comments/159qzgs/qlora_training_better_to_use_base_llama_2_or_an/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '6q6Y2tDda1x3yhz5MsJHEz6XFIE8uQ8cIyWyaPYZSSc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=108&crop=smart&auto=webp&s=d80482e8642de7b9ca2ca40d5f0b33cdff9279a2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=216&crop=smart&auto=webp&s=d72bfb824c359cbf0547a817791d3ea051625798', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=320&crop=smart&auto=webp&s=6cffdd47cd174ecbc05137835fa2a96059a137b0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=640&crop=smart&auto=webp&s=aeb95b2ea6dfa72679b25da733e6d764163a66d2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=960&crop=smart&auto=webp&s=a6c2784c54d007195977b14ecbf94e3a2667ca1a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?width=1080&crop=smart&auto=webp&s=1b6fffd64bd4fa57de0d3c42c49f96c4c7464d9a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jPe4mRYKQTC_LaFP1sgw2vXrcej79Jwqzd8wahsUx2g.jpg?auto=webp&s=d7ed719efe28350eeca4505414d53666976c474a', 'width': 1200}, 'variants': {}}]} |
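On the formatting question: the alpaca-style layout sketched above is what most QLoRA notebooks consume, and it is easy to generate from the existing key-value data. A small sketch assuming the pairs are exported as a two-column CSV; the file names, column order, and placeholder-preservation wording are all assumptions:

```python
import csv
import json

# Spelling out the constraints in the instruction helps the model
# keep {placeholders} and markup intact.
INSTRUCTION = ("Translate this text from Chinese to English for a Wuxia "
               "themed game. Preserve placeholders like {junior} and any "
               "markup exactly as written.")

records = []
with open("pairs.csv", newline="", encoding="utf-8") as f:  # assumed export
    for zh, en in csv.reader(f):
        records.append({"instruction": INSTRUCTION, "input": zh, "output": en})

with open("train.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```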