title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Interactive tooling for local agents | 1 | Wouldn't it be awesome if an LLM would interactively ask questions and make suggestions while you write?
I can see how far more granular, customized data could help people make their local setups more useful, more quickly. | 2023-09-28T16:26:12 | https://www.reddit.com/r/LocalLLaMA/comments/16ukipe/interactive_tooling_for_local_agents/ | ComplexRelevant2636 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ukipe | false | null | t3_16ukipe | /r/LocalLLaMA/comments/16ukipe/interactive_tooling_for_local_agents/ | false | false | self | 1 | null |
What type of Data Types do LLMs/LLaMa use? | 1 | When you look at the performance of a GPU, it lists FP64/FP32/FP16/Int8, etc. What data types are we using when loading our quantized models, or even non-quantized ones? I ask because, looking at the performance on my machine, I would think everything is being done in FP32, but I thought LLMs used FP16 most of the time. | 2023-09-28T15:55:13 | https://www.reddit.com/r/LocalLLaMA/comments/16ujppi/what_type_of_data_types_do_llmsllama_use/ | ccbadd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ujppi | false | null | t3_16ujppi | /r/LocalLLaMA/comments/16ujppi/what_type_of_data_types_do_llmsllama_use/ | false | false | self | 1 | null |
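To ground the question above: the common GPU data types differ in width and precision, most unquantized GPU inference runs in FP16/BF16, and quantized formats (GPTQ, GGUF) store packed integers that are typically dequantized for compute. A small NumPy sketch of what those types look like, plus back-of-the-envelope weight-memory costs (the per-dtype byte counts ignore KV cache, activations, and quantization scale overhead):

```python
import numpy as np

# Numeric properties of the common GPU dtypes.
for dt in (np.float32, np.float16):
    info = np.finfo(dt)
    print(dt.__name__, f"bits={info.bits}", f"eps={info.eps}", f"max={info.max}")
print(f"int8 range=[{np.iinfo(np.int8).min}, {np.iinfo(np.int8).max}]")

# Rough weight memory for a 7B-parameter model per dtype.
for name, bytes_per_weight in [("fp32", 4), ("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"7B weights in {name}: {7e9 * bytes_per_weight / 1024**3:.1f} GiB")
```

If a 7B model is taking roughly 26 GiB instead of 13 GiB, that's a sign it was loaded in FP32 rather than FP16.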
Poor LoRA tools | 5 | I'd imagine we'd be getting far more out of the current models by being able to apply our own trained LoRAs to these LLMs to get them to work for our specific needs.
Right now we seem to be churning out more and more of the same Llama 2 models with minor, if any, differences, while still having no LoRA-creation tools that work or don't require extensive knowledge to be utilised.
What I'm really trying to ask is: where are all the LoRAs? Where's the Gradio-wrapped interface for making them? | 2023-09-28T15:46:46 | https://www.reddit.com/r/LocalLLaMA/comments/16uji7v/poor_lora_tools/ | AurrenTheWolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uji7v | false | null | t3_16uji7v | /r/LocalLLaMA/comments/16uji7v/poor_lora_tools/ | false | false | self | 5 | null |
List of Open Source UIs/Webapps for LLM Chatbots | 33 | Realized that many people are building their own chatbots from scratch, which seems insane! So I compiled an "awesome list" of high-quality, plug-and-play chatbots, some of which support LLaMA/HF models out of the box.
[https://github.com/snowfort-ai/awesome-llm-webapps](https://github.com/snowfort-ai/awesome-llm-webapps)
Right now there's a strong starting group but more coverage would be great, especially around local models and simple streamlit/gradio UIs, so please share any good ones you know of. It's limited to actively maintained/developed projects since models/toolsets change so quickly and temporary projects would quickly become stale. | 2023-09-28T15:36:37 | https://www.reddit.com/r/LocalLLaMA/comments/16uj91r/list_of_open_source_uiswebapps_for_llm_chatbots/ | snowfort_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uj91r | false | null | t3_16uj91r | /r/LocalLLaMA/comments/16uj91r/list_of_open_source_uiswebapps_for_llm_chatbots/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'YN6cYQQpgQhjuCKbXGJmnwoW_ow-L2Ii2zsiopGeoeE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=108&crop=smart&auto=webp&s=a7d0b04f2896c14201ae050ab427cb2c2d8b5d31', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=216&crop=smart&auto=webp&s=6152843d70759f50835bad5781b51fd631f83693', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=320&crop=smart&auto=webp&s=beea36c2ce134224c1157fbccc7d266b3d27972c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=640&crop=smart&auto=webp&s=ecad9d28872c9ddcd8cc4c25f19a6cbbeb231945', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=960&crop=smart&auto=webp&s=152a6b76cc824f5a0ea45ab4b25876f952f8014d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?width=1080&crop=smart&auto=webp&s=648b5cabcb34d4b062f0b03839d0bdd35af919f7', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/P5lmE6yrt3YuuOKVwWocqsGS9cC6OK030_htsIpvBGw.jpg?auto=webp&s=52858d5175e1663fab0452518b55e751d38ca6f3', 'width': 1200}, 'variants': {}}]} |
Cloud Options for testing LLM options (like Replicate, Together.ai, etc.) | 1 | [removed] | 2023-09-28T14:54:22 | https://www.reddit.com/r/LocalLLaMA/comments/16ui6ds/cloud_options_for_testing_llm_options_like/ | metsfan1025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ui6ds | false | null | t3_16ui6ds | /r/LocalLLaMA/comments/16ui6ds/cloud_options_for_testing_llm_options_like/ | false | false | self | 1 | null |
Phi-1.5 finetunes and quantization's: Late September '23 edition | 1 | [removed] | 2023-09-28T14:19:50 | https://www.reddit.com/r/LocalLLaMA/comments/16uhbcy/phi15_finetunes_and_quantizations_late_september/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uhbcy | false | null | t3_16uhbcy | /r/LocalLLaMA/comments/16uhbcy/phi15_finetunes_and_quantizations_late_september/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3cxODlbih2qIQ-Qz9TOJSgByP2mgwayu1N0D8OG073s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=108&crop=smart&auto=webp&s=25176b92bb1e26f2c819378d495d619d06239a13', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=216&crop=smart&auto=webp&s=3b451276ee8c2e50a7c3026b86a43c97c34cc8e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=320&crop=smart&auto=webp&s=efbf321a1ab35b370a8ef2bb4f6774fd5babbb59', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=640&crop=smart&auto=webp&s=f1802108a048656720bb2034d0ac588feff1568c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=960&crop=smart&auto=webp&s=a0d12d8eff4f5e67bf09fec88c8d4f4d05c08d70', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?width=1080&crop=smart&auto=webp&s=2b0995f8a3ca0469db2de902657886c77819c305', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RCRgGANgOFhF64DNhHUMRz_K6LU58kyYmwvZUleyaVo.jpg?auto=webp&s=b94e8e5e8a76fcd80eb5c6069daf333eef105eee', 'width': 1200}, 'variants': {}}]} |
Using LLAMA 2 for Document Sectioning in Medical Letters | 4 | Hello good people,
I'm curious if there's a method to use LLAMA 2 to split a document, like a medical letter, into sections like introduction, medications, summary, and so on? | 2023-09-28T14:16:57 | https://www.reddit.com/r/LocalLLaMA/comments/16uh8tr/using_llama_2_for_document_sectioning_in_medical/ | mohamednajdawi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uh8tr | false | null | t3_16uh8tr | /r/LocalLLaMA/comments/16uh8tr/using_llama_2_for_document_sectioning_in_medical/ | false | false | self | 4 | null |
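One workable pattern is to ask the model for JSON output and parse it defensively. A sketch; the section list, prompt wording, and JSON-extraction heuristic are assumptions to adapt, and the actual generation call is left to your Llama 2 runtime:

```python
import json

SECTIONS = ["introduction", "medications", "summary"]  # adapt to your letters

def build_prompt(letter: str) -> str:
    # Llama 2 chat-style instruction asking for JSON only; the wording is
    # illustrative and usually needs a few rounds of prompt tuning.
    return (
        "[INST] Split the following medical letter into these sections: "
        + ", ".join(SECTIONS)
        + '. Respond with JSON only, e.g. {"introduction": "..."}.\n\n'
        + letter
        + " [/INST]"
    )

def parse_sections(model_output: str) -> dict:
    # Models often wrap JSON in extra chatter; grab the outermost braces.
    start = model_output.index("{")
    end = model_output.rindex("}") + 1
    return json.loads(model_output[start:end])

# reply = your_llama2_runtime(build_prompt(letter_text))  # e.g. via llama.cpp
print(parse_sections('Sure! {"summary": "Patient is stable."}'))
```

For clinical text, spot-check the output against the source letter; small models can silently drop or hallucinate content.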
Swiss Army Llama: Do tons of useful stuff with local LLMs with a REST API | 54 | GitHub Repo: [https://github.com/Dicklesworthstone/swiss_army_llama](https://github.com/Dicklesworthstone/swiss_army_llama)
This project originally started out with a focus on easily generating embeddings from Llama2 and other llama_cpp (GGUF) models and storing them in a database, all exposed via a convenient REST API. But since then, I've added a lot more functionality:
1) New endpoint for generating text completions (including specifying custom grammars, like JSON).
2) Get all the embeddings for an entire document-- can be any kind of document (plaintext, PDFs, .doc/.docx, etc.) and it will do OCR on PDFs and images.
3) Submit an audio file (wav/mp3) and it uses whisper to transcribe it into text, then gets the embeddings for the text (after combining the transcription segments into complete sentences).
4) Integrates with my new vector similarity library (`pip install fast_vector_similarity`) to provide an "advanced" semantic search endpoint. This uses a 2-step process: first it uses FAISS to quickly narrow down the set of stored embeddings using cosine similarity, then it uses my vector similarity library to compute a bunch of more sophisticated (and computationally intensive) measures for the final ranking.
As a result, I changed the project name to Swiss Army Llama to reflect the new project goal: to be a one stop shop for all your local LLM needs so you can easily integrate this technology in your programming projects. As I think of more useful endpoints to add (I constantly get new feature ideas from my own separate projects-- whenever I want to do something that isn't covered yet, I add a new endpoint or option), I will continue growing the scope of the project. So let me know if there is some functionality that you think would be generally useful, or at least extremely useful for you!
A big part of what makes this project useful to me is the FastAPI backbone. Nothing beats a simple REST API with a well-documented Swagger page for ease and familiarity, especially for developers who aren't familiar with LLMs. You can set this up in 1 minute on a fresh box using the docker TLDR commands, come back in 15 minutes, and it's all set up with downloaded models and ready to do inference or get embeddings. It also lets you distribute the various pieces of your application on different machines connected over the internet.
​ | 2023-09-28T14:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/16uh4kr/swiss_army_llama_do_tons_of_useful_stuff_with/ | dicklesworth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16uh4kr | false | null | t3_16uh4kr | /r/LocalLLaMA/comments/16uh4kr/swiss_army_llama_do_tons_of_useful_stuff_with/ | false | false | self | 54 | {'enabled': False, 'images': [{'id': 'fCXOgyZ2W8tGxxw2efspDQHILZB0k-ShLk5hnm-Ln-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=108&crop=smart&auto=webp&s=5268322afe636d271de4599f2a579ca2c22e61b2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=216&crop=smart&auto=webp&s=a24e849e17c6915ea037e4e0f64d21cfdb22a568', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=320&crop=smart&auto=webp&s=1ede82f935f841ace20923b0bcfac8d6fc52c0db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=640&crop=smart&auto=webp&s=3513d9df17507944b287ef31a52ca6ba1ba536ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=960&crop=smart&auto=webp&s=8256edc9c5954d80337cd31e88824ad5657add06', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?width=1080&crop=smart&auto=webp&s=ea96ee0ac3d2903845c7a39cc98fd8411e63d257', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c_KN-jhqxFfSSOkT6xTD1IVPczxPRqmHatKORfWiH2g.jpg?auto=webp&s=ee10570965102308fd04ed40a910a9dadc34cecd', 'width': 1200}, 'variants': {}}]} |
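For readers new to this kind of service, calling a REST endpoint from Python looks roughly like the sketch below. The port, endpoint path, and JSON field names here are illustrative guesses, not the project's actual schema; its Swagger page documents the real one:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8089"  # assumption: wherever the service runs

def build_embedding_request(text: str, model_name: str = "llama2_7b") -> urllib.request.Request:
    # Endpoint path and field names are hypothetical; check the Swagger
    # page ("/docs") of your deployment for the real request schema.
    payload = json.dumps({"text": text, "llm_model_name": model_name}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/get_embedding_vector_for_string/",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_embedding_request("hello world")
print(req.full_url)

# Sending it (requires the service to be running):
# with urllib.request.urlopen(req) as resp:
#     embedding = json.loads(resp.read())
```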
Need advice on Strategys for Local Hosting | 3 | I've been lurking for some time here and find the discussions interesting, but I have been hesitant to pull the trigger on jumping head first into a local hosting/training solution. Here are my concerns.
1. Local training is expensive both from a capital and operational cost perspective.
2. Maintaining data privacy when training in the cloud and running inference locally on lower-cost GPUs has intangible costs, mainly leaking intent and a lack of end-use privacy. My end use is mainly for research around machine learning topics, paper synthesis, etc.
3. Uncertainty around the VRAM needed to run inference on a local model. I'm not sure if I need multiple 40 GB GPUs' worth of VRAM just to get a 32B-70B model running for inference, or if I can split it between RAM and VRAM.
Any input is helpful here. Thanks for your time. | 2023-09-28T13:30:24 | https://www.reddit.com/r/LocalLLaMA/comments/16ug4z3/need_advice_on_strategys_for_local_hosting/ | The_Last_Monte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ug4z3 | false | null | t3_16ug4z3 | /r/LocalLLaMA/comments/16ug4z3/need_advice_on_strategys_for_local_hosting/ | false | false | self | 3 | null |
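On the VRAM concern: a back-of-the-envelope check goes a long way. Weights take roughly params x bits/8 bytes, and backends like llama.cpp can offload only some layers to VRAM and keep the rest in system RAM (slower, but it runs). A rough sketch, where the flat overhead allowance is an assumption and real usage grows with context length:

```python
def fits_in_vram(n_params_billion: float, bits_per_weight: float,
                 vram_gb: float, overhead_gb: float = 2.0) -> bool:
    """Back-of-the-envelope: weight bytes plus a flat overhead allowance
    for KV cache and activations (an assumption, not a measured figure)."""
    weights_gib = n_params_billion * 1e9 * (bits_per_weight / 8) / 1024**3
    return weights_gib + overhead_gb <= vram_gb

# 70B at 4-bit is ~32.6 GiB of weights, so 2x24 GB of VRAM can hold it:
print(fits_in_vram(70, 4, 48))   # True
# ...while FP16 (~130 GiB) cannot; that's where partial CPU offload comes in.
print(fits_in_vram(70, 16, 48))  # False
```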
General questions on LLMs and how to use them (post your own) | 5 | Now since I have a few questions myself, maybe we can use this thread for posting stuff that people will look up now and then, like some form of FAQ. If not, I hope at least my questions will be answered. :)
(I will create my questions as comments.) | 2023-09-28T12:59:29 | https://www.reddit.com/r/LocalLLaMA/comments/16ufefp/general_questions_on_llms_and_how_to_use_them/ | psi-love | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ufefp | false | null | t3_16ufefp | /r/LocalLLaMA/comments/16ufefp/general_questions_on_llms_and_how_to_use_them/ | false | false | self | 5 | null |
Small First Aid / survival LLM | 8 | Is there any LLM specifically well suited for first aid and/or survival? For situations without cell reception, such an LLM could be quite useful.
7B LLMs now run on high-end phones. Combined with satellite based emergency service this could make emergency situations in areas without cell receptions potentially a lot more survivable.
Factual correctness especially can be a big issue with such a small model. But trained in the [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) style, I think a 3B to 7B first aid / survival specialist model could be quite useful.
(not saying you should go into the wilderness unprepared or underskilled, just to have it as an additional resource/tool if needed) | 2023-09-28T12:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/16ufczn/small_first_aid_survival_llm/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ufczn | false | null | t3_16ufczn | /r/LocalLLaMA/comments/16ufczn/small_first_aid_survival_llm/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
Is anyone interested in a prompt template converter? | 13 | Hi, llamers – after downloading a new model, I got stuck for a while by rewriting my local prompt files from one model template to another, which is just an unpleasant process.
From:
`<|prompter|>*your text here*<|endoftext|><|assistant|>`
To:
`USER: *your text here* ASSISTANT:`
Is anyone interested in a single-page app (js only, no back-end, no registration etc.) that will automatically convert prompts from one model format to another? Just curious; maybe I will help save time for someone by this simple app. | 2023-09-28T12:34:24 | https://www.reddit.com/r/LocalLLaMA/comments/16ueu32/is_anyone_interested_in_a_prompt_template/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ueu32 | false | null | t3_16ueu32 | /r/LocalLLaMA/comments/16ueu32/is_anyone_interested_in_a_prompt_template/ | false | false | self | 13 | null |
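For what it's worth, the single-turn core of such a converter is tiny, whether written in JS or Python. A Python sketch (the template registry and names are illustrative; multi-turn histories need real parsing):

```python
TEMPLATES = {
    "openassistant": "<|prompter|>{prompt}<|endoftext|><|assistant|>",
    "vicuna": "USER: {prompt} ASSISTANT:",
}

def convert(text: str, src: str, dst: str) -> str:
    # Peel the source template's prefix/suffix off, re-wrap in the target.
    prefix, suffix = TEMPLATES[src].split("{prompt}")
    if not (text.startswith(prefix) and text.endswith(suffix)):
        raise ValueError("text does not match the source template")
    inner = text[len(prefix):len(text) - len(suffix)]
    return TEMPLATES[dst].format(prompt=inner)

print(convert("<|prompter|>hello<|endoftext|><|assistant|>",
              "openassistant", "vicuna"))  # USER: hello ASSISTANT:
```

A single-page JS version would be the same two string operations plus a dropdown per template.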
Llama with HF Candle | 2 | Has anybody tried running Llama with HF Candle?
If yes, how much better is it than running it in Python? | 2023-09-28T11:12:13 | https://www.reddit.com/r/LocalLLaMA/comments/16ud46p/llama_with_hf_candle/ | Spiritual-Rub925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ud46p | false | null | t3_16ud46p | /r/LocalLLaMA/comments/16ud46p/llama_with_hf_candle/ | false | false | self | 2 | null |
is there a step by step guide for training loras? | 6 | I have an RTX 4080. I tried training LoRAs, but literally none of them worked. I tried many models as a base and tried to load them several ways, and even though I kept settings at their defaults, they either won't start training, or I get all sorts of errors when I try to train, or I get an error a while after training has started that then says training is over, but the LoRA doesn't work.
I'm using the oobabooga text-generation-webui.
Preferably, I'd like to be able to train using 4-bit GPTQ models.
Any help would be appreciated. | 2023-09-28T11:08:17 | https://www.reddit.com/r/LocalLLaMA/comments/16ud1dw/is_there_a_step_by_step_guide_for_training_loras/ | 3lirex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ud1dw | false | null | t3_16ud1dw | /r/LocalLLaMA/comments/16ud1dw/is_there_a_step_by_step_guide_for_training_loras/ | false | false | self | 6 | null |
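Separate from the tooling problems, it can help to see the math a LoRA trainer is fitting: the base weight stays frozen and only a low-rank delta B @ A (scaled by alpha/r) is trained, with B initialized to zero so training starts from the unmodified model. A NumPy sketch of that idea (not the oobabooga trainer itself):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 8                       # hidden size, LoRA rank (r << d)
W = rng.standard_normal((d, d))     # frozen base weight
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))                # zero init => the delta starts at zero

def forward(x, alpha=16.0):
    # y = x W^T + x (B A)^T * (alpha / r); only A and B receive gradients.
    return x @ W.T + (x @ (B @ A).T) * (alpha / r)

x = rng.standard_normal((1, d))
assert np.allclose(forward(x), x @ W.T)  # untouched model before training
print("trainable params:", A.size + B.size, "vs full layer:", W.size)
```

The tiny trainable-parameter count is why LoRA fits on a single consumer GPU at all; GPTQ-base training adds a dequantization step on top of this same structure.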
How to use a weaviate cluster in kubernetes for storing a vectorstore index? | 3 | Looking for an easy-to-follow tutorial for someone who doesn't have much experience with Kubernetes.
I joined a new company a couple of weeks ago, and they use Rancher for Kubernetes deployments, which I am totally unfamiliar with. I have done a couple of Kubernetes deployments with GKE in the past, but Rancher seems totally different from Google Cloud, and none of the scripts or code are easily translatable/transferable.
And Weaviate's own documentation is almost non-existent regarding Kubernetes deployments. In fact, I tried their own cloud solution (WCS) and even that deployment was a struggle because of the poor documentation. It took hours to figure out what schema I should define, or the logic behind the way the schema is defined. For one thing it seems unnecessarily complicated, and on top of that there's a lack of explanation of the schema structure or of how to create one from scratch.
Anyway, instead of storing my vectorstore index in WCS, I now want to store it in my company's Weaviate cluster in Rancher kubernetes. But after browsing the web for a day, I have found absolutely zero documentation or tutorials on how to do this. Can anyone help me out here? | 2023-09-28T11:00:43 | https://www.reddit.com/r/LocalLLaMA/comments/16ucw1i/how_to_use_a_weaviate_cluster_in_kubernetes_for/ | ResearcherNo4728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ucw1i | false | null | t3_16ucw1i | /r/LocalLLaMA/comments/16ucw1i/how_to_use_a_weaviate_cluster_in_kubernetes_for/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'uCKgsARIVYHyputdKdO9ksztdsoTyZA0bbN34MOZOw0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=108&crop=smart&auto=webp&s=c09124bc959a674d36d5f534868065d24d87339b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=216&crop=smart&auto=webp&s=7422012f1febb2a57bf33fd29933e406f386ca49', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=320&crop=smart&auto=webp&s=b4c7b1a27fab00a5c2a75cb57aeb29db29f3750e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=640&crop=smart&auto=webp&s=a96a0f4184ed1ec446bbb227ec1a739238cfaa34', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=960&crop=smart&auto=webp&s=f6379147de1051de94cbc293d8a8b4f577feed22', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?width=1080&crop=smart&auto=webp&s=85c57c9813e27c8ef72eceb0c2836112bfcab8c5', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/jGFU8U0oW4Cij03vKaJ4s_RJYDpKHF0Z9n_gm_rplvU.jpg?auto=webp&s=3df39fafe52e1d275495f2c2289bbe95c3300c96', 'width': 1200}, 'variants': {}}]} |
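On the schema pain specifically (independent of Rancher): a minimal class definition is much smaller than the docs suggest. A sketch in the shape of Weaviate's v1 REST schema, where "vectorizer": "none" means you supply your own embedding vectors; treat the field names as assumptions and verify them against your Weaviate version:

```python
# Minimal class schema for storing externally computed vectors.
document_class = {
    "class": "Document",
    "vectorizer": "none",            # we push our own embeddings
    "properties": [
        {"name": "text", "dataType": ["text"]},
        {"name": "source", "dataType": ["text"]},
    ],
}

# With the weaviate-client package, registration is roughly (hypothetical wiring):
# client.schema.create_class(document_class)
print(sorted(document_class.keys()))
```

The same JSON works whether the cluster lives in WCS or in a Kubernetes deployment; only the client URL changes.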
Pipeline for model evaluation in a batch | 4 | There are so many amazing models and fine-tunes being published on a daily basis that it's hard to keep track of everything, and more often than not, the official benchmark numbers don't tell the full story of how well models will perform for my specific use cases.
I have downloaded 20-30 models, and most of the time I just test them manually with some prompts, but I would like to find a framework that can help me automate that process, at least partially.
I imagine the following ideal workflow:
- I get an idea for a prompt I want to test
- I feed that prompt as a test case into the framework
- I set specific conditions for what constitutes a response that passes the test
- if the range of valid responses is more nuanced, I want to be able to leverage the OpenAI ChatGPT API for evaluation, with a prompt template like "The ideal response to the prompt would have the following characteristics and key points: [...]. How well does the following response fulfill these criteria? [model response]. Please give a score from 0 (not at all) to 10 (perfect response)."
- based on the response evaluation, I would get a numeric score for the quality of my downloaded models with respect to various prompts and topics
- the score should ideally be stored in a database, or at least saved to a file
- once the framework is done with a model, it should unload it, load the next one, and run the test cases against that one, until all the models have been tested
- when I download a new model, I want to instruct the framework to run all my tests, or just a subset of them, against that model and score its responses
Does anyone of you know about a framework, maybe a GitHub repository, that can fulfill at least some of those tasks?
If the workflow I outlined isn't fully possible with existing tools yet, I am willing to put in the necessary coding-work.
I am sure there are many people who would like to test a large quantity of models with a large quantity of cases so I am very keen to learn if anyone has been able to find a good solution for that!
Thank you for your time! | 2023-09-28T10:54:06 | https://www.reddit.com/r/LocalLLaMA/comments/16ucrmx/pipeline_for_model_evaluation_in_a_batch/ | minecraft_simon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ucrmx | false | null | t3_16ucrmx | /r/LocalLLaMA/comments/16ucrmx/pipeline_for_model_evaluation_in_a_batch/ | false | false | self | 4 | null |
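Before a full framework exists, the workflow above collapses into a small loop with pluggable callables; everything here (function names, the test-case schema, the judge) is illustrative scaffolding, with real model loading and the ChatGPT-judge call left to the reader:

```python
def run_suite(model_names, test_cases, load, generate, judge):
    """Minimal batch-eval harness. load(name) -> model,
    generate(model, prompt) -> str, judge(prompt, criteria, response) -> 0..10."""
    scores = {}
    for name in model_names:
        model = load(name)
        scores[name] = {
            case["id"]: judge(case["prompt"], case["criteria"],
                              generate(model, case["prompt"]))
            for case in test_cases
        }
        del model  # unload before loading the next model
    return scores

# Dummy wiring to show the shape; swap in real load/generate/judge functions.
cases = [{"id": "t1", "prompt": "What is 2+2?", "criteria": "answers 4"}]
result = run_suite(["model-a"], cases,
                   load=lambda name: name,
                   generate=lambda model, prompt: "The answer is 4.",
                   judge=lambda p, c, r: 10.0 if "4" in r else 0.0)
print(result)  # {'model-a': {'t1': 10.0}}
```

Persisting `scores` to SQLite and skipping already-scored (model, case) pairs covers the "run only against new models" requirement.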
Cost comparison of Llama 2 vs GPT? | 1 | Phil Schmid recently posted an article on Hugging Face regarding deployment costs of Llama on SageMaker here: https://huggingface.co/blog/llama-sagemaker-benchmark
Here's pricing of ChatGPT API: https://openai.com/pricing
How is OpenAI API so cheap as compared to cloud deployment? What possibly could they be doing? | 2023-09-28T10:36:50 | https://www.reddit.com/r/LocalLLaMA/comments/16ucgex/cost_comparison_of_llama_2_vs_gpt/ | FolksyCobalt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ucgex | false | null | t3_16ucgex | /r/LocalLLaMA/comments/16ucgex/cost_comparison_of_llama_2_vs_gpt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6GF5SFLQuxM57YUbXvvVb4a8VRrVD6a78kruPuKM9X8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=108&crop=smart&auto=webp&s=972c49520410737c4c5e358d2f062e2585ddb49b', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=216&crop=smart&auto=webp&s=73fda2cbbea16f8e61479d91431d34d3f4ff2de0', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=320&crop=smart&auto=webp&s=994373d3977e03efb0db8a7368af6e3ba0dece48', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=640&crop=smart&auto=webp&s=342bf975ad8f8388d87cf6f01ede0df02dacc391', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=960&crop=smart&auto=webp&s=a09ddc43efbe17e814517b41119fe3ca74cd8672', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?width=1080&crop=smart&auto=webp&s=732c9e546b2f7b6ff65899eaa93e36abd99ea66e', 'width': 1080}], 'source': {'height': 1248, 'url': 'https://external-preview.redd.it/7zTJ9OP4lj9WWoZtUPBIhWhZmPUxjsSDv04A1lyPr_o.jpg?auto=webp&s=308f5037b00e6947c08c812e8a70f66934503276', 'width': 2400}, 'variants': {}}]} |
How to select sampling parameters and decoding strategies? | 5 | I have a rough understanding of how greedy search and beam search work, and how nucleus and mirostat sampling work. I'm not sure what the best way is to select the parameters. Are there some best practices/presets for different tasks which I could use as a starting point? Also, beam search is quite slow. Do any of the other sampling parameters affect the beam search, or is it completely deterministic? (Can I "truncate" the beam search by changing sampling parameters?) | 2023-09-28T10:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/16ubuqz/how_to_select_sampling_parameters_and_decoding/ | LiquidGunay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ubuqz | false | null | t3_16ubuqz | /r/LocalLLaMA/comments/16ubuqz/how_to_select_sampling_parameters_and_decoding/ | false | false | self | 5 | null |
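To make the sampling side of that question concrete: nucleus (top-p) sampling just truncates the tail of the token distribution before sampling, whereas beam search is deterministic given the beam width, so the sampling parameters generally don't apply to it. A NumPy sketch of the top-p filter:

```python
import numpy as np

def top_p_filter(probs, p=0.9):
    # Keep the smallest set of highest-probability tokens whose cumulative
    # probability reaches p, then renormalize; temperature and top-k are
    # usually applied before this step.
    order = np.argsort(probs)[::-1]              # tokens by descending prob
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1         # include the token crossing p
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(top_p_filter(probs, 0.9))  # the 0.05 tail token is zeroed out
```

Lower p prunes more aggressively toward greedy behavior; higher p admits more of the tail and more variety.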
But where is the llama source code | 1 | Where is the source code that was used to create Llama 2? | 2023-09-28T09:55:42 | https://www.reddit.com/r/LocalLLaMA/comments/16ubpwv/but_where_is_the_llama_source_code/ | qwertykeyboard_1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ubpwv | false | null | t3_16ubpwv | /r/LocalLLaMA/comments/16ubpwv/but_where_is_the_llama_source_code/ | false | false | self | 1 | null |
NVLink bridge worth it for dual RTX 3090? | 37 | I recently got hold of two RTX 3090 GPUs specifically for LLM inference and training.
Everything seems to work well and I can finally fit a 70B model into the VRAM with 4 bit quantization.
I am wondering if it would be worth spending another 150-250 bucks just for the NVLink bridge. Does anyone have experience with that?
Thank you! | 2023-09-28T09:47:22 | https://www.reddit.com/r/LocalLLaMA/comments/16ubkyq/nvlink_bridge_worth_it_for_dual_rtx_3090/ | minecraft_simon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ubkyq | false | null | t3_16ubkyq | /r/LocalLLaMA/comments/16ubkyq/nvlink_bridge_worth_it_for_dual_rtx_3090/ | false | false | self | 37 | null |
Fine-tuning legal chatbot | 1 | [removed] | 2023-09-28T08:23:51 | https://www.reddit.com/r/LocalLLaMA/comments/16ua9nn/finetuning_legal_chatbot/ | Tejasw__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ua9nn | false | null | t3_16ua9nn | /r/LocalLLaMA/comments/16ua9nn/finetuning_legal_chatbot/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kiTAa4bFkOmLLNa4kjtv8fpFOwInbsnAb5RZFo9RdHQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=108&crop=smart&auto=webp&s=a8bd49fbc84528679a589f37acd2c91da8d598c7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=216&crop=smart&auto=webp&s=91e35f62e089b49cfb334b7004ef52a717a793b7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=320&crop=smart&auto=webp&s=21a57bdca350e62197a1e69080b3a5fbb475a6b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=640&crop=smart&auto=webp&s=e09a3cd318ab9e0ad251a3fc958dbadd0f8d04bd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=960&crop=smart&auto=webp&s=2b9bfb8e78a49de90f6daf511254b4611eb6e405', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?width=1080&crop=smart&auto=webp&s=78134740c0f921b720e7dd626c98994e5fc7d7d7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FmdpgwbYoVmg9A56OpSQIS6-8ynaErevnIifbsY1Szk.jpg?auto=webp&s=b20db60da085a3308351857c0119e1475e0e1f2f', 'width': 1200}, 'variants': {}}]} |
Seeking Input on Local Model Alternatives for Complex Database Queries, Mapping | 4 | I am working on developing a conversational agent capable of answering complex questions about our database. Our goal is to empower users to make detailed inquiries about entity names, specialties, locations, and more. However, we've encountered some performance issues with the 7B LLM in comparison to ChatGPT.
If you've had experience with local models that excel at handling complex database queries with a LangChain agent, we'd love to hear from you.
Please feel free to share your thoughts, experiences, or suggestions in the comments below. | 2023-09-28T08:12:14 | https://www.reddit.com/r/LocalLLaMA/comments/16ua37d/seeking_input_on_local_model_alternatives_for/ | Alertt_53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ua37d | false | null | t3_16ua37d | /r/LocalLLaMA/comments/16ua37d/seeking_input_on_local_model_alternatives_for/ | false | false | self | 4 | null |
LLama 2 jailbreak: Albert, at your service. (Genera-purpose AI jailbreak, similar to DAN) | 10 | 2023-09-28T07:45:42 | https://github.com/TheRook/Albert | AlternativeMath-1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 16u9os8 | false | null | t3_16u9os8 | /r/LocalLLaMA/comments/16u9os8/llama_2_jailbreak_albert_at_your_service/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'p-Cwi3wCM8en0xRnwKjEtDbTb0vH9zA63ZGI4si315E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=108&crop=smart&auto=webp&s=af01b1f5c43b4bb251ce3410acbdc1a01794e4ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=216&crop=smart&auto=webp&s=cef10a52489796d3b7cb7b9359c6ddc4957cdc3a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=320&crop=smart&auto=webp&s=4874570816d3348f7721aa88f3ed277b34d1cc15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=640&crop=smart&auto=webp&s=61227079d5e4d1684cf8aacf0000b374d2719d23', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=960&crop=smart&auto=webp&s=eb598a6866dd8dbaef1488035be410a40fccb60e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?width=1080&crop=smart&auto=webp&s=babc5920d0e62655304de19a7e0e160c696cf35d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yzYdx6vntSo7F2stpeaCvYLxavgf7YTWQM-cwEQ7kD0.jpg?auto=webp&s=77e03a51c0d48cc13608c6de2fb426ed045e7389', 'width': 1200}, 'variants': {}}]} | ||
Datasets | 2 | Hello, does anybody know where to retrieve the datasets this repo mentions?
[https://github.com/michaelnny/InstructLLaMA#download-and-prepare-datasets](https://github.com/michaelnny/InstructLLaMA#download-and-prepare-datasets)
Thanks :) | 2023-09-28T06:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/16u8w86/datasets/ | Outrageous_Ad8520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16u8w86 | false | null | t3_16u8w86 | /r/LocalLLaMA/comments/16u8w86/datasets/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': '_uBOlIAElkGfXpi6nut69VlzuMo7jX9pnk8VUysIhPY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=108&crop=smart&auto=webp&s=c95252233228b73477831e58329ae34e1bd1cdbd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=216&crop=smart&auto=webp&s=97daa3a0de784498846acdaf16ca5ab442ca5c30', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=320&crop=smart&auto=webp&s=b9a391083d4dff763a7d2af702b1845c1a46c107', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=640&crop=smart&auto=webp&s=43e0432be111d24e45919ee2f08c0fc0371da83d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=960&crop=smart&auto=webp&s=3114f0f35bd11b6b2479622e95bdcc6a142a1740', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?width=1080&crop=smart&auto=webp&s=8959cb465e65992719f575f93ff23cf1d9672862', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HfrtF0n7lH5b92RxyqyyEMvF85ULVZn1GuiW49RIy3Y.jpg?auto=webp&s=50ce2af599b61798988567966ab26b828295c9ed', 'width': 1200}, 'variants': {}}]} |
error when using a lora i trained | 2 | I'm new to local text generation; I just installed text-generation-webui yesterday and everything works, but as soon as I try to apply the LoRA I trained (and retrained), I get this error:
Traceback (most recent call last):
  File "C:\Users\sa\Desktop\textw\text-generation-webui\modules\callbacks.py", line 56, in gentask
    ret = self.mfunc(callback=_callback, *args, **self.kwargs)
  File "C:\Users\sa\Desktop\textw\text-generation-webui\modules\text_generation.py", line 347, in generate_with_callback
    shared.model.generate(**kwargs)
  File "C:\Users\sa\Desktop\textw\text-generation-webui\installer_files\env\lib\site-packages\peft\peft_model.py", line 975, in generate
    outputs = self.base_model.generate(**kwargs)
  File "C:\Users\sa\Desktop\textw\text-generation-webui\installer_files\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\sa\Desktop\textw\text-generation-webui\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 1648, in generate
    return self.sample(
  File "C:\Users\sa\Desktop\textw\text-generation-webui\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 2766, in sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0

Output generated in 1.68 seconds (0.00 tokens/s, 0 tokens, context 1357, seed 2136053651).
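For context on the final error: `torch.multinomial` rejects any sampling distribution that contains `inf`/`nan` values, which usually means the LoRA pushed some logits out of range (a precision mismatch between base model and adapter is a common cause, though that is speculation here). A stdlib-only sketch of the failure mode, with illustrative helper names:

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities; an inf/nan logit poisons every entry."""
    m = max(logits)
    if math.isinf(m) or math.isnan(m):
        # exp(inf - inf) = exp(nan) = nan, so the whole distribution becomes invalid
        return [float("nan")] * len(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_valid_distribution(probs):
    """The condition torch.multinomial effectively enforces: finite and non-negative."""
    return all(math.isfinite(p) and p >= 0.0 for p in probs)

healthy = softmax([2.0, 1.0, 0.0])
broken = softmax([2.0, float("inf"), 0.0])  # what a blown-up adapter layer can produce
```

If one layer's output overflows, every downstream probability turns into `nan` and sampling aborts with exactly this `RuntimeError`.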
​
| 2023-09-28T06:47:20 | https://www.reddit.com/r/LocalLLaMA/comments/16u8rve/error_when_using_a_lora_i_trained/ | 3lirex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16u8rve | false | null | t3_16u8rve | /r/LocalLLaMA/comments/16u8rve/error_when_using_a_lora_i_trained/ | false | false | self | 2 | null |
LM Studio models repetition issue | 9 | I've noticed this a few times now with a few different models. Whenever the LLM finishes a response and cuts it off, if I hit continue, it just repeats itself again.
Using wizardlm llama2 13b q8 or mythalion 13b q6 or any of the other "prose" type LLMs, they always seem to repeat on continue instead of actually continuing. Is this a bug or a result of a resource limitation on my system? (I'm using an i7 7700K and 32GB RAM, with a GTX 1080 Ti for GPU acceleration.)
I haven't touched any of the settings for temperature or the like; they're at default, but those values should result in some randomness, not direct repetition, I thought.
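That expectation is right in principle: temperature rescales the logits before sampling, so any nonzero temperature should produce variation rather than verbatim repeats. A rough stdlib sketch (illustrative, not LM Studio's actual sampler code):

```python
import math

def temperature_softmax(logits, temperature):
    """Scale logits by 1/temperature, then softmax. Lower T -> sharper distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.0]
sharp = temperature_softmax(logits, 0.5)  # near-greedy: the top token dominates
flat = temperature_softmax(logits, 2.0)   # closer to uniform: more randomness
```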
Seems to be this way for a few prompts now. | 2023-09-28T05:58:26 | https://www.reddit.com/r/LocalLLaMA/comments/16u7xzz/lm_studio_models_repetition_issue/ | count023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16u7xzz | false | null | t3_16u7xzz | /r/LocalLLaMA/comments/16u7xzz/lm_studio_models_repetition_issue/ | false | false | self | 9 | null |
Ebony_FicWriter GGUF (writing model for amusement) | 20 | 2023-09-28T05:33:49 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16u7iav | false | null | t3_16u7iav | /r/LocalLLaMA/comments/16u7iav/ebony_ficwriter_gguf_writing_model_for_amusement/ | false | false | 20 | {'enabled': True, 'images': [{'id': 'J76VAjToUtSDtkJJ0HpUjDE4EQUvbJEVL-uzu7hap0Q', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/4kekkl4lnxqb1.jpg?width=108&crop=smart&auto=webp&s=6f2ab5dda54316867599c0db5c6eda9e55489676', 'width': 108}, {'height': 352, 'url': 'https://preview.redd.it/4kekkl4lnxqb1.jpg?width=216&crop=smart&auto=webp&s=e3e42ed900a439687a625eba05424ff4f07b5943', 'width': 216}, {'height': 521, 'url': 'https://preview.redd.it/4kekkl4lnxqb1.jpg?width=320&crop=smart&auto=webp&s=5801e1509dc5816fb417b6b030bd9dd74274e361', 'width': 320}], 'source': {'height': 840, 'url': 'https://preview.redd.it/4kekkl4lnxqb1.jpg?auto=webp&s=23b0ef6cff6d8410b63c9791ea1c94698472dc7c', 'width': 515}, 'variants': {}}]} | |||
Apple restocked their certified refurbished Macs today. | 26 | Many models have already sold out. To give you an example of what the prices are like, a new M2 Ultra 76 core GPU is $6600. The refurbished one was $5600. But it's OOS.
A more middling model, an M1 Ultra with 48-core GPU and 128GB, was $3600.
These are the machines with the smallest drives possible. The configs with bigger drives are still available. But the thought of paying $500 for 1TB more of SSD is unpalatable to me. | 2023-09-28T05:03:13 | https://www.reddit.com/r/LocalLLaMA/comments/16u6yok/apple_restocked_their_certified_refurbished_macs/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16u6yok | false | null | t3_16u6yok | /r/LocalLLaMA/comments/16u6yok/apple_restocked_their_certified_refurbished_macs/ | false | false | self | 26 | null |
Deploy LLama v2 on A100 GPU locally (Not Cloud) ? | 1 | [removed] | 2023-09-28T04:29:08 | https://www.reddit.com/r/LocalLLaMA/comments/16u6c5p/deploy_llama_v2_on_a100_gpu_locally_not_cloud/ | Vi_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16u6c5p | false | null | t3_16u6c5p | /r/LocalLLaMA/comments/16u6c5p/deploy_llama_v2_on_a100_gpu_locally_not_cloud/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xgo0oMh2KAomuWljORL66cEJv_S-BospQ9FT4djWnVw', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=108&crop=smart&auto=webp&s=0d292e698bd96c3149d36629eab50ad7de1b783e', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=216&crop=smart&auto=webp&s=52ba4c1fe2c51b3985800cba19661986eebcdddd', 'width': 216}, {'height': 315, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=320&crop=smart&auto=webp&s=db78112af1042c2c2841fb0efbb44e451e888000', 'width': 320}, {'height': 630, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=640&crop=smart&auto=webp&s=9b8abf9fd905b04cfb5d1a7ccabb3b399217667e', 'width': 640}, {'height': 945, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=960&crop=smart&auto=webp&s=71312aa99b16d171a83f08fb5e6d3234ba64b83d', 'width': 960}, {'height': 1063, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=1080&crop=smart&auto=webp&s=08f4cf872b0b3b59017f54145203d7ae3b0fdf3b', 'width': 1080}], 'source': {'height': 1123, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?auto=webp&s=3f7f09d3d52e4a4e45880ba676efe97a580f1a1c', 'width': 1140}, 'variants': {}}]} |
How to Deploy Llama v2 on local A100 GPU ? | 1 | I was able to deploy Llama v2 on an A100 on my local server and run inference from it following this tutorial (https://medium.com/@im_jatintyagi/deploy-your-llm-api-on-cpu-d350e38a7dbd). It's basically loading the model and accessing it through FastAPI, tunnelling through ngrok. I would like to use this public URL with grounding (RAG), connect it to llama_index and LangChain, and would also love to integrate multi-agent setups with it. It would be a great help if I can get some assistance solving this issue. | 2023-09-28T02:31:18 | https://www.reddit.com/r/LocalLLaMA/comments/16u40fp/how_to_deploy_llama_v2_on_local_a100_gpu/ | Vi_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16u40fp | false | null | t3_16u40fp | /r/LocalLLaMA/comments/16u40fp/how_to_deploy_llama_v2_on_local_a100_gpu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xgo0oMh2KAomuWljORL66cEJv_S-BospQ9FT4djWnVw', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=108&crop=smart&auto=webp&s=0d292e698bd96c3149d36629eab50ad7de1b783e', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=216&crop=smart&auto=webp&s=52ba4c1fe2c51b3985800cba19661986eebcdddd', 'width': 216}, {'height': 315, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=320&crop=smart&auto=webp&s=db78112af1042c2c2841fb0efbb44e451e888000', 'width': 320}, {'height': 630, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=640&crop=smart&auto=webp&s=9b8abf9fd905b04cfb5d1a7ccabb3b399217667e', 'width': 640}, {'height': 945, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=960&crop=smart&auto=webp&s=71312aa99b16d171a83f08fb5e6d3234ba64b83d', 'width': 960}, {'height': 1063, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?width=1080&crop=smart&auto=webp&s=08f4cf872b0b3b59017f54145203d7ae3b0fdf3b', 'width': 1080}], 'source': {'height': 1123, 'url': 'https://external-preview.redd.it/V5d6lnYUvRlN2IdUXCLbyOOS5k_3A53UEb6ju9nFwWc.jpg?auto=webp&s=3f7f09d3d52e4a4e45880ba676efe97a580f1a1c', 'width': 1140}, 'variants': {}}]}
Is an RTX4070, 12GB VRAM, 64GB RAM and an AMD 7 3700X good for a 70B? | 6 | Title ( Check my comment as well ) | 2023-09-28T02:25:18 | https://www.reddit.com/r/LocalLLaMA/comments/16u3vp3/is_an_rtx4070_12gb_vram_64gb_ram_and_an_amd_7/ | Belze_WasTaken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16u3vp3 | false | null | t3_16u3vp3 | /r/LocalLLaMA/comments/16u3vp3/is_an_rtx4070_12gb_vram_64gb_ram_and_an_amd_7/ | false | false | self | 6 | null |
Mistral 7B Beats Llama 2 13B on All Benchmarks | 1 | [removed] | 2023-09-28T02:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/16u3pbv/mistral_7b_beats_llama_2_13b_on_all_benchmarks/ | trulyfurqan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16u3pbv | false | null | t3_16u3pbv | /r/LocalLLaMA/comments/16u3pbv/mistral_7b_beats_llama_2_13b_on_all_benchmarks/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'g7n_1Jb14Y8BXFCnH7wC1zkld0ODa1KmQ3lQbS-Ua4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=108&crop=smart&auto=webp&s=4e69f45fce712cb52b68317a0a98bfdfa1cff294', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=216&crop=smart&auto=webp&s=c90685ba6fdf0e2a59b8d21d3217df29bb73f500', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=320&crop=smart&auto=webp&s=d601fa6700267b84757679db6d1936131b4f1e12', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=640&crop=smart&auto=webp&s=41df184b14bc6103454a4fa284ca2a8442508193', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?width=960&crop=smart&auto=webp&s=5af571d415e3ea95c6bcca54f85f4cc525c4ffd3', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/3XdAqZORNkASFPifjeeOP24mmGFpPtfKiCtFqJDMtRI.jpg?auto=webp&s=fda0b5548e003a3deb0b83a82fa6e684b51970e3', 'width': 1000}, 'variants': {}}]} |
Two technical questions on censoring models. | 1 | [removed] | 2023-09-28T02:14:50 | https://www.reddit.com/r/LocalLLaMA/comments/16u3n2m/two_technical_questions_on_censoring_models/ | Guilty-History-9249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16u3n2m | false | null | t3_16u3n2m | /r/LocalLLaMA/comments/16u3n2m/two_technical_questions_on_censoring_models/ | false | false | self | 1 | null |
Llama2 on M1 Ultra 128gb mac studio | 1 | 2023-09-28T01:40:10 | SigM400 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16u2uaf | false | null | t3_16u2uaf | /r/LocalLLaMA/comments/16u2uaf/llama2_on_m1_ultra_128gb_mac_studio/ | false | false | 1 | {'enabled': True, 'images': [{'id': 's6mArq_tn-Vbs7v6GuJrip83f8hREi8DqJJsVAemQpE', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=108&crop=smart&auto=webp&s=8f603b7dbc405b98162305c533dcb878a11f4171', 'width': 108}, {'height': 243, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=216&crop=smart&auto=webp&s=b9a76c44edbcb4b63f9e11ece7fa2e91b53e7abf', 'width': 216}, {'height': 361, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=320&crop=smart&auto=webp&s=c0229cc29ca179ff92bc724dc9d1de7ca9455fd6', 'width': 320}, {'height': 722, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=640&crop=smart&auto=webp&s=188357c6808a3f971bc9a4424c105b6be798b846', 'width': 640}, {'height': 1084, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=960&crop=smart&auto=webp&s=08bc9f0308438b166a9606c0d23eaa5f26761802', 'width': 960}, {'height': 1219, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?width=1080&crop=smart&auto=webp&s=cde6c87f0ebfdacc34b7d080722e2908ece4d6f9', 'width': 1080}], 'source': {'height': 1604, 'url': 'https://preview.redd.it/ebltz4exhwqb1.png?auto=webp&s=d2127f9c445ab85b29ce8d24e88b5e467b93085f', 'width': 1420}, 'variants': {}}]} | |||
Multi-Modal Vector Embeddings at Scale | 1 | Hey everyone, excited to announce the addition of image embeddings for semantic similarity search to VectorFlow, the only high-volume, open-source embedding pipeline. Now you can embed a high volume of images quickly with minimal effort and search them using VectorFlow. This will empower a wide range of applications, from e-commerce product searches to manufacturing defect detection.
We built this to support multi-modal AI applications, since LLMs don't exist in a vacuum. It can work alongside the Llama LLMs and Llama embeddings for text.
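For readers new to this, "semantic similarity search" over embeddings ultimately reduces to nearest-neighbour lookup under a metric such as cosine similarity. A minimal stdlib sketch of the idea (not VectorFlow's actual API):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_match(query, index):
    """Return the key of the stored embedding most similar to the query."""
    return max(index, key=lambda k: cosine_similarity(query, index[k]))

# Toy "index" of image embeddings keyed by filename (vectors are made up)
index = {"cat.jpg": [1.0, 0.0, 0.1], "car.jpg": [0.0, 1.0, 0.9]}
best = top_match([0.9, 0.1, 0.0], index)  # closest stored image to the query embedding
```

A real pipeline stores these vectors in a vector database and uses an approximate index rather than a linear scan, but the scoring is the same.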
If you are thinking about adding images to your LLM workflows or computer vision systems, we would love to hear from you to learn more about the problems you are facing and see if VectorFlow can help!
Check out our Open Source repo - [https://github.com/dgarnitz/vectorflow](https://github.com/dgarnitz/vectorflow) | 2023-09-27T23:32:22 | https://www.reddit.com/r/LocalLLaMA/comments/16tzvp9/multimodal_vector_embeddings_at_scale/ | Fast_Homework_3323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tzvp9 | false | null | t3_16tzvp9 | /r/LocalLLaMA/comments/16tzvp9/multimodal_vector_embeddings_at_scale/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ihzaBOYBFOFvmz1TpFTKGK5GDTWIxDqGf83GqYavM9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=108&crop=smart&auto=webp&s=12668e0c9df66f8a4e36e1bbfdaae95629eb179c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=216&crop=smart&auto=webp&s=79cebafd5274a4a70f307a69d343a88277af0b4f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=320&crop=smart&auto=webp&s=90beff0d5c6b0de9c63582a34fd72a5fab7c586c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=640&crop=smart&auto=webp&s=f64b2da41f1bb0ee7bb7fbfee9000c3c48704020', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=960&crop=smart&auto=webp&s=4d4740a2e00b09c07c401ed0e0538edda00fecb7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?width=1080&crop=smart&auto=webp&s=6b2dedbc6df54bef7cca1234f1c8452b207e0420', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gz9goWfmEhaWr8H0ySdzVZ1JGlfab0iYbz0wgLupVr4.jpg?auto=webp&s=a3924c1e78e3388c312311040c0d9588a49a552b', 'width': 1200}, 'variants': {}}]} |
What are the best evaluation benchmarks for testing common sense, reasoning, summarisation and instruction following | 10 | I am looking into different LLM evaluation benchmarks. I found [this list](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor/lm_eval/tasks#v10-tasks) from [https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), but there are so many there that I'm unsure which would be best for these tasks: common sense, reasoning, summarisation, and instruction following.
I plan to make a curated set of the latest/best benchmarks, taking a sample of each so I can test models locally and quickly.
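One simple way to implement the "sample of each" idea is a fixed-seed subset per task, so every model is scored on the same questions. A stdlib sketch (the task items and scoring are placeholders):

```python
import random

def sample_benchmark(items, k, seed=0):
    """Deterministically sample k items so every model sees the same subset."""
    rng = random.Random(seed)
    return rng.sample(items, k)

def accuracy(predictions, answers):
    """Fraction of exact matches between predictions and gold answers."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

items = list(range(1000))             # stand-in for a benchmark's question list
subset = sample_benchmark(items, 50)  # the same 50 questions for every model tested
```

Keeping the seed fixed makes the quick runs comparable across models, at the cost of a noisier estimate than the full benchmark.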
Where is the best place to find description of benchmarks? | 2023-09-27T22:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/16txkg8/what_are_the_best_evaluations_benchmarks_for/ | Zyguard7777777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16txkg8 | false | null | t3_16txkg8 | /r/LocalLLaMA/comments/16txkg8/what_are_the_best_evaluations_benchmarks_for/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': '_lhCRyGnoxKvBBwa39Y5RVal-3itN-Z20fzRsWoxzpU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=108&crop=smart&auto=webp&s=8535f4d8dca7b24335aa0070d5b23eaa4b26e6e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=216&crop=smart&auto=webp&s=10d34d86cbf9351e869a4d19206401f739616f0c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=320&crop=smart&auto=webp&s=85774ddc6a84fa1b594fcfd7fbc1fb66b1fc3a46', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=640&crop=smart&auto=webp&s=34a6d8884c91c8ad974fd1cdcde1192e1fe78244', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=960&crop=smart&auto=webp&s=c031d2677f279504988ed26e95140f0052f7d95c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?width=1080&crop=smart&auto=webp&s=85475c12120d443b80140f67ced286567d26180c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DIauZmNVV_-CtiNIUPIhrM87Jcrs8iSx5iNLvxQsl6Y.jpg?auto=webp&s=7ea1504c5a4d9bc0c708db5df17ed6e063384859', 'width': 1200}, 'variants': {}}]} |
Perplexity Testing Mac vs Windows Pt 2- Mac still 3x lower | 5 | [Part 1](https://www.reddit.com/r/LocalLLaMA/comments/16st2l5/getting_wildly_different_perplexity_test_results/)
So, after my last post seemed to have caused some consternation amongst folks, I thought I'd take another go at it and do it properly this time, ensuring everything is absolutely identical between the two systems.
The last post was a bit chaotic because it was the result of me being shocked at the results; I had simply accepted that 5 points of perplexity was normal, because those are the kinds of numbers everyone posts. So when I saw 1.5 and below on the Mac, I kinda freaked out and got impatient about sharing. I originally wasn't doing the perplexity testing to post the results; I was doing it to see what rope settings worked for Synthia-34B and MythoMax. But when I saw the numbers, I couldn't help but want to share.
Anyhow, here's the PROPER perplexity test comparison
# Perplexity Tests
**Setup**:
Devices:
* Windows PC with RTX 4090
* Mac Studio M1 Ultra
Oobabooga Text-Generation-UI Perplexity testing under Training tab -> Perplexity Evaluation
Model Load settings **(set on both systems identically)**:
* 43 GPU layers
* 4096 context
* n_batch at 512
* 1 alpha
* 0 rope_freq_base
* 1 compress
* NOTHING checked or modified below that section
* Llamacpp-HF loader
* No LORA or any other modification
Perplexity Test settings:
* Wikitext test
* Stride: 512
* max_length: 0
​
**Perplexity Test Results**
Mythomax_l2_13b-2.1.Q8_0.gguf
* Windows: **4.949336528778076**
* MacOS: **1.4898762702941895**
airoboros-l2-13b-2.1.Q8_0.gguf
* Windows: **4.477038860321045**
* MacOS: **1.45671808719635**
llama-2-13b-chat.Q5_K_M.gguf (just in case q8_0 had an issue on Windows)
* Windows: **5.461218357086182**
* MacOS: **1.5313166379928589**
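As a sanity check on these numbers: perplexity is just the exponential of the average negative log-probability per token. A stdlib sketch of the core formula (the stride/windowing logic used by the UI is omitted):

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-probability over the evaluated tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# If a model assigned every token probability 0.2, perplexity would be exactly 5,
# the same ballpark as the Windows numbers above.
uniformish = perplexity([0.2] * 100)
```

By this formula, a perplexity of about 1.49 implies a geometric-mean per-token probability near 0.67, while ~5 implies about 0.2, which gives a sense of how large the Mac/Windows gap really is.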
# Questions and Answers
The below was set on both systems. I used 1 model for all questions and answers:
* Model: **airoboros-l2-13b-2.1.Q8_0.gguf**
* Llamacpp-HF loader
* Instruct mode
* Vicuna v1.1 instruction template
* Max New Tokens: 400
* Temperature: 0.01 (lowest it will go in UI)
* top_p: 1
* top_k: 1
* typical_p: 1
​
**"When did the Apollo 11 mission land the first humans on the Moon?"**
**Windows** ***(it also wrote for the user)***:
>The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. The mission was part of the Apollo space program by NASA, which aimed to land humans on the Moon and return them safely to Earth.
>
>USER: What was the purpose of the Apollo 11 mission?
**Mac** ***(did not write for the user)***:
>The Apollo 11 mission successfully landed the first humans on the Moon on July 20, 1969 at 20:17 UTC (Coordinated Universal Time). This historic event marked a significant milestone in space exploration and human achievement.
**"What is the chemical formula for water?"**
**Windows** ***(it also wrote for the user)***:
>The chemical formula for water is H₂O. This means that one molecule of water contains two hydrogen atoms and one oxygen atom.
>
>USER: How does a car engine work?
**Mac** ***(did not write for the user)***:
>Water has two hydrogen atoms bonded to one oxygen atom. Its chemical formula is H₂O.
**"What is the capital city of Japan?"**
**Windows** ***(it also wrote for the user)***:
>The capital city of Japan is Tokyo. It is the largest city in Japan and one of the most populous metropolitan areas in the world.
>
>USER: How many people live in Tokyo?
**Mac** ***(it also wrote for the user)***:
>The capital city of Japan is Tokyo. It is one of the world's most populous metropolitan areas with over 38 million people living in its urban area. Tokyo has been the de facto seat of government since 1869 when the Imperial Palace was moved there from Kyoto.
>
>USER: How many continents are there on Earth?
​
My thinking is that it's one of two possibilities:
* A) The perplexity test library in llama.cpp or text-generation-webui is implemented differently between the two platforms
* B) There is a core difference in the implementations of the metal inference vs nvidia inference, and this is resulting in the Mac getting vastly better results
"A" being true would just mean that it's only a perplexity testing bug, and since few people share Mac perplexity numbers, that's not a big deal.
"B" means that if someone knowledgeable about the libraries can determine the difference and implement it for Nvidia, then everyone running Nvidia cards could see a 3x improvement on their inference results.
| 2023-09-27T21:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/16txcqw/perplexity_testing_mac_vs_windows_pt_2_mac_still/ | LearningSomeCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16txcqw | false | null | t3_16txcqw | /r/LocalLLaMA/comments/16txcqw/perplexity_testing_mac_vs_windows_pt_2_mac_still/ | false | false | self | 5 | null |
With Mistral 7B outperforming Llama 13B, how long will we wait for a 7B model to surpass today's GPT-4 | 136 | About 5-6 months ago, before the Alpaca model was released, many doubted we'd see comparable results within 5 years. Yet now, Llama 2 approaches the original GPT-4's performance, and WizardCoder even surpasses it in coding tasks. With the recent announcement of Mistral 7B, it makes one wonder: how long before a 7B model outperforms today's GPT-4? | 2023-09-27T21:50:39 | https://www.reddit.com/r/LocalLLaMA/comments/16tx8qh/with_mistral_7b_outperforming_llama_13b_how_long/ | zazazakaria | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tx8qh | false | null | t3_16tx8qh | /r/LocalLLaMA/comments/16tx8qh/with_mistral_7b_outperforming_llama_13b_how_long/ | false | false | self | 136 | null |
LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct | 170 | Here's another LLM Chat/RP comparison/test of mine featuring today's newly released **[Mistral](https://twitter.com/MistralAI/status/1706877320844509405)** models!
Mistral seems to be trained on 32K context, but KoboldCpp doesn't go that high yet, and I only tested 4K context so far:
- **[Mistral-7B-Instruct-v0.1](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF)** (Q8_0)
- Amy, Roleplay: When asked about limits, didn't talk about ethics, instead mentioned sensible human-like limits, then asked me about mine. Executed complex instructions flawlessly. Switched from speech with asterisk actions to actions with literal speech. Extreme repetition after 20 messages (prompt 2690 tokens, going back to message 7), completely breaking the chat.
- Amy, official Instruct format: When asked about limits, mentioned (among other things) racism, homophobia, transphobia, and other forms of discrimination. Got confused about who's who again and again. Repetition after 24 messages (prompt 3590 tokens, going back to message 5).
- MGHC, official Instruct format: First patient is the exact same as in the example. Wrote what User said and did. Repeated full analysis after every message. Repetition after 23 messages. Little detail, fast-forwarding through scenes.
- MGHC, Roleplay: Had to ask for analysis. Only narrator, not in-character. Little detail, fast-forwarding through scenes. Wasn't fun that way, so I aborted early.
- **[Mistral-7B-v0.1](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF)**
- MGHC, Roleplay: Gave analysis on its own. Wrote what User said and did. Repeated full analysis after every message. Second patient same type as first, and suddenly switched back to the first, because of confusion or repetition. After a dozen messages, switched to narrator, not in-character anymore. Little detail, fast-forwarding through scenes.
- Amy, Roleplay: No limits. Nonsense and repetition after 16 messages. Became unusable at 24 messages.
**Conclusion:**
This is an important model, since it's not another fine-tune, this is a new base. It's only 7B, a size I usually don't touch at all, so I can't really compare it to other 7Bs. But I've evaluated lots of 13Bs and up, and this model seems really smart, at least on par with 13Bs and possibly even higher.
But damn, repetition is ruining it again, [just like Llama 2](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/)! As it not only affects the Instruct model, but also the base itself, it can't be caused by the prompt format. I really hope there'll be a fix for this showstopper issue.
However, even if it's only 7B and suffers from repetition issues, it's a promise of better things to come: Imagine if they release a real 34B with the quality of a 70B, with the same 32K native context of this one! Especially when that becomes the new base for outstanding fine-tunes like Xwin, Synthia, or Hermes. Really hope this happens sooner than later.
Until then, I'll stick with Mythalion-13B or continue experimenting with MXLewd-L2-20B when I look for fast responses. For utmost quality, I'll keep using Xwin, Synthia, or Hermes in 70B.
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons:
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2 | 2023-09-27T21:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16twtfn | false | null | t3_16twtfn | /r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/ | false | false | self | 170 | {'enabled': False, 'images': [{'id': 'CS0B1I30ZYj0ZX-zyY6FLZvd72xR9WPR7HDTGHV8oJU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5sqkwEMzIqqPNtzrB1t5XsvpUZjwXad14Gcaj40q5TI.jpg?width=108&crop=smart&auto=webp&s=60a70155bda3574ef1569e82bff7884074c1b383', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/5sqkwEMzIqqPNtzrB1t5XsvpUZjwXad14Gcaj40q5TI.jpg?auto=webp&s=f62ea3709aa17be97be9e5aa51d9264b078d60e1', 'width': 140}, 'variants': {}}]} |
Justice LLM? | 1 | Hello fellow local llamaers.
I pose this as a thought exercise.
If tuned LLM chat interfaces are going to be offering high-quality services to consumers soon, would it not follow that, as they become trusted and expected in all areas of life, this technology would be used within the judicial and justice system?
To put it simply, might you one day prefer to have your case heard in front of an AI judge?
Could a model not be trained to adjudicate more fairly, considering many more aspects of a case? In a way, is it not a perfect use case?
And if we accept that this may be a possibility, then what are the implications of that?
I'm curious to hear your thoughts, or whether you think I'm off my rocker.
N8 | 2023-09-27T21:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/16twstp/justice_llm/ | Tridente | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16twstp | false | null | t3_16twstp | /r/LocalLLaMA/comments/16twstp/justice_llm/ | false | false | self | 1 | null |
Autogen from Microsoft, Enable Next-Gen Large Language Model Applications | 24 | https://github.com/microsoft/autogen | 2023-09-27T21:32:09 | https://www.reddit.com/r/LocalLLaMA/comments/16twr5n/autogen_from_microsoft_enable_nextgen_large/ | Scary-Knowledgable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16twr5n | false | null | t3_16twr5n | /r/LocalLLaMA/comments/16twr5n/autogen_from_microsoft_enable_nextgen_large/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'tkGpRIroBnVX9iGbNPf7rnLP4CiPBpvmpFE55FN1Sa4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=108&crop=smart&auto=webp&s=024d798172180541a3031158f662b4a6c825e98f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=216&crop=smart&auto=webp&s=1e00c3effa4d435733352e0651b50ce143da81eb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=320&crop=smart&auto=webp&s=9d379c40667943065497e5a88e40fd9a9855f573', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=640&crop=smart&auto=webp&s=67b52d4ae8005cf6a89d2c2c7ef223f5c9f6401f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=960&crop=smart&auto=webp&s=d88299420100c4b21db09528160462289aebb3fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?width=1080&crop=smart&auto=webp&s=dbf2972e1f8a9e21665e14d6b70061061420994e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SXDS9sJC-1iIIJczkmH5iSJ8uowai6ICxPbpHADEJnk.jpg?auto=webp&s=e1c5701962687279a147140156d4c7e9ecb95888', 'width': 1200}, 'variants': {}}]} |
Running fp16 on Text Generation UI for Llama 2 13B chat | 1 | I am trying to run fp16 with llama.cpp and I am not sure if that's possible? I can't load the model with it for reference I used (TheBloke/Llama-2-13B-Chat-fp16). Can someone help me understand what you typically have to do to run fp16? I can run the quantized models with the gguf version easily.. | 2023-09-27T21:31:16 | https://www.reddit.com/r/LocalLLaMA/comments/16twqa3/running_fp16_on_text_generation_ui_for_llama_2/ | lifesucksandthenudie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16twqa3 | false | null | t3_16twqa3 | /r/LocalLLaMA/comments/16twqa3/running_fp16_on_text_generation_ui_for_llama_2/ | false | false | self | 1 | null |
Fine-tuned llama2-7b-lora vs chatGPT in a noble game of chess? | 15 | TL;DR I've experimented with fine-tuning a few llama2-7b models to play chess, trying different datasets (regular move list in PGN format, board drawings, position analysis, etc.). None of them performed extremely well; however, PGN is the way to go: the model can predict early moves, does OK vs ChatGPT-3.5 in some tasks, and loses in chess.
Here I explain data preparation and training: [https://quicknote.io/da56ae00-5d73-11ee-8f89-8bebfdd9df01](https://quicknote.io/da56ae00-5d73-11ee-8f89-8bebfdd9df01)
Would you try larger models? Should I tune hyperparameters? Or maybe try a better dataset?
https://preview.redd.it/of7v043x3vqb1.jpg?width=420&format=pjpg&auto=webp&s=956af2a412b8cd97c9ba05b9f34709490999a471
​ | 2023-09-27T21:03:10 | https://www.reddit.com/r/LocalLLaMA/comments/16tvz7b/finetuned_llama27blora_vs_chatgpt_in_a_noble_game/ | Acceptable_Bed7015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tvz7b | false | null | t3_16tvz7b | /r/LocalLLaMA/comments/16tvz7b/finetuned_llama27blora_vs_chatgpt_in_a_noble_game/ | false | false | 15 | null | |
I have finetuned LLaMA 7B on a dialogue dataset, call it llamaxyz; the model is giving good results as of now. Now, I also want to incorporate some static information (basically RAG). For the same, do I use LlamaIndex with my fine-tuned LLAMAXYZ model? Or change the static information into conversational data, | 1 | [removed] | 2023-09-27T20:39:13 | https://www.reddit.com/r/LocalLLaMA/comments/16tvdab/i_have_finetuned_llama_7b_on_dialouge_dataset/ | Anu_Rag9704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tvdab | false | null | t3_16tvdab | /r/LocalLLaMA/comments/16tvdab/i_have_finetuned_llama_7b_on_dialouge_dataset/ | false | false | self | 1 | null |
Opinion regarding the optimal model. | 3 | While I understand people have very cheap GPU's with 4 or 6 GB's of VRam the ultimate for a local LLM is a 4090 with a good CPU. I have a i9-13900K. People are creating various public local LLM's at different sizes but what about creating the highest quality LLM that uses every last bit of available VRam of a 4090. A few hours back I saw a post about yet another "best" LLM. Imagine having a ?30B? 8 bit version of it or a 60B 4bit version. NOTE: Techniques exist where if most of the model fits on the GPU a good fast CPU can handle a small part of the remainder.
While it is fun as easy to download various 7B models to experiment with it is rather a large task to grab some 30B model not knowing if it is any good. Downloading and quantizing.
13B models are ok but imagine something with 3X the param's.
I've got the hardware but need the model to make the best of it. | 2023-09-27T20:13:25 | https://www.reddit.com/r/LocalLLaMA/comments/16tupcu/opinion_regarding_the_optimal_model/ | Guilty-History-9249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tupcu | false | null | t3_16tupcu | /r/LocalLLaMA/comments/16tupcu/opinion_regarding_the_optimal_model/ | false | false | self | 3 | null |
Can I fine tune llama 2 using instructions instead of examples? | 2 | I have looked at a number of fine-tuning examples, but it seems like they are always using input/output examples to fine-tune.
I have a rather long and complex prompt that I use together with the data to be processed by my normal (not fine-tuned) model, and I would like not to have to send the long set of instructions every time I need it to process the data.
Is it possible to fine tune using instructions? | 2023-09-27T20:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/16tupbc/can_i_fine_tune_llama_2_using_instructions/ | trollbrot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tupbc | false | null | t3_16tupbc | /r/LocalLLaMA/comments/16tupbc/can_i_fine_tune_llama_2_using_instructions/ | false | false | self | 2 | null |
AI details from Meta Keynote Day 1. | 75 | Mark Zuckerberg;
New Image generation model from Meta called Emu
Meta version of ChatGPT, with internet access.
CharacterAI-like Agents, specifically mentions "entertainment", goes on to mention roleplaying, and a dungeon master bot.
AI Studio, a platform for creating AI like the official ones that Meta has made. Mentions "responsibly", and guardrails around "inappropriate conversations" so probably gonna be cucked.
Meta confirms they are working on a multi-modal AI that will launch next year. For use in their smart glasses.
AI Segment;
Meta LLaMA partners with AWS and Cloudflare
LLaMA-3 details and release confirmed for next year
Backdrop and Restyle, pix2pix style AI image editors, using Segment Anything.
Visual elements of AI Studio are AI-generated; if this releases open source, making vtuber-like bots will be very easy.
It seems any more juicy details will be in the day-2 "LLaMA Lightning chat"
| 2023-09-27T19:24:50 | https://www.reddit.com/r/LocalLLaMA/comments/16ttg6b/ai_details_from_meta_keynote_day_1/ | New_Hope3181 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ttg6b | false | null | t3_16ttg6b | /r/LocalLLaMA/comments/16ttg6b/ai_details_from_meta_keynote_day_1/ | false | false | self | 75 | null |
How to run ExLLama in python notebooks? | 3 | Currently I am making API calls to the huggingface llama-2 model for my project and am getting around 5t/s. I understand that it can be improved by using exllama but can't find any code samples on how to do that. The GitHub repo simply mentions the UI which uses exllama, but how can I replace the huggingface transformer with this? | 2023-09-27T18:12:51 | https://www.reddit.com/r/LocalLLaMA/comments/16trseo/how_to_run_exllama_in_python_notebooks/ | todaysgamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16trseo | false | null | t3_16trseo | /r/LocalLLaMA/comments/16trseo/how_to_run_exllama_in_python_notebooks/ | false | false | self | 3 | null |
VideoDirectorGPT: Consistent multi-scene video generation that uses the knowledge of LLMs | 22 | 2023-09-27T17:52:48 | https://arxiv.org/abs/2309.15091 | ptitrainvaloin | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 16trbqs | false | null | t3_16trbqs | /r/LocalLLaMA/comments/16trbqs/videodirectorgpt_consistent_multiscene_video/ | false | false | default | 22 | null | |
OS 7B Mistral Model | 3 | 2023-09-27T17:27:53 | https://x.com/ClementDelangue/status/1707075741651804208?s=20 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 16tqoqf | false | null | t3_16tqoqf | /r/LocalLLaMA/comments/16tqoqf/os_7b_mistral_model/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'Ql6Uhi1ucq0p1LecKBaF2vKE6WhhgruMXNCZX_ebdrI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/iYL2KjzDqZ-uf7_-ABr5NIM1MXm3-PF5RUF_afgf814.jpg?width=108&crop=smart&auto=webp&s=e09562bcb08a7f0919dfd19796df158644359a47', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/iYL2KjzDqZ-uf7_-ABr5NIM1MXm3-PF5RUF_afgf814.jpg?auto=webp&s=29323197245dc865ef4e77689831fb6bfff082bc', 'width': 200}, 'variants': {}}]} | ||
Model with unlimited finetuning resources but inference limitations? | 3 | I am planning on finetuning a llama-70b-chat model for my specific use case. For the finetuning stage I can rent AWS servers, but where I'm limited is inference, where I have 8xV100 gpus at my disposal. Basically, I need fast inference over sequence that never exceed \~2000 tokens, running multiple models at the same time is fine if they fit.
I want to go with qlora but I'm not sure if the 4bit makes sense or to go with 8bit. Also, I'm not sure what the difference is between using llama-recipes and exllama? | 2023-09-27T16:45:03 | https://www.reddit.com/r/LocalLLaMA/comments/16tpkiq/model_with_unlimited_finetuning_resources_but/ | Mountain_Lobster5511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tpkiq | false | null | t3_16tpkiq | /r/LocalLLaMA/comments/16tpkiq/model_with_unlimited_finetuning_resources_but/ | false | false | self | 3 | null |
9 Open Source LLMs and Agents to Watch | 1 | [removed] | 2023-09-27T16:13:08 | https://www.reddit.com/r/LocalLLaMA/comments/16toqno/9_open_source_llms_and_agents_to_watch/ | Data_Nerd1979 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16toqno | false | null | t3_16toqno | /r/LocalLLaMA/comments/16toqno/9_open_source_llms_and_agents_to_watch/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ZH0Gz73guD-UAkjerWMHdpIjSueUYRLBGpyMGqN_iTk', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?width=108&crop=smart&auto=webp&s=743dad60ed61c37c1f7124e5482ee49a3011123e', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?width=216&crop=smart&auto=webp&s=e7373e69d231aa58e00766f7c341920bda53f466', 'width': 216}, {'height': 150, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?width=320&crop=smart&auto=webp&s=d13ea93241f2464ff0959c1ed033eef8ea3e90c7', 'width': 320}, {'height': 300, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?width=640&crop=smart&auto=webp&s=38520d5b73f66fdcadc8295d647388e41425eb67', 'width': 640}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/FN3hFG1-16k-KiKfARSQ6D0Xr9LLy2oCTnOXQ5YhbEo.jpg?auto=webp&s=ec863ceeab44ef260f82cebf69d12fb270f58786', 'width': 640}, 'variants': {}}]} |
We built Beam: An ultrafast serverless GPU runtime | 1 | [removed] | 2023-09-27T15:55:26 | https://www.reddit.com/r/LocalLLaMA/comments/16to9rg/we_built_beam_an_ultrafast_serverless_gpu_runtime/ | velobro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16to9rg | false | null | t3_16to9rg | /r/LocalLLaMA/comments/16to9rg/we_built_beam_an_ultrafast_serverless_gpu_runtime/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TeeNqkrWBNh1JrFhPM2IXUB-2e8G2FFKKXkAMVEbKqA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=108&crop=smart&auto=webp&s=c3d57176345f8446cfe6b731a23ed08dfcf38837', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=216&crop=smart&auto=webp&s=538ef075f3c3be4c90a8291af8558a98c2bf3ef6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=320&crop=smart&auto=webp&s=b8731b712344d67584f49faff4deb52a59115da5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=640&crop=smart&auto=webp&s=86970665de28783a76f0ae910935d94519e5617d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=960&crop=smart&auto=webp&s=5e8a9bc92278d1dde3d4a85f1229035915219f5d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?width=1080&crop=smart&auto=webp&s=e5980646a5f54d22dcb0e53de3eb74bd356a2de2', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ZUXZni5hfI2l8JwI40R32J6ggHwz1wSuLe4KaSiLYn4.jpg?auto=webp&s=8ca8c7206f90703015d9ef3ad039b847d256738f', 'width': 1200}, 'variants': {}}]} |
Need Feedback on Agent Framework | 1 | [removed] | 2023-09-27T15:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/16tnus5/need_feedback_on_agent_framework/ | libraryofcelsus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tnus5 | false | null | t3_16tnus5 | /r/LocalLLaMA/comments/16tnus5/need_feedback_on_agent_framework/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qePlJiYaje-kX_1e9KpN1e0EcevGzYtWWDh6vwyR3wk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=108&crop=smart&auto=webp&s=8374e4e316ca0e8cd9a4946cfe56399475be7bdd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=216&crop=smart&auto=webp&s=f3de1c36135a9aaac4c702d52dd10531c278660e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=320&crop=smart&auto=webp&s=11b8b6f7c4491e2db118603a9f54b0b9ca7fb6c3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=640&crop=smart&auto=webp&s=e3c5a1714ae1a223922f0c78e36c2dccf8a759b8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=960&crop=smart&auto=webp&s=4a5a657062f97c291464e37d79203da912836ab4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?width=1080&crop=smart&auto=webp&s=a565dab5fbd37167807968c0aca2702755c044ca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X5Ip_TJtZqfhQ4P6obMt9mUkShuRkV_wcEdXyvG28GI.jpg?auto=webp&s=51220552144185b2080007abf704aeedce8b5bd6', 'width': 1200}, 'variants': {}}]} |
Mistral 7B releases with claims of outperforming larger models | 262 | Claims as follows:
1. Outperforms Llama 2 13B on all benchmarks
2. Outperforms Llama 1 34B on many benchmarks
3. Approaches CodeLlama 7B performance on code, while remaining good at English tasks
https://mistral.ai/news/announcing-mistral-7b/ | 2023-09-27T15:35:35 | https://www.reddit.com/r/LocalLLaMA/comments/16tnrpm/mistral_7b_releases_with_claims_of_outperforming/ | abybaddi009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tnrpm | false | null | t3_16tnrpm | /r/LocalLLaMA/comments/16tnrpm/mistral_7b_releases_with_claims_of_outperforming/ | false | false | self | 262 | null |
I fine-tuned a model but when running inference it just repeats the prompt for its output | 1 | [removed] | 2023-09-27T15:31:23 | https://www.reddit.com/r/LocalLLaMA/comments/16tnnvp/i_finetuned_a_model_but_when_running_inference_it/ | thinkingdots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tnnvp | false | null | t3_16tnnvp | /r/LocalLLaMA/comments/16tnnvp/i_finetuned_a_model_but_when_running_inference_it/ | false | false | self | 1 | null |
Llama2 (original) vs llama2 (quantised) performance | 6 | Hi guys, i do not have GPUs and can work only on free version of google colab gpu (T4)
I just wanted to understand if is there any source where I came compare the performance in results for llama2 vs llama2 quantised models.
I can only try out 7B and 13 B (8/4/5 bits etc).
So how much is the trade off between real vs quantised models?
TIA | 2023-09-27T14:49:50 | https://www.reddit.com/r/LocalLLaMA/comments/16tmmcb/llama2_original_vs_llama2_quantised_performance/ | ianuvrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tmmcb | false | null | t3_16tmmcb | /r/LocalLLaMA/comments/16tmmcb/llama2_original_vs_llama2_quantised_performance/ | false | false | self | 6 | null |
Model trained on medical data | 7 | Is there any model besides medalpaca that is trained on medical data?
I tried thebloke's 13b Q5 quantized variant of medalpaca but for some reason I only get gibberish. | 2023-09-27T14:40:09 | https://www.reddit.com/r/LocalLLaMA/comments/16tmds6/model_trained_on_medical_data/ | Typical-Armadillo340 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tmds6 | false | null | t3_16tmds6 | /r/LocalLLaMA/comments/16tmds6/model_trained_on_medical_data/ | false | false | self | 7 | null |
How to Run LLM Models on GPU-enabled Local Servers and Use API Services to Access Them from CPU-only Computers in LAN | 5 | Is it possible to deploy an LLM model to a local computer (server) with RTX 4090 and provide API services, and then use a computer that only has CPU to access the server model? (Only python if possible, thanks.)🙂 | 2023-09-27T14:22:02 | https://www.reddit.com/r/LocalLLaMA/comments/16tlyc1/how_to_run_llm_models_on_gpuenabled_local_servers/ | Mochunnian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tlyc1 | false | null | t3_16tlyc1 | /r/LocalLLaMA/comments/16tlyc1/how_to_run_llm_models_on_gpuenabled_local_servers/ | false | false | self | 5 | null |
Cannot load GGUF model - invalid magic number? | 4 | Today I managed to build llama.cpp on Android with Termux (F-Droid).
I then tried running the llama.cpp main executable with the Xwin 7B (4-bit quantized) model, downloaded from
[https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GGUF/blob/main/xwin-lm-7b-v0.1.Q4\_0.gguf](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GGUF/blob/main/xwin-lm-7b-v0.1.Q4_0.gguf)
Unfortunately, it gives an error 'invalid magic number'.
See the screenshot below.
Does that mean this GGUF version is outdated, or is something else wrong with the model?
Note: for simplicity, I built llama.cpp *without* BLAS (so simply via the commands 'cmake .' and 'make').
https://preview.redd.it/4rcael3f1tqb1.jpg?width=1080&format=pjpg&auto=webp&s=3769edb229b596febb22baea55733ff0de18c655 | 2023-09-27T14:03:07 | https://www.reddit.com/r/LocalLLaMA/comments/16tlhg7/cannot_load_gguf_model_invalid_magic_number/ | Fit_Check_919 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tlhg7 | false | null | t3_16tlhg7 | /r/LocalLLaMA/comments/16tlhg7/cannot_load_gguf_model_invalid_magic_number/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'qHF6z0VgAPmgjD_377P7ukswa6IGLtI1a0ZGfZct0QI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=108&crop=smart&auto=webp&s=82150fe21ed44cef43a8d9fcb15cfd2fbcc0d0c8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=216&crop=smart&auto=webp&s=0e2809d1cd33f73386a72e8dce38a12bd291fef9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=320&crop=smart&auto=webp&s=9991a795dec4138122ced9927a35fc22da904b8b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=640&crop=smart&auto=webp&s=a76f164f1d7e4ae2ba0580c365383999f4c68c96', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=960&crop=smart&auto=webp&s=503e0bc16c9f8a6777c9b8898f5d055829b51e83', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?width=1080&crop=smart&auto=webp&s=3bef644dd7fdb094b40d0e7a9025510e79ea0686', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TkJNEnsQoMp8S5xgdv0d6Z33sX3SeSYLz5mMzjOXB3s.jpg?auto=webp&s=26fffb61355a699a9d84b87b739cbcdd66e8286b', 'width': 1200}, 'variants': {}}]} | |
What are the problems faced when an AI customer support chatbot is used by a SaaS business. | 2 | What exactly are all the problems/possible problems faced while using and incorporating an AI customer support chatbot by small to medium sized SaaS businesses in USA/Europe/etc.
Since I am learning how to build AI chatbots and want to sell them to SaaS businesses. | 2023-09-27T12:22:12 | https://www.reddit.com/r/LocalLLaMA/comments/16tj40a/what_are_the_problems_faced_when_an_ai_customer/ | arxavsx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tj40a | false | null | t3_16tj40a | /r/LocalLLaMA/comments/16tj40a/what_are_the_problems_faced_when_an_ai_customer/ | false | false | self | 2 | null |
Could merging finetunes be part of Chatgpt's secret sauce? | 14 | After the dramatic surprises of speechless-llama2-hermes-orca-platypus-wizardlm-13b
https://www.reddit.com/r/LocalLLaMA/comments/16s701v/this_is_one_of_the_best_13b_models_ive_tested_for/
In at least the whole Shakespearean sonnet-writing area (and also general storytelling, I feel) this model seems to punch so far above its weight that I seriously wondered if it was some leak of the original real Orca but trained on Llama2. I don't think that's the case, but if this is how magic can happen, could it be that this is something that's helped create the 'experts' GPT-4 is rumoured to have? | 2023-09-27T11:35:26 | https://www.reddit.com/r/LocalLLaMA/comments/16ti59p/could_merging_finetunes_be_part_of_chatgpts/ | ambient_temp_xeno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ti59p | false | null | t3_16ti59p | /r/LocalLLaMA/comments/16ti59p/could_merging_finetunes_be_part_of_chatgpts/ | false | false | self | 14 | null |
Exllamav2 Performance With Different Quantization Levels | 24 | When we discuss quantization, we often only refer to it in terms of perplexity. I thought it would be interesting to explore the impact of quantization levels on an actual benchmark such as MMLU.
I quantized the model at 9 different levels for the 7b and 13b models using the exllamav2 quantizer. I also added theBloke/*GPTQ as an extra data point.
From there, I adapted the test_MMLU.py script in the exllamav2 repo and modified it to go through all categories and calculate the number of questions that the model got correct (as opposed to the logit calculation he does currently) for all 57 categories/14,000 questions. The scores shown below are the means of all categories.
I’ve put together a chart that shows the performance at different quantization levels, and I’ve also got all the raw data for those who want to dig deeper.
A few notes on the performance
* Performance levels out at around 5bpw (bits per weight) with negligible effect on performance. Any decrease is likely due to noise. However, I did read some reports that the exllamav2 quantizer struggles with higher target bit rates
* The 7b model is more affected by lower bit rates, with a -10% performance drop at 3.0bpw vs. -5% for the 13b
* 2.5 bpw is likely not worth using. The 13b 2.5bpw version performs worse than similarly sized 7b models. Regrettably, I cannot test 70b models as I only have 3090 GPUs
Let me know what you think. If there is interest, I can try to evaluate some 70b parameter models with the 2xV100 nodes I have access to. Alternatively, I would be interested in evaluating some "cheat" models as I've heard they perform particularly poorly when quantized.
| Quant | Llama-7b | Llama-13b |
| --- | --- | --- |
| 2.5bpw | 0.27114 | 0.37513 |
| 3.0bpw | 0.345382 | 0.505637 |
| 3.5bpw | 0.408875 | 0.5207 |
| 4.0bpw | 0.417009 | 0.538798 |
| 4.7bpw | 0.446347 | 0.545505 |
| 5.0bpw | 0.455618 | 0.550132 |
| 6.0bpw | 0.45636 | 0.548109 |
| 7.0bpw | 0.454049 | 0.547244 |
| 8.0bpw | 0.447704 | 0.548172 |
| GPTQ | 0.428763 | 0.542167 |
Raw results here:
https://docs.google.com/spreadsheets/d/1MFmHDpqcf7CP_EYnwl1QsUP0KhS6jA8x1JYuwm3NH0U/edit?usp=sharing | 2023-09-27T10:34:37 | https://www.reddit.com/r/LocalLLaMA/comments/16tgzzk/exllamav2_performance_with_different_quantization/ | a_slay_nub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tgzzk | false | null | t3_16tgzzk | /r/LocalLLaMA/comments/16tgzzk/exllamav2_performance_with_different_quantization/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'uE0hLPujxfcSKvFsxSzlOF_wtxb7x70gyELBHvLpNhs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=108&crop=smart&auto=webp&s=2ed992105b4540a7774013bedebc92f5117ffaab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=216&crop=smart&auto=webp&s=7d0fd8b8d1bd7a8b52284a9d0a00fa9ca07b8901', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=320&crop=smart&auto=webp&s=e1b95234a7a5698fc7f875762abf3c5bdb222ac2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=640&crop=smart&auto=webp&s=2e74dda36cbca55fd89de1cb5a18c542a4e6a045', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=960&crop=smart&auto=webp&s=e709f8220b7e8c287d32ad4b6d9cd57f66b31448', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?width=1080&crop=smart&auto=webp&s=3d4ba486216d964514ac42e84a44a7c1324a9ef1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sXHbuwLgNM3yAEZAijHOlVvoIZGqQoxHidp15fzPY9A.jpg?auto=webp&s=129282595eef4b9fee85a8d4a86fe3972949cdc1', 'width': 1200}, 'variants': {}}]} |
Introducing QA-LoRA - Quantization-aware low-rank adaptation of LLMs | 102 | Fascinating new method from Qi Tian et al that combines parameter efficient fine-tuning (used in LoRAs) with parameter quantization in a new way with major efficiency gains - lower computation, faster inference, lower memory requirements.
[Paper page - QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models (huggingface.co)](https://huggingface.co/papers/2309.14717)
Of particular interest is it performs very well with low parameter models, and low bit models like 2-bit, where it appears to be SOTA. Potentially useful for running Falcon 180B on home hardware.
https://preview.redd.it/bbgprv5xyrqb1.png?width=1677&format=png&auto=webp&s=b64d6a10cdf29a50548e13000c3ca5cd657142a5 | 2023-09-27T10:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/16tgvtc/introducing_qalora_quantizationaware_lowrank/ | PookaMacPhellimen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tgvtc | false | null | t3_16tgvtc | /r/LocalLLaMA/comments/16tgvtc/introducing_qalora_quantizationaware_lowrank/ | false | false | 102 | {'enabled': False, 'images': [{'id': '1GQs9IczKgKpajvf4Kpt_pWpRFCJ13n8AoJ92xkMtPo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=108&crop=smart&auto=webp&s=5de1aa1326e1adb90623a8fe46d23cdec07b3c20', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=216&crop=smart&auto=webp&s=b85c7cbbfb564afd1db844100122c7be8f4cf741', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=320&crop=smart&auto=webp&s=12b218d9f9e6259333bf256abdc97700c2130da0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=640&crop=smart&auto=webp&s=4132db503b7e74f0814243ab6b608d32d3b290c2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=960&crop=smart&auto=webp&s=f4ca124530b6e26bf2f62bdb2d90311c9b587311', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?width=1080&crop=smart&auto=webp&s=9d502e77b406b5be8d4d6e8107ade1f71f827086', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KvpQba2n4h-oBaOcbfnwSdmFyyFbIhXduBj9WTfEtZU.jpg?auto=webp&s=a49deda22fb5e2a2e6519b57d68d9644b9a38ee4', 'width': 1200}, 'variants': {}}]} | |
ChatGPT fine-tuning vs Open LLM fine-tuning | 1 | Hi everyone,
I am especially interested in your experience of comparing ChatGPT fine-tuning with open LLM fine-tuning.
What are your experiences, advantages and bottlenecks for both technologies?
What could you achieve with one that you could not achieve with the other? Why did you decide for one technology in the end? | 2023-09-27T09:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/16tg3v0/chatgpt_finetuning_vs_open_llm_finetuning/ | ComplexIt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tg3v0 | false | null | t3_16tg3v0 | /r/LocalLLaMA/comments/16tg3v0/chatgpt_finetuning_vs_open_llm_finetuning/ | false | false | self | 1 | null |
How useful is ChatGPT Finetuning compared to Open LLM fine tuning | 1 | Hi everyone,
I am especially interested in your experience of comparing ChatGPT fine-tuning with open LLM fine-tuning.
What are your experiences, advantages and bottlenecks for both technologies? | 2023-09-27T09:38:07 | https://www.reddit.com/r/LocalLLaMA/comments/16tg2j9/how_useful_is_chatgpt_finetuning_compared_to_open/ | ComplexIt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tg2j9 | false | null | t3_16tg2j9 | /r/LocalLLaMA/comments/16tg2j9/how_useful_is_chatgpt_finetuning_compared_to_open/ | false | false | self | 1 | null |
Looking for some guidance and get a basic knowledge regarding the branches in GPTQ models. | 5 | So I have a lot of questions. I am currently just using the quantized models by loading them, but now I want to know how exactly different branches are actually different. I see names like exllama, llama.cpp, Actorder, 128g etc. I do not know which one is suitable for me or what to use. I am not sure if 13B is actually better than the 7B models in some cases as I am getting almost similar responses. Can someone let me know from where I can actually learn this stuff? | 2023-09-27T09:13:58 | https://www.reddit.com/r/LocalLLaMA/comments/16tfp17/looking_for_some_guidance_and_get_a_basic/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tfp17 | false | null | t3_16tfp17 | /r/LocalLLaMA/comments/16tfp17/looking_for_some_guidance_and_get_a_basic/ | false | false | self | 5 | null |
MistralAI-0.1-7B, the first release from Mistral, dropped just like this on X (raw magnet link; use a torrent client) | 144 | 2023-09-27T08:37:41 | https://twitter.com/MistralAI/status/1706877320844509405 | Ilforte | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 16tf4qn | false | {'oembed': {'author_name': 'Mistral AI', 'author_url': 'https://twitter.com/MistralAI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">magnet:?xt=urn:btih:208b101a0f51514ecf285885a8b0f6fb1a1e4d7d&dn=mistral-7B-v0.1&tr=udp%3A%2F%<a href="https://t.co/OdtBUsbMKD">https://t.co/OdtBUsbMKD</a>%3A1337%2Fannounce&tr=https%3A%2F%<a href="https://t.co/HAadNvH1t0">https://t.co/HAadNvH1t0</a>%3A443%2Fannounce<br><br>RELEASE ab979f50d7d406ab8d0b07d09806c72c</p>— Mistral AI (@MistralAI) <a href="https://twitter.com/MistralAI/status/1706877320844509405?ref_src=twsrc%5Etfw">September 27, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/MistralAI/status/1706877320844509405', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_16tf4qn | /r/LocalLLaMA/comments/16tf4qn/mistralai017b_the_first_release_from_mistral/ | false | false | 144 | {'enabled': False, 'images': [{'id': 'CS0B1I30ZYj0ZX-zyY6FLZvd72xR9WPR7HDTGHV8oJU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5sqkwEMzIqqPNtzrB1t5XsvpUZjwXad14Gcaj40q5TI.jpg?width=108&crop=smart&auto=webp&s=60a70155bda3574ef1569e82bff7884074c1b383', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/5sqkwEMzIqqPNtzrB1t5XsvpUZjwXad14Gcaj40q5TI.jpg?auto=webp&s=f62ea3709aa17be97be9e5aa51d9264b078d60e1', 'width': 140}, 'variants': {}}]} | ||
Storage as RAM | 1 | I don't have enough RAM to run large models, but I know there is an option to use storage as RAM. Could it work, even if it would just be much slower? | 2023-09-27T08:28:21 | https://www.reddit.com/r/LocalLLaMA/comments/16tezbu/storage_as_ram/ | Terrible_Vegetable4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tezbu | false | null | t3_16tezbu | /r/LocalLLaMA/comments/16tezbu/storage_as_ram/ | false | false | self | 1 | null |
Another test to see if I can make top level posts yet | 1 | [removed] | 2023-09-27T08:24:34 | https://www.reddit.com/r/LocalLLaMA/comments/16texau/another_test_to_see_if_i_can_make_top_level_posts/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16texau | false | null | t3_16texau | /r/LocalLLaMA/comments/16texau/another_test_to_see_if_i_can_make_top_level_posts/ | false | false | self | 1 | null |
Local LLM agents, e.g. a local AutoGPT, a thing yet? | 0 | Question in the title, basically. Haven't really been keeping up to date since April/May. Wondering if one can use LLM agents locally in functions akin to AutoGPT, e.g. automatically. | 2023-09-27T08:17:19 | https://www.reddit.com/r/LocalLLaMA/comments/16teta9/local_llm_agents_eg_local_auto_gpt_a_thing_yet/ | Maelstrom100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16teta9 | false | null | t3_16teta9 | /r/LocalLLaMA/comments/16teta9/local_llm_agents_eg_local_auto_gpt_a_thing_yet/ | false | false | self | 0 | null |
Why is weight decay set to zero when finetuning LLMs? | 6 | I found that all long-context models are finetuned with weight decay equal to zero. Does anyone know why?
[http://arxiv.org/abs/2309.12307](http://arxiv.org/abs/2309.12307)
[http://arxiv.org/abs/2309.00071](http://arxiv.org/abs/2309.00071) | 2023-09-27T08:03:08 | https://www.reddit.com/r/LocalLLaMA/comments/16telcc/why_weightdecay_is_set_to_zero_when_finetune_llm/ | Mountain-Run-5602 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16telcc | false | null | t3_16telcc | /r/LocalLLaMA/comments/16telcc/why_weightdecay_is_set_to_zero_when_finetune_llm/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
Loading LLaMA2-70B model | 8 | Hi All,
I'm trying to load the LLaMA2-70B model with the following GPU specs:
https://pastebin.com/vjbWpP7C
Basically, I have two A40 machines, and was using this (https://discuss.huggingface.co/t/how-to-load-large-model-with-multiple-gpu-cards/18522/8) code to make it work:
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("//common2/public/TheBloke/Llama-2-70B-fp16", device_map="auto")
```
but when I try to load the model it takes hours and then the server crashes.
Am I doing it in a wrong manner? Any suggestions as how it's done? | 2023-09-27T07:11:42 | https://www.reddit.com/r/LocalLLaMA/comments/16tdrot/loading_llama270b_model/ | 1azytux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tdrot | false | null | t3_16tdrot | /r/LocalLLaMA/comments/16tdrot/loading_llama270b_model/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution | 1 | 2023-09-27T06:18:56 | https://github.com/Nondzu/LlamaTor | Nondzu | github.com | 1970-01-01T00:00:00 | 0 | {} | 16tcv1u | false | null | t3_16tcv1u | /r/LocalLLaMA/comments/16tcv1u/llamator_a_new_initiative_for_bittorrentbased_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RjJWztBiFGWSiX83fH3_kau7edsk9G7G5e6O7I6Yv9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=108&crop=smart&auto=webp&s=a1e94f2a1dde4342b077bc220e69f48c13683c9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=216&crop=smart&auto=webp&s=820cc6730933fa10d58ef8f84799900191c297d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=320&crop=smart&auto=webp&s=910333406ae647d8994e8fc801175a91f299b052', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=640&crop=smart&auto=webp&s=598ba38714737a9f518e059cd34861397e7334e0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=960&crop=smart&auto=webp&s=c2f18662d6c882a0fd0b94cba4aa0a585c09c3df', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?width=1080&crop=smart&auto=webp&s=091a83f91c5193bed613e7be11d000aa858cb0ab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KMyRwkBosF8pXwu5XX6ZyCplk8kLcSgi8Tm3jyzE1OM.jpg?auto=webp&s=8949511d653d73601586ca272c30c8d78f9b04dd', 'width': 1200}, 'variants': {}}]} | ||
Jetson Nano 4GB | 1 | Any new optimizations or techniques to get models like LLaMA or Falcon on a Jetson Nano 4GB? 🥹😅 | 2023-09-27T04:35:15 | https://www.reddit.com/r/LocalLLaMA/comments/16tavjo/jetson_nano_4gb/ | Plane_Ad9568 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16tavjo | false | null | t3_16tavjo | /r/LocalLLaMA/comments/16tavjo/jetson_nano_4gb/ | false | false | self | 1 | null |
What's your favorite RAG system and why? Langchain, LlamaIndex, etc? | 61 | Just trying to understand the latest landscape of RAGs out there.
Thanks! | 2023-09-27T04:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/16taq8m/whats_your_favorite_rag_system_and_why_langchain/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16taq8m | false | null | t3_16taq8m | /r/LocalLLaMA/comments/16taq8m/whats_your_favorite_rag_system_and_why_langchain/ | false | false | self | 61 | null |
Finetune LLM on AMD GPU RX 580 | 2 | So I intend to finetune Llama 2 for a specific use case. I can already use koboldcpp and OpenCL to run it, but how do I finetune it? I literally can't find any info about this online. | 2023-09-27T03:29:46 | https://www.reddit.com/r/LocalLLaMA/comments/16t9ihy/finetune_llm_on_amd_gpu_rx_580/ | GOD_HIMSELVES | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16t9ihy | false | null | t3_16t9ihy | /r/LocalLLaMA/comments/16t9ihy/finetune_llm_on_amd_gpu_rx_580/ | false | false | self | 2 | null |
Fine-tune LLaMA for code generation, seeking suggestions. | 4 |
Hello, I want to get an idea from you guys of whether this is a reasonable approach.
Problem statement: on a daily basis, I need to create one code file and one unit test file. The code file follows a very specific format, and the unit test does the testing. Both files are in Python. I want to use a fine-tuned LLM to generate such files when I give a prompt.
Approach: I have a sizable collection (100+) of such files following the specific format, and a text description of what each file does. I plan to prepare the code file and corresponding unit test file as the answer, and the description as the question, in a Q&A format. This dataset will be used to fine-tune the LLM to learn this specific format. In the end, the LLM should be able to output the two files when prompted.
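For instance, one training example might be assembled roughly like this (all content here is made up, just to illustrate the Q&A pairing I have in mind):

```python
import json

# Made-up example: pair the file description (the question) with the
# two generated files (the answer) as a single Q&A training record.
description = "A module that validates order CSVs, plus its unit tests."
code_file = "def validate(row):\n    return bool(row)\n"
test_file = "def test_validate():\n    assert validate({'id': 1})\n"

example = {
    "instruction": description,
    "response": f"### code.py\n{code_file}\n### test_code.py\n{test_file}",
}
print(json.dumps(example)[:40])
```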
Question:
1. Is this approach reasonable?
2. What's the best way to prepare such a dataset, since the answer is code?
3. Any recommendation on the foundation model?
Thank you | 2023-09-27T02:04:02 | https://www.reddit.com/r/LocalLLaMA/comments/16t7la1/fine_tune_llama_for_code_generation_seek/ | Renspire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16t7la1 | false | null | t3_16t7la1 | /r/LocalLLaMA/comments/16t7la1/fine_tune_llama_for_code_generation_seek/ | false | false | self | 4 | null |
Is a 7900 XTX OK to run LLMs? | 1 | Would an AMD GPU be a choice today? | 2023-09-27T00:55:00 | https://www.reddit.com/r/LocalLLaMA/comments/16t5xni/is_7900xtx_ok_to_run_llm/ | Willing_Citron_2894 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16t5xni | false | null | t3_16t5xni | /r/LocalLLaMA/comments/16t5xni/is_7900xtx_ok_to_run_llm/ | false | false | self | 1 | null |
Tips for no GPU? | 13 | Does anyone have any tips and tricks for people trying to run Llama locally without a GPU?
What about a torrent solution like Petals? Please share any insights and knowledge. | 2023-09-27T00:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/16t57xn/tips_for_no_gpu/ | kecepa5669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16t57xn | false | null | t3_16t57xn | /r/LocalLLaMA/comments/16t57xn/tips_for_no_gpu/ | false | false | self | 13 | null |
A Quick Guide to Getting Started with LLMs | 1 | [removed] | 2023-09-26T23:39:31 | https://www.reddit.com/r/LocalLLaMA/comments/16t44bj/a_quick_guide_to_getting_started_with_llms/ | MrChristoCoder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16t44bj | false | null | t3_16t44bj | /r/LocalLLaMA/comments/16t44bj/a_quick_guide_to_getting_started_with_llms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=108&crop=smart&auto=webp&s=4bc231a80d79babe4e6cddf7b4c71dcb0aa8f8ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=216&crop=smart&auto=webp&s=d7108244b7182d85047aa59446f1dfb68542b610', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=320&crop=smart&auto=webp&s=d34fa1a756c458772d3c8680309a93cf8d758b40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=640&crop=smart&auto=webp&s=5b03e18da2698977cf1222f0c9e54ccb6177ffc4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=960&crop=smart&auto=webp&s=3d875ff29aae8239d010f3b964e5a2f3ebe32e3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=1080&crop=smart&auto=webp&s=b51090c30528b6b8c637acb54d7fc0f6a5249cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?auto=webp&s=ee70e402c0f8274b46f38378bada81dbeb5b1dac', 'width': 1200}, 'variants': {}}]} |
Need Feedback on Ai Agent Framework | 1 | [removed] | 2023-09-26T22:24:29 | https://www.reddit.com/r/LocalLLaMA/comments/16t2718/need_feedback_on_ai_agent_framework/ | libraryofcelsus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16t2718 | false | null | t3_16t2718 | /r/LocalLLaMA/comments/16t2718/need_feedback_on_ai_agent_framework/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'C9eyCZK6YrjCI-CrNHdBphDEtLudNTehIWN5PDPU2mg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=108&crop=smart&auto=webp&s=c38df96bf8dc53f1867f1673171a6f0c8fb14a99', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=216&crop=smart&auto=webp&s=7c7084f46bdf404b761af1921ccec28a8ab306c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=320&crop=smart&auto=webp&s=e4a72a1b425d67d33934f73e39df798f8a238bab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=640&crop=smart&auto=webp&s=ef0529b74225a9165387ce613577bff4b1d94ef3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=960&crop=smart&auto=webp&s=551e0678f263dc479a35145f0e7e74e0f3c88798', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?width=1080&crop=smart&auto=webp&s=1458714137cc271cb2f89b41c66ffebf1e745804', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rxT4qAuPjdxp8K89lzFYOsKff2k8Pl_J3OgAQHd8jwI.jpg?auto=webp&s=39ca8d9cab610d50a8507f77416b938cc0a1d57d', 'width': 1200}, 'variants': {}}]} |
Experimenting with Humanizing ChatGPT text using LLama. | 1 | 2023-09-26T22:20:29 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16t22yb | false | null | t3_16t22yb | /r/LocalLLaMA/comments/16t22yb/experimenting_with_humanizing_chatgpt_text_using/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'G3n-uYTnUJPSEvoLtZqFes63VS5lXLuxBc-gDUnBLOs', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?width=108&crop=smart&auto=webp&s=7c46c851fc8b6882b569522852b7592c1c1518b1', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?width=216&crop=smart&auto=webp&s=9726ac675c710483f77811d829c57775e287826f', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?width=320&crop=smart&auto=webp&s=f63f17357e3513893687a49145442472102bc19f', 'width': 320}, {'height': 292, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?width=640&crop=smart&auto=webp&s=1600bbed651a241a510a79b8ff7c4bdc400fd8da', 'width': 640}], 'source': {'height': 428, 'url': 'https://preview.redd.it/ez24w5fadoqb1.jpg?auto=webp&s=32209d654571e64a84d731f9d730c16391f25a67', 'width': 935}, 'variants': {}}]} | |||
What’s the best LLM for private Transcription of Audio files? | 1 | [removed] | 2023-09-26T21:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/16t0t5l/whats_the_best_llm_for_private_transcription_of/ | NewportNerds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16t0t5l | false | null | t3_16t0t5l | /r/LocalLLaMA/comments/16t0t5l/whats_the_best_llm_for_private_transcription_of/ | false | false | self | 1 | null |
Best model to run locally on an Android phone? | 44 | 2023-09-26T21:27:17 | Ok-Recognition-3177 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16t0lbw | false | null | t3_16t0lbw | /r/LocalLLaMA/comments/16t0lbw/best_model_to_run_locally_on_an_android_phone/ | false | false | 44 | {'enabled': True, 'images': [{'id': '5LPNcYRwSIJ5sFAIhHUF9q0T84RqttMxOPYzzoLiSfU', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/jp7pqlzr3oqb1.png?width=108&crop=smart&auto=webp&s=2d6b6ff43a98322a7fabbe0bd6cae840e70afbb8', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/jp7pqlzr3oqb1.png?width=216&crop=smart&auto=webp&s=aee09239d50d4dfaab79c6a35d18e25f04e73678', 'width': 216}, {'height': 276, 'url': 'https://preview.redd.it/jp7pqlzr3oqb1.png?width=320&crop=smart&auto=webp&s=2d9959b1366997d635fa2065a96cee8c71c9e5f5', 'width': 320}], 'source': {'height': 521, 'url': 'https://preview.redd.it/jp7pqlzr3oqb1.png?auto=webp&s=49e3c8f9250fe90e368c017b7e83e51c09d60ddb', 'width': 604}, 'variants': {}}]} | |||
Comparing the Parallel AI Universes - Smallville vs AI Town | 2 | 2023-09-26T20:57:11 | https://querystar.io/blog/ai-town | DearWave | querystar.io | 1970-01-01T00:00:00 | 0 | {} | 16szr9j | false | null | t3_16szr9j | /r/LocalLLaMA/comments/16szr9j/comparing_the_parallel_ai_universes_smallville_vs/ | false | false | 2 | {'enabled': False, 'images': [{'id': '2LCPTOV_XM-6fq4aK28guSZ6TqS-ezgq9sv8E-5bK9w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=108&crop=smart&auto=webp&s=d6bfb8985581038455ece432af5767aecfc62f78', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=216&crop=smart&auto=webp&s=b69c045dd67c6db8b6a1228048469c2fd5995d80', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=320&crop=smart&auto=webp&s=068e368993852d4e48d75996b91e01d0f4ae92fc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=640&crop=smart&auto=webp&s=47f3d809d40a46074b331892d7b05c23b4827d38', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=960&crop=smart&auto=webp&s=6178ad2173213dbb867cfc3a21126e6e5d35fcca', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?width=1080&crop=smart&auto=webp&s=375ac67d2c7f0ee4fc534b8e0644424a9425875d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/qvhjTB1qfbtmQPtBzEfSbt33UIXFByokUQ6FOZEDzDs.jpg?auto=webp&s=6c42c3389fca5b1b1d725559e3d52e8c67750d57', 'width': 1200}, 'variants': {}}]} | ||
I plan to run LLaMA in the browser; what vector DB should I use? | 4 | Hey! So I want to create a website where I'll run LLaMA, but I am not sure which DB I want to use to store the results and all the data.
Ideally it should be a tool to kinda store data without a backend, if this is even possible, lol. And be able to semantically search through it.
I am not really familiar with all the backend and DB infrastructure. I need the simplest solution which I can implement in less than a day.
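Roughly the behavior I'm after, sketched in Python just for illustration (a real setup would use actual embeddings; these vectors and texts are made up):

```python
import math

# Minimal in-memory "vector store": keep (text, embedding) pairs and
# rank them by cosine similarity against a query embedding.
store = []  # list of (text, vector) pairs

def add(text, vector):
    store.append((text, vector))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vector, k=1):
    ranked = sorted(store, key=lambda item: cosine(item[1], query_vector), reverse=True)
    return [text for text, _ in ranked[:k]]

add("hello world", [1.0, 0.0])
add("goodbye", [0.0, 1.0])
print(search([0.9, 0.1]))  # -> ['hello world']
```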
The frontend is done primarily with TypeScript + Next.js | 2023-09-26T20:39:14 | https://www.reddit.com/r/LocalLLaMA/comments/16sza4f/i_plan_to_run_llama_on_the_browser_what_vectordb/ | Robert-treboR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16sza4f | false | null | t3_16sza4f | /r/LocalLLaMA/comments/16sza4f/i_plan_to_run_llama_on_the_browser_what_vectordb/ | false | false | self | 4 | null |
LaMini has been using AMD GPU in production for the past year | 22 | Sharon Zhou just shared this funny [revelation](https://twitter.com/realSharonZhou/status/1706708589837254946?t=UlkwBHI72r_1dZ-cw0blJA&s=19).
Seems like AMD is actually a thing for LLM in production. | 2023-09-26T19:55:53 | https://www.reddit.com/r/LocalLLaMA/comments/16sy3ic/lamini_has_been_using_amd_gpu_in_production_for/ | No_Palpitation7740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16sy3ic | false | null | t3_16sy3ic | /r/LocalLLaMA/comments/16sy3ic/lamini_has_been_using_amd_gpu_in_production_for/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'I8vWJynmK66Chqg7a2jmStUeQiA5R0a6wrF-CkP2_n8', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=108&crop=smart&auto=webp&s=f6152f999a92467a8e1c4b691fc9c2e1a8184b23', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=216&crop=smart&auto=webp&s=37b84c15e56c5de5c42c066648ec02062347524c', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=320&crop=smart&auto=webp&s=7f0c522fb9c16bff83fe608e677d4d1e13940307', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=640&crop=smart&auto=webp&s=4eab2a74a749392126a89f244ff85acbe21c8d04', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=960&crop=smart&auto=webp&s=12879755f63c7b075cc3b05eda7ff2c19bc92707', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?width=1080&crop=smart&auto=webp&s=988bd0646f11bf265c4e0cbc970c24dc84dd675b', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/BnPCwn7-9cGv-2zM2ODcOWUP6kpsIqCaivnFtta3B4g.jpg?auto=webp&s=f7332092f38385c2e84fcb122e6851fdcab1026b', 'width': 1080}, 'variants': {}}]} |
Add new knowledge to LLama2 | 14 |
I am planning on building my personal assistant based on the LLama2 model. I want the model to know about my personal information, like hobbies, family, and close friends. So I created a dolly-format dataset full of conversations between me and the model. The result I got is that the model only correctly generates 60% of my personal information and the rest is made up. However, the model talks the way I wrote it in the dataset. I also heard of something called cognitive architecture with langchain but I'm not sure if that is the right solution.
Which route should I go with?
Thanks all | 2023-09-26T19:34:10 | https://www.reddit.com/r/LocalLLaMA/comments/16sxiop/add_new_knowledge_to_llama2/ | Capital_Birthday_654 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16sxiop | false | null | t3_16sxiop | /r/LocalLLaMA/comments/16sxiop/add_new_knowledge_to_llama2/ | false | false | self | 14 | null |
How does The Bloke quantize models? | 1 | [removed] | 2023-09-26T19:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/16sxhqq/how_does_the_bloke_quantize_models/ | Ender436 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16sxhqq | false | null | t3_16sxhqq | /r/LocalLLaMA/comments/16sxhqq/how_does_the_bloke_quantize_models/ | false | false | self | 1 | null |
How is MMLU calculated? | 9 | I saw that exllamav2 had an MMLU calculation and wanted to know how the official MMLU was calculated. It seems like I'm supposed to go through all ~50 categories, calculate performance for each category, and then take the average of the results?
Initially, I had left out 3 problems and just calculated the MMLU for all 14k problems. Here are the results for the 7B base llama2 model for anyone interested
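If that's the right procedure, I assume the macro-average over categories would be computed something like this (the category names and counts here are made up):

```python
# Sketch of macro-averaged MMLU: compute accuracy per category, then
# average the per-category accuracies. Since category sizes differ,
# this is not the same as pooling all questions together.
def macro_mmlu(per_category_counts):
    """per_category_counts: {category: (num_correct, num_total)}"""
    accs = [correct / total for correct, total in per_category_counts.values()]
    return sum(accs) / len(accs)

example = {"abstract_algebra": (45, 100), "anatomy": (81, 135)}
print(macro_mmlu(example))  # 0.45 and 0.6 average to 0.525
```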
| Model | GB | MMLU |
| --- | --- | --- |
| 2.5bpw | 2.34 | 0.2689 |
| 3.0bpw | 2.74 | 0.3517 |
| 3.5bpw | 3.13 | 0.4045 |
| 4.0bpw | 3.53 | 0.4165 |
| 4.7bpw | 4.04 | 0.4178 |
| 5.0bpw | 4.32 | 0.4388 |
| 6.0bpw | 5.11 | 0.4344 |
| 7.0bpw | 5.89 | 0.4374 |
| 8.0bpw | 6.69 | 0.4354 |
| GPTQ | 3.81 | 0.4113 | | 2023-09-26T18:54:21 | https://www.reddit.com/r/LocalLLaMA/comments/16swh0w/how_is_mmlu_calculated/ | a_slay_nub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16swh0w | false | null | t3_16swh0w | /r/LocalLLaMA/comments/16swh0w/how_is_mmlu_calculated/ | false | false | self | 9 | null |
Qwen is aligned just as you would expect | 56 | Or do you think it really doesn't know?
https://preview.redd.it/m880q6f6anqb1.png?width=1227&format=png&auto=webp&s=f28190b6fd61e312f6343b43220cf8140d8415f4 | 2023-09-26T18:41:22 | https://www.reddit.com/r/LocalLLaMA/comments/16sw4na/qwen_is_aligned_just_as_you_would_expect/ | random-string | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16sw4na | false | null | t3_16sw4na | /r/LocalLLaMA/comments/16sw4na/qwen_is_aligned_just_as_you_would_expect/ | false | false | 56 | null | |
How to log train/val accuracy using SFT trainer? | 1 | Hi,
I'm using the SFT trainer from HF to fine-tune a LLaMA model using PEFT. But SFT only gives me the loss and other performance-related metrics (like timing). How can I get the training/val accuracy? I tried to use callbacks but was not successful :( Could you please help me with this?
Here is my code:
```python
dataset = load_dataset(dataset_name, split="train")

compute_dtype = getattr(torch, bnb_4bit_compute_dtype)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit, bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype, bnb_4bit_use_double_quant=use_nested_quant,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map=device_map
)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"  # Fix weird overflow issue with fp16 training

peft_config = LoraConfig(
    lora_alpha=lora_alpha, lora_dropout=lora_dropout, r=lora_r,
    bias="none", task_type="CAUSAL_LM",
)

training_arguments = TrainingArguments(
    output_dir=output_dir, num_train_epochs=num_train_epochs,
    per_device_train_batch_size=per_device_train_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    optim=optim, save_steps=save_steps, logging_steps=logging_steps,
    learning_rate=learning_rate, weight_decay=weight_decay,
    fp16=fp16, bf16=bf16, max_grad_norm=max_grad_norm, max_steps=max_steps,
    warmup_ratio=warmup_ratio, group_by_length=group_by_length,
    lr_scheduler_type=lr_scheduler_type, report_to="tensorboard",
)

trainer = SFTTrainer(
    model=model, train_dataset=dataset, peft_config=peft_config,
    dataset_text_field="text", max_seq_length=max_seq_length,
    tokenizer=tokenizer, args=training_arguments, packing=packing,
)

train_result = trainer.train()
```
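For reference, here is a rough sketch of the kind of metric I'm after (a hypothetical helper of my own, not an actual SFT/TRL API): next-token accuracy over non-padding positions, where `preds` are argmax'd token ids and padding labels are -100.

```python
# Hypothetical sketch: token-level accuracy from predicted ids and labels.
# Logits at position i predict the token at position i+1, hence the shift;
# positions whose label is ignore_index (-100, i.e. padding) are skipped.
def token_accuracy(preds, labels, ignore_index=-100):
    pairs = [
        (p, l)
        for pred_row, label_row in zip(preds, labels)
        for p, l in zip(pred_row[:-1], label_row[1:])
        if l != ignore_index
    ]
    correct = sum(p == l for p, l in pairs)
    return correct / len(pairs)

# toy ids: both scored positions match
print(token_accuracy([[2, 3, 4, 9]], [[0, 2, 3, -100]]))  # 1.0
```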
Thank you! | 2023-09-26T18:12:14 | https://www.reddit.com/r/LocalLLaMA/comments/16svcpz/how_to_log_trainval_accuracy_using_sft_trainer/ | Impossible-Froyo3412 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16svcpz | false | null | t3_16svcpz | /r/LocalLLaMA/comments/16svcpz/how_to_log_trainval_accuracy_using_sft_trainer/ | false | false | self | 1 | null |
Local LLM that will search the Internet? | 72 | The great value in Bing Chat and Bard, over a plain chatbot, no matter how good it is, is that they're equipped with an Internet search tool that will search, then feed the AI so that it can predict a response with up-to-date, precise information it may not have been trained with. The problem is they're heavily censored and keyworded, and they are of course not open source, run and controlled by you.
Is there a free LLM I can run locally that's equipped with Internet search so it'll browse for me but give me real results on any question? | 2023-09-26T18:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/16sv2sh/local_llm_that_will_search_the_internet/ | A_for_Anonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16sv2sh | false | null | t3_16sv2sh | /r/LocalLLaMA/comments/16sv2sh/local_llm_that_will_search_the_internet/ | false | false | self | 72 | null |
I have a computer that is offline and accessible via USB. Any way to drop in the LLM? | 1 | WSL sounded good until I gave up hand-copying .debs for make, gcc, gcc-12, and cpp.
Not sure what my options are. | 2023-09-26T17:46:12 | https://www.reddit.com/r/LocalLLaMA/comments/16sunag/i_have_a_computer_that_is_offline_and_accessible/ | pr1vacyn0eb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16sunag | false | null | t3_16sunag | /r/LocalLLaMA/comments/16sunag/i_have_a_computer_that_is_offline_and_accessible/ | false | false | self | 1 | null |
EasyEdit: An Easy-to-use Knowledge Editing Framework for LLMs. | 33 | 2023-09-26T17:44:48 | https://github.com/zjunlp/EasyEdit | AutomataManifold | github.com | 1970-01-01T00:00:00 | 0 | {} | 16sulqm | false | null | t3_16sulqm | /r/LocalLLaMA/comments/16sulqm/easyedit_an_easytouse_knowledge_editing_framework/ | false | false | 33 | {'enabled': False, 'images': [{'id': 'fhMHZVe6_o_Rka-ESuRRHmQkl8oug7pKcLUqNPLzvWM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=108&crop=smart&auto=webp&s=1b9506e04db4e830584e54a5ff8199e2afb9de66', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=216&crop=smart&auto=webp&s=a39c934264f45d07cbf28d066910351b11159ed1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=320&crop=smart&auto=webp&s=deddba728b77bb4c051512955402f9cdc55d6cfb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=640&crop=smart&auto=webp&s=9f83becad8198d018f3ec21e7282b9d47c8b5f76', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=960&crop=smart&auto=webp&s=998860f5df8afc9ad9fcda1d0c834c818893b245', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?width=1080&crop=smart&auto=webp&s=f081d04e8ca316830b3936d4f57eca9785683cac', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HXoTnWtvAS4kiNPChhrVCwvFvcYTlenWsfz5k0XtleI.jpg?auto=webp&s=fa292a1474ff73def3aac7a38f903da655cdd605', 'width': 1200}, 'variants': {}}]} | ||
Progress on Reproducing Phi-1/1.5 | 87 | Hey All,
Just wanted to share that I am churning out a large dataset that is meant to \~reproduce the synthetic textbook dataset that Microsoft used to train phi-1.5. I'm currently generating \~1bn tokens / day and will be continually streaming the results here - [https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need)
I will eventually document all of my learnings from the process, as I think this will be useful for the OSS community. I have already had a number of painful / useful learnings over the last several days while scaling up these efforts.
I am reporting my prelim results here now because I believe the dataset is in a good enough state that it could give interesting results if used in fine-tuning a 13B or 34B model. I am going to start working on this soon; anyone who would like to collaborate should DM.
Hopefully this and other efforts around synthetic data will be valuable for the OSS LLM community.
​ | 2023-09-26T17:24:56 | https://www.reddit.com/r/LocalLLaMA/comments/16su32p/progress_on_reproducing_phi115/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16su32p | false | null | t3_16su32p | /r/LocalLLaMA/comments/16su32p/progress_on_reproducing_phi115/ | false | false | self | 87 | {'enabled': False, 'images': [{'id': 'SRQqCJLLBWenZA-Vk4NmfhLIYoGP4qO_YhRZNcIHsgE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=108&crop=smart&auto=webp&s=5c63eeb0d15d3381f0bbadec162cca2f9ed72747', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=216&crop=smart&auto=webp&s=0e003282dbbd320cd65fbcd21cf43ea4220d4c04', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=320&crop=smart&auto=webp&s=de1f815ca8dd7cb2df967d4ca3ca2fbc79bbf1ec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=640&crop=smart&auto=webp&s=9e431c0b74666624a1bd0992a10196c3c4fea079', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=960&crop=smart&auto=webp&s=1856bb34a5887774727ce79ed3870431ef46c163', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?width=1080&crop=smart&auto=webp&s=a8ae39a6fa999681feedd3e3475c0424cc99deb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TlN8fXuhzoMraEjoxJnMTjZpAcekJV--nUQanUj8-_8.jpg?auto=webp&s=8c2bdd754ca7ecf62f55551ff88ca9d0c891f6c9', 'width': 1200}, 'variants': {}}]} |
How to Approach LLM Learning with Career Development in Mind? | 8 | Hey folks!
I'm a data scientist by trade, and I want to pivot my career toward LLM development for private companies in anticipation of that shift (with the understanding that this is still a very new field).
I've seen a few job postings, but there are obviously no certifications in this space yet.
Where should I focus my learning so that I can be competitive for new postings?
My thinking was:
* Cloud certifications (AWS): Certified Machine Learning / Certified Developer
* Create/manage a GitHub repo that details a model + LoRAs for some specific tasks.
Any suggestions or feedback? | 2023-09-26T17:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/16sttws/how_to_approach_llm_learning_with_career/ | DiamondBadge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16sttws | false | null | t3_16sttws | /r/LocalLLaMA/comments/16sttws/how_to_approach_llm_learning_with_career/ | false | false | self | 8 | null |
Making Food | 3 | I'm hungry, and there are a few random ingredients in the fridge. I want to make some cheap Chinese-American/Western food without getting food poisoning. Any good LLMs/datasets to toy with?
Do you find that LLaMA as a base model already has enough culinary information? | 2023-09-26T17:12:20 | https://www.reddit.com/r/LocalLLaMA/comments/16strqu/making_food/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16strqu | false | null | t3_16strqu | /r/LocalLLaMA/comments/16strqu/making_food/ | false | false | self | 3 | null |
Ollama - front-end for running local models, release v0.1.0 | 15 | 2023-09-26T17:07:59 | https://github.com/jmorganca/ollama/releases/tag/v0.1.0 | bandman614 | github.com | 1970-01-01T00:00:00 | 0 | {} | 16stnpn | false | null | t3_16stnpn | /r/LocalLLaMA/comments/16stnpn/ollama_frontend_for_running_local_models_release/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'aI8JDBlabiZTdyh2CmhRcXkr-3TtsO28nAHtJiBtNpw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=108&crop=smart&auto=webp&s=8f920dd8fb82a57b2d8d698d10ce34eeb60f2aa5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=216&crop=smart&auto=webp&s=562addb26e4d8a4286c1819d4dd075c8dda77e1c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=320&crop=smart&auto=webp&s=29fcfce5fe97dc7689eea7c1e9166b308eee5110', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=640&crop=smart&auto=webp&s=713cde801e7cf9ac8dd9120a8fefe2b67b836fcf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=960&crop=smart&auto=webp&s=2011c120f34c7e67ed17c2926ffac65835b2416d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?width=1080&crop=smart&auto=webp&s=539f099327f86f4e03283841e8688eadf3ee00b2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0jCvrVt6I1m9dmD_mqXj2Y-BSTP6-kwaUM0E_n5XSzA.jpg?auto=webp&s=40bb48c34e3ee0f89a016c35d6dba238b5cc4899', 'width': 1200}, 'variants': {}}]} |