| column | dtype | range / classes |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
Using LLMs to build custom Operating Systems?
5
Some time ago I was reading a conversation here on Reddit, between people better informed than myself, about how language models with large enough context windows could (one day) write custom operating systems for computers / mobile devices from the ground up - meaning one day everyone could just develop their own proprietary OS based on their specific interests, needs, or aesthetic tastes. That being said, how close is such a thing? Is it even possible or realistic?
2023-10-03T02:17:39
https://www.reddit.com/r/LocalLLaMA/comments/16yfuqd/using_llms_to_build_custom_operating_systems/
Wroisu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16yfuqd
false
null
t3_16yfuqd
/r/LocalLLaMA/comments/16yfuqd/using_llms_to_build_custom_operating_systems/
false
false
self
5
null
dolphin-2.0-mistral-7b
80
Today I released dolphin-2.0-mistral-7b [https://huggingface.co/ehartford/dolphin-2.0-mistral-7b](https://huggingface.co/ehartford/dolphin-2.0-mistral-7b) [https://erichartford.com/dolphin](https://erichartford.com/dolphin)

This model is based on mistralAI, so it is suitable for commercial or non-commercial use.

This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. [**https://erichartford.com/uncensored-models**](https://erichartford.com/uncensored-models) You are responsible for any content you create using this model. Enjoy responsibly.

This model was made possible by a generous grant from my sponsor a16z.

I modified the dataset for uncensoring, deduping, cleaning, and quality. I added Jon Durbin's excellent Airoboros dataset to increase creativity. It took 48 hours to train 10 epochs on 4x A100s.

This model (and all my future releases) use [**ChatML**](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.

    <|im_start|>system
    You are Dolphin, a helpful AI assistant.<|im_end|>
    <|im_start|>user
    {prompt}<|im_end|>

https://preview.redd.it/va2oxgojbwrb1.png?width=1758&format=png&auto=webp&s=8050bcd4226ae1cdbfae621198c58b072aa5b81c
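For anyone assembling the prompt by hand, here's a minimal sketch of building that ChatML template in Python. The special tokens come from the post above; the helper name and the trailing assistant turn (the usual way to cue generation with ChatML) are my additions, not from the release notes.

```python
# Minimal sketch: build a single-turn ChatML prompt string.
# The <|im_start|>/<|im_end|> tokens are from the post; the final
# "assistant" turn is an assumption about how generation is cued.

def chatml_prompt(system: str, user: str) -> str:
    """Return a ChatML-formatted prompt with an open assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are Dolphin, a helpful AI assistant.",
                    "Summarize the Dolphin 2.0 release in one sentence."))
```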
2023-10-03T02:08:41
https://www.reddit.com/r/LocalLLaMA/comments/16yfnt4/dolphin20mistral7b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16yfnt4
false
null
t3_16yfnt4
/r/LocalLLaMA/comments/16yfnt4/dolphin20mistral7b/
false
false
https://b.thumbs.redditm…m94askY-JLkA.jpg
80
null
Any cool new 13B or so models?
29
We hear about Mistral and others at 7b, but what about the slightly bigger models? I'm running 64GB of system RAM with GGUF and no GPU, so a badass 13B is the sweet spot, right?
2023-10-03T00:56:46
https://www.reddit.com/r/LocalLLaMA/comments/16ye3zz/any_cool_new_13b_or_so_models/
Overall-Importance54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ye3zz
false
null
t3_16ye3zz
/r/LocalLLaMA/comments/16ye3zz/any_cool_new_13b_or_so_models/
false
false
self
29
null
What's the best general model for 24GB VRAM? [Llama 1 30B vs Llama 2 13B vs Q2 Llama 2 70B vs Code Llama 34B vs Llama 2 70B ExLlamaV2]
25
As the title says, there seem to be five types of models which can fit on a 24GB VRAM GPU:

1. Q4 Llama 1 30B
2. Q8 Llama 2 13B
3. Q2 Llama 2 70B
4. Q4 Code Llama 34B (finetuned for chat)
5. Q2.55 format Llama 2 70B ExLlamaV2

A dedicated leaderboard for quantized models made to fit on 24GB VRAM would be useful, as it's currently really hard to compare these quantized models.
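For a rough comparison (not from the post), here's a back-of-envelope sketch of the weight sizes implied by those bit-widths. It ignores the KV cache, activations, and per-format overhead, so treat the numbers as floors rather than exact VRAM footprints.

```python
# Rule of thumb: size_GB ~= params_in_billions * bits_per_weight / 8.
# Illustrative only; real files and runtime memory run somewhat higher.

def quant_size_gb(params_b: float, bits: float) -> float:
    return params_b * bits / 8

for name, params_b, bits in [
    ("Q4 Llama 1 30B", 30, 4),
    ("Q8 Llama 2 13B", 13, 8),
    ("Q2 Llama 2 70B", 70, 2),
    ("Q4 Code Llama 34B", 34, 4),
    ("2.55bpw Llama 2 70B (ExLlamaV2)", 70, 2.55),
]:
    print(f"{name}: ~{quant_size_gb(params_b, bits):.1f} GB of weights")
```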
2023-10-03T00:42:00
https://www.reddit.com/r/LocalLLaMA/comments/16ydsdj/whats_the_best_general_model_for_24gb_vram_llama/
ChromeCat1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ydsdj
false
null
t3_16ydsdj
/r/LocalLLaMA/comments/16ydsdj/whats_the_best_general_model_for_24gb_vram_llama/
false
false
self
25
null
So they "are training the next version of Llama now"
38
Just heard this in this video from 5 days ago https://youtube.com/clip/Ugkx2fUKl11YiwxDRxY8be1SD3KHQ-Hv3Qqh?si=3Kn8Ml9llVabKIYS . I have my own theories about Llama 3, but what do you expect to be released and in what timeframe?
2023-10-03T00:32:05
https://www.reddit.com/r/LocalLLaMA/comments/16ydklz/so_they_are_training_the_next_version_of_llama_now/
hapliniste
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ydklz
false
null
t3_16ydklz
/r/LocalLLaMA/comments/16ydklz/so_they_are_training_the_next_version_of_llama_now/
false
false
self
38
{'enabled': False, 'images': [{'id': 'H7Y8wOitV_Dhk3pncktPXim57D8Vfi6L0VEPTIgkDX0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/oO3KtVQtGeykiPGJBDQCJeamz__B6hTRcSDb4L3OsR0.jpg?width=108&crop=smart&auto=webp&s=ff2ebcddfdba5fb96ff7b25f2c11c45ce6aff3a9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/oO3KtVQtGeykiPGJBDQCJeamz__B6hTRcSDb4L3OsR0.jpg?width=216&crop=smart&auto=webp&s=f30b3f8e717b6a595a02ed029891565f1419a3e2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/oO3KtVQtGeykiPGJBDQCJeamz__B6hTRcSDb4L3OsR0.jpg?width=320&crop=smart&auto=webp&s=90064058b37de1fce079cc8e5060ccb88b567ae6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/oO3KtVQtGeykiPGJBDQCJeamz__B6hTRcSDb4L3OsR0.jpg?width=640&crop=smart&auto=webp&s=bf16469a0c23e765a35d42aecee4f6c12f4ebcc6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/oO3KtVQtGeykiPGJBDQCJeamz__B6hTRcSDb4L3OsR0.jpg?width=960&crop=smart&auto=webp&s=d3fb2d798cd66f44f912a5d03b4d083ae8d1e5a6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/oO3KtVQtGeykiPGJBDQCJeamz__B6hTRcSDb4L3OsR0.jpg?width=1080&crop=smart&auto=webp&s=10877dd0af609167d38698e81a81ca517a4d9e77', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/oO3KtVQtGeykiPGJBDQCJeamz__B6hTRcSDb4L3OsR0.jpg?auto=webp&s=cec02098db587045b962b2d9ef7b1bf06d7e50c8', 'width': 1280}, 'variants': {}}]}
How do you run small models (3b and 7b) for text classification, removing PII, sentiment analysis, etc.? I don't want to chat with the model—Just a small model, preferably capable of function calling or structured output in some other way. I assume no GPU is required for this? What server is best?
1
[removed]
2023-10-02T22:20:54
https://www.reddit.com/r/LocalLLaMA/comments/16yaf9k/how_do_you_run_small_models_3b_and_7b_for_text/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16yaf9k
false
null
t3_16yaf9k
/r/LocalLLaMA/comments/16yaf9k/how_do_you_run_small_models_3b_and_7b_for_text/
false
false
self
1
null
What are the best settings for a computer with an integrated GPU and 16gb RAM?
8
I've been trying to run the smallest Llama 2 7B model ([**llama2\_7b\_chat\_uncensored.Q2\_K.gguf**](https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGUF/blob/main/llama2_7b_chat_uncensored.Q2_K.gguf)), but despite that it still runs incredibly slowly (taking more than a minute to generate an output). I currently have a PC with Intel Iris Xe graphics (128MB of dedicated VRAM) and 16GB of DDR4 memory, on a Tiger Lake (11th gen) Intel CPU. Is there anything in my current settings below in text-generation-webui that should be changed to get faster response times?

[text-generation-webui settings under the "model" tab](https://preview.redd.it/tgypbw7x5vrb1.png?width=1890&format=png&auto=webp&s=0e6a7f28ceee10b0ccbef0f6700a08fba7c8f833)
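Not from the post, but as a point of comparison outside the webui: a minimal CPU-oriented sketch using llama-cpp-python, one of the loaders text-generation-webui wraps. The thread count and context size here are illustrative knobs to tune against this hardware, not recommended values.

```python
# CPU-only sketch, assuming the GGUF file linked above sits next to
# this script. With an iGPU there's nothing useful to offload to, so
# everything rides on threads and RAM bandwidth.
from llama_cpp import Llama

llm = Llama(
    model_path="llama2_7b_chat_uncensored.Q2_K.gguf",
    n_ctx=2048,    # context window
    n_threads=4,   # try the physical core count of the 11th-gen CPU
)
out = llm("Q: What does an integrated GPU share with the CPU? A:",
          max_tokens=64)
print(out["choices"][0]["text"])
```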
2023-10-02T22:15:49
https://www.reddit.com/r/LocalLLaMA/comments/16yaacr/what_are_the_best_settings_for_a_computer_with_an/
Cheems63
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16yaacr
false
null
t3_16yaacr
/r/LocalLLaMA/comments/16yaacr/what_are_the_best_settings_for_a_computer_with_an/
false
false
https://a.thumbs.redditm…ZpM0M6OtIcj4.jpg
8
{'enabled': False, 'images': [{'id': 'ER-CItww6gz_7N6NuepmqOviEmJGTdtIyK1AuJawdR8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D_Dm4aCSuAwl2QcSqoXdgwRCNUzhtyb6C4O6KhyQcoQ.jpg?width=108&crop=smart&auto=webp&s=61b8f4399a61fdedebd6c2d9d1751701d842500a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/D_Dm4aCSuAwl2QcSqoXdgwRCNUzhtyb6C4O6KhyQcoQ.jpg?width=216&crop=smart&auto=webp&s=46377af33c87eb53b6457d438c420b11a6171c6f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/D_Dm4aCSuAwl2QcSqoXdgwRCNUzhtyb6C4O6KhyQcoQ.jpg?width=320&crop=smart&auto=webp&s=9729b1a8d33428f3097ccab7a10c8f006e1f8821', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/D_Dm4aCSuAwl2QcSqoXdgwRCNUzhtyb6C4O6KhyQcoQ.jpg?width=640&crop=smart&auto=webp&s=ea0524f48b1f6d90d013f948bbe02925af0d5865', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/D_Dm4aCSuAwl2QcSqoXdgwRCNUzhtyb6C4O6KhyQcoQ.jpg?width=960&crop=smart&auto=webp&s=96bbe35d042364ba148a67b2c315729ff973ae93', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/D_Dm4aCSuAwl2QcSqoXdgwRCNUzhtyb6C4O6KhyQcoQ.jpg?width=1080&crop=smart&auto=webp&s=f389e63965041b28f54f785cacf7c12fb0a9f14a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/D_Dm4aCSuAwl2QcSqoXdgwRCNUzhtyb6C4O6KhyQcoQ.jpg?auto=webp&s=0519c368bdcb4e072ade55e4ac829da2316dbffc', 'width': 1200}, 'variants': {}}]}
Meanwhile...
84
2023-10-02T22:02:50
https://i.redd.it/g94zexzo3vrb1.png
perlthoughts
i.redd.it
1970-01-01T00:00:00
0
{}
16y9y6n
false
null
t3_16y9y6n
/r/LocalLLaMA/comments/16y9y6n/meanwhile/
false
false
https://b.thumbs.redditm…S_y0tC1pYZro.jpg
84
{'enabled': True, 'images': [{'id': 'e5Bltwrhj4Y8Ua24RfEoWhsqLzD1V5wpS9vdo3_wBJk', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/g94zexzo3vrb1.png?width=108&crop=smart&auto=webp&s=0345c51276b53089fd7eba5696202244c77ec485', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/g94zexzo3vrb1.png?width=216&crop=smart&auto=webp&s=8e8ba5144d6f6b94629201c1db209ce89bc5c047', 'width': 216}, {'height': 236, 'url': 'https://preview.redd.it/g94zexzo3vrb1.png?width=320&crop=smart&auto=webp&s=18d85e97f31613752e313b12dc27aad738d9bbe8', 'width': 320}], 'source': {'height': 430, 'url': 'https://preview.redd.it/g94zexzo3vrb1.png?auto=webp&s=e60032e25278a2e2ca389815580735712c2df526', 'width': 581}, 'variants': {}}]}
Quantize LLaMA 2 70B
6
Hi community. I am trying to quantize the LLaMA 2 70B model to 4 bits so I can then train it. I've been using the Hugging Face documentation and was able to get the model and load it into system RAM (it took 259GB). My impression was that I would be able to use the CPU for this, but I am getting an error that GPTQ needs CUDA. From what I've researched, it looks as if I need a GPU that GPTQ can offload parts of the model to.

Is it possible to do this on CPU only? Is it a bad idea? I am trying to avoid using a GPU, because AWS instances with that much system RAM that also have a GPU have to be really high capacity.

Another question: the Hugging Face docs state that it is recommended to use the GPTQ paper's datasets for quantizing. Is this the case for all quantizing, or is it better to quantize on samples from your own training data?

Thanks in advance for the help.
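For reference, a hedged sketch of the AutoGPTQ flow that error message comes from, as I understand its documented API; as the post found, GPTQ's kernels assume CUDA, so this presumes at least one GPU is present. The model ID and calibration text are illustrative, not prescriptive.

```python
# Sketch of 4-bit GPTQ quantization with AutoGPTQ (CUDA required).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "meta-llama/Llama-2-70b-hf"   # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_id)

quantize_config = BaseQuantizeConfig(bits=4, group_size=128)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)

# Calibration samples. The GPTQ paper used generic text; trying
# domain-matched samples instead is a reasonable experiment.
examples = [tokenizer("The quick brown fox jumps over the lazy dog.",
                      return_tensors="pt")]
model.quantize(examples)
model.save_quantized("llama-2-70b-4bit-gptq")
```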
2023-10-02T21:32:26
https://www.reddit.com/r/LocalLLaMA/comments/16y96mz/quantize_llama_2_70b/
nomusichere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y96mz
false
null
t3_16y96mz
/r/LocalLLaMA/comments/16y96mz/quantize_llama_2_70b/
false
false
self
6
null
A Starter Guide for Playing with Your Own Local AI!
1,078
# LearningSomeCode's Starter Guide for Local AI!

So I've noticed a lot of the same questions pop up when it comes to running LLMs locally, because much of the information out there is a bit spread out or technically complex. My goal is to create a stripped-down guide of "Here's what you need to get started", without going too deep into the why or how. That stuff is important to know, but it's better learned after you've actually got everything running.

***This is not meant to be exhaustive or comprehensive; this is literally just to try to help take you from "I know nothing about this stuff" to "Yay I have an AI on my computer!"***

I'll be breaking this into sections, so feel free to jump to the section you care the most about. There's lots of words here, but maybe all those words don't pertain to you. **Don't be overwhelmed; just hop around between the sections.** My recommended installation steps are up top, with general info and questions about LLMs and AI in general starting halfway down.

**Table of contents**

* **Installation**
  * ***I have an Nvidia Graphics Card on Windows or Linux!***
  * ***I have an AMD Graphics card on Windows or Linux!***
  * ***I have a Mac!***
  * ***I have an older machine!***
* **General Info**
  * ***I have no idea what an LLM is!***
  * ***I have no idea what a Fine-Tune is!***
  * ***I have no idea what "context" is!***
  * ***I have no idea where to get LLMs!***
  * ***I have no idea what size LLMs to get!***
  * ***I have no idea what quant to get!***
  * ***I have no idea what "K" quants are!***
  * ***I have no idea what GGML/GGUF/GPTQ/exl2 is!***
  * ***I have no idea what settings to use when loading the model!***
  * ***I have no idea what flavor model to get!***
  * ***I have no idea what normal speeds should look like!***
  * ***I have no idea why my model is acting dumb!***

# Installation Recommendations

>***I have an NVidia Graphics Card on Windows or Linux!***

If you're on Windows, the fastest route to success is probably **Koboldcpp**. It's literally just an executable. It doesn't have a lot of bells and whistles, but it gets the job done great. The app also acts as an API if you were hoping to run this with a secondary tool like SillyTavern. [https://github.com/LostRuins/koboldcpp/wiki#quick-start](https://github.com/LostRuins/koboldcpp/wiki#quick-start)

Now, if you want something with more features built in or you're on Linux, I recommend **Oobabooga**! It can also act as an API for things like SillyTavern. [https://github.com/oobabooga/text-generation-webui#one-click-installers](https://github.com/oobabooga/text-generation-webui#one-click-installers) If you have git, you know what to do. If you don't, scroll up and click the green "Code" dropdown and select "Download Zip". There used to be more steps involved, but I no longer see the requirements for those, so I think the 1-click installer does everything now. How lucky!

For Linux users: [please see the comment below](https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/comment/k37b0kh/?utm_source=share&utm_medium=web2x&context=3) suggesting running Oobabooga in a docker container!

>***I have an AMD Graphics card on Windows or Linux!***

For Windows, use **koboldcpp**. It has the best Windows support for AMD at the moment, and it can act as an API for things like SillyTavern if you were wanting to do that.
[https://github.com/LostRuins/koboldcpp/wiki#quick-start](https://github.com/LostRuins/koboldcpp/wiki#quick-start) And here is more info on the AMD bits; make sure to read both before proceeding: [https://github.com/YellowRoseCx/koboldcpp-rocm/releases](https://github.com/YellowRoseCx/koboldcpp-rocm/releases)

If you're on Linux, you can probably do the above, but **Oobabooga** also supports AMD for you (I think...) and it can act as an API for things like SillyTavern as well. [https://github.com/oobabooga/text-generation-webui/blob/main/docs/One-Click-Installers.md#using-an-amd-gpu-in-linux](https://github.com/oobabooga/text-generation-webui/blob/main/docs/One-Click-Installers.md#using-an-amd-gpu-in-linux) If you have git, you know what to do. If you don't, scroll up and click the green "Code" dropdown and select "Download Zip".

For Linux users: [please see the comment below](https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/comment/k37b0kh/?utm_source=share&utm_medium=web2x&context=3) suggesting running Oobabooga in a docker container!

>***I have a Mac!***

Macs are great for inference, but note that y'all have some special instructions.

First, if you're on an M1 Max or Ultra, or an M2 Max or Ultra, you're in good shape. Anything else that is not one of the above processors is going to be a little slow... maybe very slow. The original M1s, the Intel processors, all of them don't do quite as well. But hey... maybe it's worth a shot?

Second, Macs are special in how they do their VRAM. Normally, on a graphics card you'd have somewhere between 4 to 24GB of VRAM on a special dedicated card in your computer. Macs, however, have specially made really fast RAM baked in that also acts as VRAM. The OS will assign up to 75% of this total RAM as VRAM. So, for example, the 16GB M2 MacBook Pro will have about 10GB of available VRAM. The 128GB Mac Studio has 98GB of VRAM available. This means you can run MASSIVE models [with relatively decent speeds.](https://www.reddit.com/r/LocalLLaMA/comments/16oww9j/running_ggufs_on_m1_ultra_part_2/)

For you, the quickest route to success if you just want to toy around with some models is **GPT4All**, but it is pretty limited. However, it was my first program and what helped me get into this stuff. It's a simple 1-click installer; super simple. It can act as an API, but isn't recognized by a lot of programs. So if you want something like SillyTavern, you would do better with something else. (**NOTE:** It CAN act as an API, and it uses the OpenAI schema. If you're a developer, you can likely tweak whatever program you want to run against GPT4All to recognize it. Anything that can connect to OpenAI can connect to GPT4All as well.) Also note that it only runs GGML files; they are older. But it does Metal inference (Mac's GPU offloading) out of the box. A lot of folks think of GPT4All as being CPU only, but I believe that's only true on Windows/Linux. Either way, it's a small program and easy to try if you just want to toy around with this stuff a little. [https://gpt4all.io/index.html](https://gpt4all.io/index.html)

Alternatively, **Oobabooga** works for you as well, and it can act as an API for things like SillyTavern! [https://github.com/oobabooga/text-generation-webui#installation](https://github.com/oobabooga/text-generation-webui#installation) If you have git, you know what to do. If you don't, scroll up and click the green "Code" dropdown and select "Download Zip".
There used to be more to this, but the instructions seem to have vanished, so I think the 1-click installer does it all for you now!

There's another easy option as well, but I've never used it. However, a friend set it up quickly and it seemed painless: **LM Studio**. [https://lmstudio.ai/](https://lmstudio.ai/) Some folks have posted about it here, so maybe try that too and see how it goes.

>***I have an older machine!***

I see folks come on here sometimes with pretty old machines, where they may have 2GB of VRAM or less, a much older CPU, etc. Those are a case-by-case basis of trial and error.

In your shoes, I'd start small. **GPT4All** is a CPU-based program on Windows and supports Metal on Mac. It's simple, it has small models. I'd probably start there to see what works, using the smallest models they recommend.

After that, I'd look at something like **KoboldCPP** [https://github.com/LostRuins/koboldcpp/wiki#quick-start](https://github.com/LostRuins/koboldcpp/wiki#quick-start). Kobold is lightweight and tends to be pretty performant.

I would start with a 7b gguf model, even as low down as a 3\_K\_S. I'm not saying that's all you can run, but you want a baseline for what performance looks like. Then I'd start adding size.

It's ok to not run at full GPU layers (see above). If there are 35 in the model (it'll usually tell you in the command prompt window), you can do 30. You will take a bigger performance hit having 100% of the layers in your GPU if you don't have enough VRAM to cover the model. You will get better performance doing maybe 30 out of 35 layers in that scenario, where 5 go to the CPU.

At the end of the day, it's about seeing what works. There's lots of posts talking about how well a 3080, 3090, etc will work, but not many for some Dell G3 laptop from 2017, so you're going to have to test around a bit and see what works.

# General Info

>***I have no idea what an LLM is!***

An LLM is the "brains" behind an AI. This is what does all the thinking and is something that we can run locally; like our own personal ChatGPT on our computers. Llama 2 is a free LLM base that was given to us by Meta; it's the successor to their previous version, Llama. The vast majority of models you see online are a "Fine-Tune", or a modified version, of Llama or Llama 2. Llama 2 is generally considered smarter and can handle more context than Llama, so just grab those.

If you want to try any before you start grabbing, please [check out a comment below](https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/comment/k3k3mdy/) where some free locations to test them out have been linked!

>***I have no idea what a Fine-Tune is!***

It's where people take a model and add more data to it to make it better at something (or worse if they mess it up lol). That something could be conversation, it could be math, it could be coding, it could be roleplaying, it could be translating, etc. People tend to name their Fine-Tunes so you can recognize them. Vicuna, Wizard, Nous-Hermes, etc are all specific Fine-Tunes with specific tasks. If you see a model named Wizard-Vicuna, it means someone took both Wizard and Vicuna and smooshed em together to make a hybrid model. You'll see this a lot. Google the name of each flavor to get an idea of what they are good at!

>***I have no idea what "context" is!***

"Context" is what tells the LLM what to say to you.
The AI models don't remember anything themselves; every time you send a message, you have to send everything that you want it to know to give you a response back. If you set up a character for yourself in whatever program you're using that says "My name is LearningSomeCode. I'm kinda dumb but I talk good", then that needs to be sent EVERY SINGLE TIME you send a message, because if you ever send a message without that, it forgets who you are and won't act on it. In a way, [you can think of LLMs as being stateless](https://www.reddit.com/r/Oobabooga/comments/16qa4cj/comment/k1vy8rr/?context=3).

99% of the time, that's all handled by the program you're using, so you don't have to worry about any of that. But what you DO have to worry about is that there's a limit! Llama models could handle 2048 tokens of context, which was about 1500 words. Llama 2 models handle 4096. So the more context you can handle, the more chat history, character info, instructions, etc you can send.

>***I have no idea where to get LLMs!***

[Huggingface.co](https://Huggingface.co). Click "models" up top. Search there.

>***I have no idea what size LLMs to get!***

It all comes down to your computer. **Models come in sizes**, which we refer to as "b" sizes: 3b, 7b, 13b, 20b, 30b, 33b, 34b, 65b, 70b. Those are the numbers you'll see the most. **The b stands for "billions of parameters"**, and the bigger it is, the smarter your model is. A 70b feels almost like you're talking to a person, where a 3b struggles to maintain a good conversation for long. Don't let that fool you though; some of my favorites are 13b. They are surprisingly good.

A full-size model is 2 bytes per "b". That means a **3b's real size is 6GB**. But thanks to quantizing, you can get a "compressed" version of that file for FAR less.

>***I have no idea what quant to get!***

"Quantized" models come in q2, q3, q4, q5, q6 and q8. The smaller the number, the smaller and dumber the model. This means a 34b q3 is only 17GB! That's a far cry from the full size of 68GB.

Rule of thumb: you are generally better off running a small q of a bigger model than a big q of a smaller model. A 34b q3 is going to, in general, be smarter and better than a 13b q8. [https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2Fr9gd7dn2ksgb1.png%3Fwidth%3D792%26format%3Dpng%26auto%3Dwebp%26s%3Db9dce2e22724665754cc94a22442f2795f594345](https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2Fr9gd7dn2ksgb1.png%3Fwidth%3D792%26format%3Dpng%26auto%3Dwebp%26s%3Db9dce2e22724665754cc94a22442f2795f594345) In the above picture, higher is worse. The higher up you are on that chart, the more "perplexity" the model has; aka, the model acts dumber. As you can see in that picture, the best 13b doesn't come close to the worst 30b.

It's basically a big game of "what can I fit in my video RAM?" The size you're looking for is the biggest "b" you can get and the biggest "q" you can get that fits within your video card's VRAM.

Here's an example: [https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF) This is a 7b. If you scroll down, you can see that TheBloke offers a very helpful chart of what size each is. So even though this is a 7b model, the q3\_K\_L is "compressed" down to a 3.6GB file! Despite that, though, the "Max RAM required" column still says 6.10GB, so don't be fooled! A 4GB card might still struggle with that.
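To make the arithmetic above concrete, here's a tiny sketch (mine, not the guide author's) of the sizing rules of thumb: roughly 2 bytes per parameter at full size, and roughly q-bits/8 bytes per parameter once quantized. Real K-quant files run a bit larger because some tensors stay at higher precision, so treat the quantized figure as a floor.

```python
def full_size_gb(params_b: float) -> float:
    """Full-size (fp16) weights: ~2 bytes per parameter."""
    return params_b * 2

def quant_floor_gb(params_b: float, q_bits: int) -> float:
    """Rough floor for a qN file: N bits per weight, /8 for bytes."""
    return params_b * q_bits / 8

print(full_size_gb(34))        # 68.0 -> the "full size of 68GB" for a 34b
print(quant_floor_gb(34, 3))   # 12.75 -> floor; real q3 K-quants land nearer 17GB
print(quant_floor_gb(7, 3))    # 2.6 -> compare TheBloke's 3.6GB q3_K_L 7b file
```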
>***I have no idea what "K" quants are!***

Additionally, along with the "q"s, you might also see things like "K\_M" or "K\_S". Those are "K" quants. The S stands for "small", the M for "medium" and the L for "large". So a q4\_K\_S is smaller than a q4\_K\_L, and both of those are smaller than a q6.

>***I have no idea what GGML/GGUF/GPTQ/exl2 is!***

Think of them as file types.

* GGML runs on a combination of graphics card and CPU. These are outdated and only older applications run them now.
* GGUF is the newer version of GGML. An upgrade! They run on a combination of graphics card and CPU. It's my favorite type! These run in Llamacpp. Also, if you're on a Mac, you probably want to run these.
* GPTQ runs purely on your video card. It's fast! But you better have enough VRAM. These run in AutoGPTQ or ExLlama.
* exl2 also runs on your video card, and it's mega fast. Not many of them though... These run in ExLlama2!

There are other file types as well, but I see them mentioned less. I usually recommend folks choose GGUF to start with.

>***I have no idea what settings to use when loading the model!***

* Set the **context** or **ctx** to whatever the max is for your model; it will likely be either 2048 or 4096 (check the readme for the model on huggingface to find out).
* Don't mess with rope settings; that's fancy stuff for another day. That includes alpha, rope compress, rope freq base, rope scale base. If you see that stuff, just leave it alone for now. You'll know when you need it.
  * If you're using GGUF, the program you use, like Oobabooga, should automatically set the rope stuff for you!
* Set your **Threads** to the number of CPU cores you have. Look up your computer's processor to find out!
  * On Mac, it might be worth taking the number of cores you have and subtracting 4. They do "Efficiency Cores" and I think there are usually 4 of them; they aren't good for speed for this. So if you have a 20-core CPU, I'd probably put 16 threads.
* For **GPU layers** or **n-gpu-layers** or **ngl** (if using GGML or GGUF):
  * If you're on Mac, any number that isn't 0 is fine; even 1 is fine. It's really just on or off for Mac users. 0 is off, 1+ is on.
  * If you're on Windows or Linux, do like 50 layers and then look at the command prompt when you load the model and it'll tell you how many layers there are. If you can fit the entire model in your GPU VRAM, then put the number of layers it says the model has or higher (it'll just default to the max layers if you go higher). If you can't fit the entire model into your VRAM, start reducing layers until the thing runs right. (There's a minimal sketch of this at the end of the post.)
  * **EDIT:** In a comment below I added a bit more info in answer to someone else. Maybe this will help a bit. [https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/comment/k3ebnpv/](https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/comment/k3ebnpv/?utm_source=share&utm_medium=web2x&context=3)
* If you're on Koboldcpp, don't get hung up on BLAS threads for now. Just leave that blank. I don't know what that does either lol. Once you're up and running, you can go look that up.
* You should be fine ignoring the other checkboxes and fields for now. These all have great uses and value, but you can learn them as you go.

>***I have no idea what flavor model to get!***

Google is your friend lol.
I always google "reddit best 7b llm for \_\_\_\_\_" (replacing \_\_\_\_ with chat, general purpose, coding, math, etc. Trust me, folks love talking about this stuff so you'll find tons of recommendations). Some of them are aptly named, like "CodeLlama" is self explanatory. "WizardMath". But then others like "Orca Mini" (great for general purpose), MAmmoTH (supposedly really good for math), etc are not. &#x200B; &#x200B; &#x200B; >***I have no idea what normal speeds should look like!*** &#x200B; For most of the programs, it should show an output on a command prompt or elsewhere with the Tokens Per Second that you are achieving (T/s). If you hardware is weak, it's not beyond reason that you might be seeing 1-2 tokens per second. If you have great hardware like a 3090, 4090, or a Mac Studio M1/M2 Ultra, then you should be seeing speeds on 13b models of at least 15-20 T/s. If you have great hardware and small models are running at 1-2 T/s, then it's time to hit Google! Something is definitely wrong. &#x200B; &#x200B; &#x200B; >***I have no idea why my model is acting dumb!*** &#x200B; There are a few things that could cause this. * You fiddled with the rope settings or changed the context size. Bad user! Go undo that until you know what they do. * Your presets are set weird. Things like "Temperature", "Top\_K", etc. Explaining these is pretty involved, but most programs should have presets. If they do, look for things like "Deterministic" or "Divine Intellect" and try them. Those are good presets, but not for everything; I just use those to get a baseline. Check around online for more info on what presets are best for what tasks. * Your context is too low; ie you aren't sending a lot of info to the model yet. I know this sounds really weird, but models have this funky thing where if you only send them 500 tokens or less in your prompt, they're straight up stupid. But then they slowly get better over time. [Check out this graph](https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2F2qdj7itsb39b1.png%3Fwidth%3D662%26format%3Dpng%26auto%3Dwebp%26v%3Denabled%26s%3Df9b2f044f59fbad5ad51fefacda0b61f724f12f1), where you can see that at the first couple hundred tokens the "perplexity" (which is bad. lower is better) is WAY high, then it balances out, it goes WAY high again if you go over the limit. &#x200B; &#x200B; &#x200B; Anyhow, hope this gets you started! There's a lot more info out there, but perhaps with this you can at least get your feet off the ground. &#x200B; &#x200B;
2023-10-02T21:31:14
https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/a_starter_guide_for_playing_with_your_own_local_ai/
LearningSomeCode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y95hk
false
null
t3_16y95hk
/r/LocalLLaMA/comments/16y95hk/a_starter_guide_for_playing_with_your_own_local_ai/
false
false
self
1,078
{'enabled': False, 'images': [{'id': 'bsrUrF-k2x1U7RkcRbq1z_kp9wCj2QtGbDYoPoTMjAQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GcC5wkIjMR4P8rCed0aMTkiW__sQYgbZvRAHw4XdUYg.jpg?width=108&crop=smart&auto=webp&s=32d09f84def717508d5d99ff4e76f741f91ef773', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GcC5wkIjMR4P8rCed0aMTkiW__sQYgbZvRAHw4XdUYg.jpg?width=216&crop=smart&auto=webp&s=657b325b695d9da3efa1251a1eade31268c9c72e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GcC5wkIjMR4P8rCed0aMTkiW__sQYgbZvRAHw4XdUYg.jpg?width=320&crop=smart&auto=webp&s=e6d19832329732c2cff421ca604d983d277b8bc0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GcC5wkIjMR4P8rCed0aMTkiW__sQYgbZvRAHw4XdUYg.jpg?width=640&crop=smart&auto=webp&s=15399f68a8429a56d3e442ffad76b9ed23c93d1d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GcC5wkIjMR4P8rCed0aMTkiW__sQYgbZvRAHw4XdUYg.jpg?width=960&crop=smart&auto=webp&s=f480e384e0741a9e07acfb7cfe87d33ff696d3a0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GcC5wkIjMR4P8rCed0aMTkiW__sQYgbZvRAHw4XdUYg.jpg?width=1080&crop=smart&auto=webp&s=c1f572a18c1fc4aa8865061815159ad2fb10720f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GcC5wkIjMR4P8rCed0aMTkiW__sQYgbZvRAHw4XdUYg.jpg?auto=webp&s=c66a93ce298a9aa9ec2d395c50bcbf297981c459', 'width': 1200}, 'variants': {}}]}
Is anyone tracking which coding models are winning?
11
I want to know which models have the most users, but it's a bit difficult to do the math when there are so many different versions of the same models on Hugging Face. Is anyone keeping score on this? Specifically models with a coding use case, or coding-specific models; large models that are trained on code would be fine too.
2023-10-02T20:46:54
https://www.reddit.com/r/LocalLLaMA/comments/16y7wes/is_anyone_tracking_which_coding_models_are_winning/
XPartyUSA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y7wes
false
null
t3_16y7wes
/r/LocalLLaMA/comments/16y7wes/is_anyone_tracking_which_coding_models_are_winning/
false
false
self
11
null
Does anyone know of email clients that allow using local LLMs with them?
8
While improving LLMs is a good thing, making interfaces for the places where they can be used (coding, mail, doc analysis, statistics...) is probably even more useful. Since email is one of the most obvious use cases, what setup did you do?
2023-10-02T20:07:22
https://www.reddit.com/r/LocalLLaMA/comments/16y6uho/anyone_knows_email_clients_that_allows_using/
Divniy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y6uho
false
null
t3_16y6uho
/r/LocalLLaMA/comments/16y6uho/anyone_knows_email_clients_that_allows_using/
false
false
self
8
null
a 7b better than llama 65b now??? Mistral Orca is OUT!
296
Today, we are releasing Mistral-7B-OpenOrca. This is the first 7B model to score better overall than all other models below 30B. We've achieved 98% of Llama2-70B-chat's performance!

Thanks to MistralAI for showing the way with the amazing open release of Mistral-7B! So great to have this much capability ready for home GPUs. Also thanks to [@winglian](https://twitter.com/winglian), [@Teknium1](https://twitter.com/Teknium1), [@TheBlokeAI](https://twitter.com/TheBlokeAI), and all Alignment Lab AI friends! We're so close to beating a model 10X larger now!

Check out our [huggingface space](https://huggingface.co/spaces/Open-Orca/Mistral-7B-OpenOrca) to try our model live on fast GPUs in the browser right now!

We have used our own [**OpenOrca dataset**](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune on top of [**Mistral 7B**](https://huggingface.co/mistralai/Mistral-7B-v0.1). This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [**Orca Paper**](https://arxiv.org/abs/2306.02707). We use [**OpenChat**](https://huggingface.co/openchat) packing, trained with [**Axolotl**](https://github.com/OpenAccess-AI-Collective/axolotl). This release is trained on a curated filtered subset of most of our GPT-4 augmented data. It is the same subset of our data as was used in our [**OpenOrcaxOpenChat-Preview2-13B model**](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).

**HF Leaderboard evals place this model as #2 for all models smaller than 30B at release time, outperforming all but one 13B model.**

This release provides a first: a fully open model with class-breaking performance, capable of running fully accelerated on even moderate consumer GPUs. Our thanks to the Mistral team for leading the way here. We affectionately codename this model: "*MistralOrca*"

We are in-process with training more models, so keep a lookout on our org for releases coming soon with exciting partners. We will also give sneak-peek announcements on our server, which you can find here: [**https://AlignmentLab.ai**](https://alignmentlab.ai/) or check the OpenAccess AI Collective server for more information about the Axolotl trainer here:
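For anyone wanting to try it locally rather than in the space, a hedged sketch with transformers follows. The repo name comes from the post; the ChatML-style prompt template is an assumption to verify against the model card, and `device_map="auto"` assumes the accelerate package is installed.

```python
# Sketch of loading and prompting the release; template is assumed
# ChatML-style (check the model card before relying on it).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/Mistral-7B-OpenOrca"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = ("<|im_start|>system\nYou are MistralOrca.<|im_end|>\n"
          "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
          "<|im_start|>assistant\n")
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```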
2023-10-02T20:03:45
https://www.reddit.com/r/LocalLLaMA/comments/16y6r3x/a_7b_better_than_llama_65b_now_mistral_orca_is_out/
Alignment-Lab-AI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y6r3x
false
null
t3_16y6r3x
/r/LocalLLaMA/comments/16y6r3x/a_7b_better_than_llama_65b_now_mistral_orca_is_out/
false
false
self
296
{'enabled': False, 'images': [{'id': 'eneSk2GzNn2l4kiGIcaA958RZjxMFPpZ0GWE6rnyRys', 'resolutions': [], 'source': {'height': 48, 'url': 'https://external-preview.redd.it/WRugcO4eZ-aAtaoEtR4fMAuA3CtR7HuD0JQMMbqNZR0.jpg?auto=webp&s=ebad8cee27692be41aa40e1b939263c4cac6b3e5', 'width': 48}, 'variants': {}}]}
What's a strategy for encoding the list of documents the LLM was given?
2
Title. I've done some research online, but I can't really find any guides. Ideally I'd be able to ask the model something like "What documents are at your disposal?" and it would list them, or at least a few of them. I'm only using about 100 PDFs.
2023-10-02T19:59:51
https://www.reddit.com/r/LocalLLaMA/comments/16y6n4u/whats_a_strategy_for_encoding_the_list_of/
TheRealSwagrid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y6n4u
false
null
t3_16y6n4u
/r/LocalLLaMA/comments/16y6n4u/whats_a_strategy_for_encoding_the_list_of/
false
false
self
2
null
One-shot / few-shot prompt just returning the prompt back to me?
1
I am trying to get a summary out of the model. I have given the LLM either one-shot or few-shot examples, since performance isn't great without them, but I'm finding that I just get a summary of the prompt itself back. I've structured it as example input, then example output. Is the issue the length of the prompt? It's currently c. 500 tokens, as it contains a full example. Are there any techniques to avoid this?
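One common fix (my sketch, not from the post) is to make the example/answer boundary unambiguous: label every part with the same delimiter and cut generation when the model starts a new example. The delimiters here are arbitrary illustrative choices.

```python
# Build a labeled few-shot prompt so the model knows where its answer
# starts, then stop generation at the next "### Text:" label.

EXAMPLE = """### Text:
{text}
### Summary:
{summary}
"""

def build_prompt(shots, new_text):
    parts = [EXAMPLE.format(text=t, summary=s) for t, s in shots]
    parts.append(f"### Text:\n{new_text}\n### Summary:\n")
    return "".join(parts)

prompt = build_prompt(
    [("The meeting moved to Tuesday.", "Meeting rescheduled to Tuesday.")],
    "Quarterly revenue rose 12% on strong cloud sales.",
)
# Pass stop=["### Text:"] (or your backend's equivalent) so generation
# ends before the model invents another example.
print(prompt)
```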
2023-10-02T19:54:35
https://www.reddit.com/r/LocalLLaMA/comments/16y6icc/oneshot_fewshot_prompt_just_returning_the_prompt/
BeigeSponge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y6icc
false
null
t3_16y6icc
/r/LocalLLaMA/comments/16y6icc/oneshot_fewshot_prompt_just_returning_the_prompt/
false
false
self
1
null
Mistral-7B-OpenOrca Released
155
> We're releasing Mistral-7B-OpenOrca! You can call it MistralOrca too: [https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) This time we have a 7B model that places above all but one of all models below 30B! (The other is a 13B also trained on our data.) *We now have a 7B model with 98% of Llama2-70B-chat's performance on the HuggingFace leaderboard!* This release is a full finetune of Mistral-7B base on our OpenOrca dataset for 4 epochs.

Umm...

> 7B model with 98% of Llama2-70B-chat's performance on the HuggingFace leaderboard

Gotta try this one.
2023-10-02T19:22:06
https://www.reddit.com/r/LocalLLaMA/comments/16y5nq8/mistral7bopenorca_released/
ab2377
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y5nq8
false
null
t3_16y5nq8
/r/LocalLLaMA/comments/16y5nq8/mistral7bopenorca_released/
false
false
self
155
{'enabled': False, 'images': [{'id': '1-UMy28d5RaIiKP9qwQPG23Zi1isvkTG_pqmCwh1HLQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/344T851aeNtz-H3kxCBtnrFQDu7B_yqKTRq-WMiwWDI.jpg?width=108&crop=smart&auto=webp&s=1e4612d852d2136528e4e4705f9df9b254ada76e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/344T851aeNtz-H3kxCBtnrFQDu7B_yqKTRq-WMiwWDI.jpg?width=216&crop=smart&auto=webp&s=ded51e13ea34763ab5b2206c1d7d70a282cfb224', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/344T851aeNtz-H3kxCBtnrFQDu7B_yqKTRq-WMiwWDI.jpg?width=320&crop=smart&auto=webp&s=f14df47abc4c8243250ed442607d0be31da890ee', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/344T851aeNtz-H3kxCBtnrFQDu7B_yqKTRq-WMiwWDI.jpg?width=640&crop=smart&auto=webp&s=8afddbce1458688e5e2d03c733a5cccc65adeb58', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/344T851aeNtz-H3kxCBtnrFQDu7B_yqKTRq-WMiwWDI.jpg?width=960&crop=smart&auto=webp&s=44afbbfb605c47e28bbcc9b0b758e9c92a006908', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/344T851aeNtz-H3kxCBtnrFQDu7B_yqKTRq-WMiwWDI.jpg?width=1080&crop=smart&auto=webp&s=c13deea533e926519ed1d304de6951aac4bfa973', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/344T851aeNtz-H3kxCBtnrFQDu7B_yqKTRq-WMiwWDI.jpg?auto=webp&s=84e3064e48ea9b5d51e352551578effc992ac747', 'width': 1200}, 'variants': {}}]}
Which Smartphone Reigns Supreme: Pixel 7 or S23 Ultra for Running 13b Llama Models?
1
[removed]
2023-10-02T19:16:54
https://www.reddit.com/r/LocalLLaMA/comments/16y5ir2/which_smartphone_reigns_supreme_pixel_7_or_s23/
Butt-Fingers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y5ir2
false
null
t3_16y5ir2
/r/LocalLLaMA/comments/16y5ir2/which_smartphone_reigns_supreme_pixel_7_or_s23/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OFWBx7P-O8DRHEGaphxDI0ue7l3JmaUT8BlBDzrE0xw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qbCPxwdwdBy0u97AlbgGdsQ-2XNYttc2REdLDtJy7bY.jpg?width=108&crop=smart&auto=webp&s=11c45c099d19eb322153521f359b6dc007756760', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qbCPxwdwdBy0u97AlbgGdsQ-2XNYttc2REdLDtJy7bY.jpg?width=216&crop=smart&auto=webp&s=db9325c12b64bb828b11fa8130365003e2fff467', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qbCPxwdwdBy0u97AlbgGdsQ-2XNYttc2REdLDtJy7bY.jpg?width=320&crop=smart&auto=webp&s=246782a3edd503c2d368e4a265517dc6cb2c1516', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qbCPxwdwdBy0u97AlbgGdsQ-2XNYttc2REdLDtJy7bY.jpg?width=640&crop=smart&auto=webp&s=cb0b65a002a969b9c7b1507b05765745ea1ec904', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qbCPxwdwdBy0u97AlbgGdsQ-2XNYttc2REdLDtJy7bY.jpg?width=960&crop=smart&auto=webp&s=e6a84e208ac5a563e0de6b865e5585f42bade52d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qbCPxwdwdBy0u97AlbgGdsQ-2XNYttc2REdLDtJy7bY.jpg?width=1080&crop=smart&auto=webp&s=5428fbdd2d3bef9eb7ff6c015dbf20960e4e10ef', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qbCPxwdwdBy0u97AlbgGdsQ-2XNYttc2REdLDtJy7bY.jpg?auto=webp&s=15e23c3aa228863dafb0c7a3a386c75668d3d044', 'width': 1200}, 'variants': {}}]}
Best approach to create a large number of text descriptions/summaries
1
Hello. I have been using various local LMs for basic QA stuff. Now I need to use an LLM to generate a largish number (50,000) of text descriptions. I have a bunch of texts that are case notes, fiction, whitepapers, etc., and I want to generate short descriptions of them. What is the best approach to do this?

I have played around with various models using llama.cpp and text-generation-webui, but the most I am getting is about 3 tokens/second. I need each description to be about 20 words or so. Are local LLMs the best way to do this, or should I use some other approach?

Thank you
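For scale (my arithmetic, not from the post): at ~3 tokens/s and ~30 generated tokens per 20-word description, each item costs about 10 seconds, so 50,000 items is roughly 140 hours of single-stream inference. A minimal loop sketch follows; paths, model file, and prompt wording are all illustrative, and timing it on a 100-item sample first would settle the local-vs-other question quickly.

```python
# Batch-describe texts with llama-cpp-python, one per input line,
# writing id<TAB>description so the job can be resumed or sharded.
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

def describe(text: str) -> str:
    prompt = (f"Write a one-sentence, roughly 20-word description "
              f"of this text:\n{text}\nDescription:")
    out = llm(prompt, max_tokens=48, stop=["\n"])
    return out["choices"][0]["text"].strip()

with open("descriptions.tsv", "w") as f:
    for i, doc in enumerate(open("texts.txt")):  # one text per line
        f.write(f"{i}\t{describe(doc.strip())}\n")
```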
2023-10-02T18:39:29
https://www.reddit.com/r/LocalLLaMA/comments/16y4k58/best_approach_to_create_a_large_number_of_text/
bigbuttsandsteampunk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y4k58
false
null
t3_16y4k58
/r/LocalLLaMA/comments/16y4k58/best_approach_to_create_a_large_number_of_text/
false
false
self
1
null
Open Source multimodality
13
Multimodality has finally entered the game. OpenAI has announced that multimodality is finally coming to ChatGPT, giving us access to GPT-4's ability to see and analyse images alongside the text prompt (as well as being connected to DALL-E 3 for generating images, and being able to re-feed those images into the context of the conversation).

We've had LLaVA for a bit now, but open source multimodality hasn't evolved much after that. And it seems like GPT-4's multimodality works at a much lower level than the "multimodality" attempts we've seen thus far, making it much truer multimodality, as well as having much more accurate and precise "sight". And Google's Gemini model is supposed to drop soon too, reportedly being a truly multimodal AI, having been trained to be multimodal from the ground up.

So it seems like we've entered a new and exciting era, marking a very powerful new step in the capabilities of AI models. It's no longer text-only from now on. So I'm wondering how you guys think this will affect the open source community here. How fast do you think we'll start getting these advancements here, and start getting our hands on truly multimodal models that are built from the ground up to completely understand and generate both images and text, and maybe even other things like audio too? Do you guys think this will make text-only models obsolete fast? And how will this change things when we all start getting access to uncensored personal multimodal AI which we can run locally on our computers?

What do you guys think?

**(I'm sure people will say "these things have barely come out, have some patience", but it's obviously hard not to get excited by these technological advancements. I don't expect anyone to read the future and give me an exact timeframe of when these things will come out open source; I mainly just wanna hear your thoughts.)**
2023-10-02T18:28:44
https://www.reddit.com/r/LocalLLaMA/comments/16y4adf/open_source_multimodality/
PenguinTheOrgalorg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y4adf
false
null
t3_16y4adf
/r/LocalLLaMA/comments/16y4adf/open_source_multimodality/
false
false
self
13
null
Is there a place to share and explore the generations of different LLM models?
1
Sort of like there is for text to image generation
2023-10-02T16:39:34
https://www.reddit.com/r/LocalLLaMA/comments/16y1gqq/is_there_a_place_to_share_and_explore_the/
crischu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y1gqq
false
null
t3_16y1gqq
/r/LocalLLaMA/comments/16y1gqq/is_there_a_place_to_share_and_explore_the/
false
false
self
1
null
New research: Teaching LLMs to use tools helps them solve way more complex math problems.
1
When trying to get language models to solve complex math problems, researchers kept running into limits. Models like GPT-3 and ChatGPT still struggle with advanced algebra, calculus, and geometry questions. The math is just too abstract and symbol-heavy for them.

To break through this barrier, researchers from Tsinghua University and Microsoft taught models to combine natural language reasoning with calling external math tools. The key is their new "tool-integrated reasoning" format. Models generate a natural language plan first, then write code to invoke tools like SymPy to solve equations. They take the output results and continue verbal reasoning. By interleaving natural language and symbolic computation, they get the best of both worlds: semantic understanding from language models and rigorous math from tools.

They trained versions of Meta's open-source LLaMA model this way, producing their Tool-Integrated Reasoning Agent (TORA). They present some strong results:

* In evaluations on 10 math datasets, TORA substantially outperformed prior state-of-the-art methods, achieving 13-19% higher accuracy on average.
* On one competition test, TORA-7B scored 40% accuracy, beating the previous best model by 22 percentage points.

This demonstrates that integrating tools directly into the reasoning process can significantly enhance mathematical capabilities, even for large models like GPT-4. However, tough problems involving geometry and advanced algebra remain. New techniques for symbolic reasoning and spatial understanding will likely be needed to push further. Overall, though, tool integration seems a promising path to improving reasoning skills. Applying this to other domains like logic and programming could also be impactful.

**TLDR:** Teaching language models to use math tools helps them solve way more complex problems.

[**Full Paper Summary**](https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/) [**arXiv Link**](https://arxiv.org/pdf/2309.17452.pdf)
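To make the interleaved format concrete, here's a toy illustration: a short plan in natural language, a code step that calls a math tool (SymPy, as named in the post), and reasoning that resumes with the tool's output. This is a hand-written stand-in for what the model would emit, not TORA itself.

```python
# Toy "tool-integrated reasoning" step using SymPy as the tool.
from sympy import symbols, solve

# Plan (the natural-language step the model would generate):
# To find where the parabola crosses the x-axis, solve x^2 - 5x + 6 = 0.

x = symbols("x")
roots = solve(x**2 - 5*x + 6, x)  # tool call
print(roots)                      # [2, 3]

# Continued reasoning (the model resumes with the tool output):
# The roots are x = 2 and x = 3, so the parabola crosses the x-axis there.
```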
2023-10-02T16:31:26
https://www.reddit.com/r/LocalLLaMA/comments/16y19l2/new_research_teaching_llms_to_use_tools_helps/
Successful-Western27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y19l2
false
null
t3_16y19l2
/r/LocalLLaMA/comments/16y19l2/new_research_teaching_llms_to_use_tools_helps/
false
false
self
1
{'enabled': False, 'images': [{'id': 'fWGniop6U6MRWd1GQhlxWVjwAG2THH28YNVlum8siUI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6RrRC9dCtxEUQTYGlOcVjpt04QJNoBRTOexBh_zqxM8.jpg?width=108&crop=smart&auto=webp&s=c522628e7ffda67413755dd55eac01da13c9a1e7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/6RrRC9dCtxEUQTYGlOcVjpt04QJNoBRTOexBh_zqxM8.jpg?width=216&crop=smart&auto=webp&s=abe3f3fc0871ae72b7c61d6ddaabf351bacda9fb', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/6RrRC9dCtxEUQTYGlOcVjpt04QJNoBRTOexBh_zqxM8.jpg?width=320&crop=smart&auto=webp&s=91a8f51db993035b5fcacd057ec41bc8f910cd67', 'width': 320}], 'source': {'height': 315, 'url': 'https://external-preview.redd.it/6RrRC9dCtxEUQTYGlOcVjpt04QJNoBRTOexBh_zqxM8.jpg?auto=webp&s=00ee8bc48e8836cfe42d39e02668707ca8137e71', 'width': 562}, 'variants': {}}]}
ELI5, training a model for localllama
2
I've read maybe hundreds of times how to train models, but I'd love to get an IRL use case and real examples of how and what you would train llama for. It's such an abstract topic, maybe b/c I've never trained a model, so I'd like a good ELI5. Bonus question: isn't it a lot of work, and can it be automated?
2023-10-02T15:45:05
https://www.reddit.com/r/LocalLLaMA/comments/16y02ts/eli5_training_a_model_for_localllama/
upk27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16y02ts
false
null
t3_16y02ts
/r/LocalLLaMA/comments/16y02ts/eli5_training_a_model_for_localllama/
false
false
self
2
null
StreamingLLM — a simple and efficient framework that enables LLMs to handle unlimited texts without fine-tuning
274
From researchers at Meta and MIT; the paper came out a couple of days ago, but the chatbot demo and code were recently released.

**edit**: The title of this post was taken straight from the paper and wasn't meant to be misleading. I thought the paper was clear about it, but if you're unsure what StreamingLLM is for, they added a simple clarification on Github. TL;DR: This doesn't mean infinite context, and this can't be used to summarize books. This is for more efficiency, so you don't need a cache reset when handling unlimited texts.

Paper: [http://arxiv.org/abs/2309.17453](http://arxiv.org/abs/2309.17453)

Code: [https://github.com/mit-han-lab/streaming-llm](https://github.com/mit-han-lab/streaming-llm)

Abstract:

>Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length. Window attention, where only the most recent KVs are cached, is a natural approach -- but we show that it fails when the text length surpasses the cache size. We observe an interesting phenomenon, namely attention sink, that keeping the KV of initial tokens will largely recover the performance of window attention. In this paper, we first demonstrate that the emergence of attention sink is due to the strong attention scores towards initial tokens as a "sink" even if they are not semantically important. Based on the above analysis, we introduce StreamingLLM, an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence lengths without any fine-tuning. We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more. In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment. In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup. Code and datasets are provided in the [link](https://github.com/mit-han-lab/streaming-llm).

Video example: https://reddit.com/link/16xzxwv/video/c7qx2mgx6trb1/player
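A toy sketch of the eviction policy the abstract describes: retain the KV entries for the first few "attention sink" tokens plus a recent window, and drop everything in between. The real implementation operates on per-layer KV tensors (see the linked repo); this just shows the indexing idea, with the sink and window sizes as illustrative values.

```python
# Which token positions keep their KV cache entries under the
# attention-sink policy: first n_sink tokens + last `window` tokens.

def streaming_keep(n_tokens: int, n_sink: int = 4, window: int = 1020):
    """Return the token positions whose KV entries are retained."""
    if n_tokens <= n_sink + window:
        return list(range(n_tokens))
    return list(range(n_sink)) + list(range(n_tokens - window, n_tokens))

keep = streaming_keep(5000)
print(keep[:6], "...", keep[-1])  # [0, 1, 2, 3, 3980, 3981] ... 4999
```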
2023-10-02T15:40:01
https://www.reddit.com/r/LocalLLaMA/comments/16xzxwv/streamingllm_a_simple_and_efficient_framework/
llamaShill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xzxwv
false
null
t3_16xzxwv
/r/LocalLLaMA/comments/16xzxwv/streamingllm_a_simple_and_efficient_framework/
false
false
https://b.thumbs.redditm…smZwaAkbACMw.jpg
274
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
StreamingLLM — a simple and efficient framework that enables LLMs to handle unlimited texts without fine-tuning
1
Paper: [http://arxiv.org/abs/2309.17453](http://arxiv.org/abs/2309.17453)

Code: [https://github.com/mit-han-lab/streaming-llm](https://github.com/mit-han-lab/streaming-llm)

Abstract:

>Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length. Window attention, where only the most recent KVs are cached, is a natural approach -- but we show that it fails when the text length surpasses the cache size. We observe an interesting phenomenon, namely attention sink, that keeping the KV of initial tokens will largely recover the performance of window attention. In this paper, we first demonstrate that the emergence of attention sink is due to the strong attention scores towards initial tokens as a "sink" even if they are not semantically important. Based on the above analysis, we introduce StreamingLLM, an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence lengths without any fine-tuning. We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more. In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment. In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup. Code and datasets are provided in the [link](https://github.com/mit-han-lab/streaming-llm).

Video example: https://reddit.com/link/16xzper/video/jhfswqij5trb1/player
2023-10-02T15:31:06
https://www.reddit.com/r/LocalLLaMA/comments/16xzper/streamingllm_a_simple_and_efficient_framework/
More_Code3540
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xzper
false
null
t3_16xzper
/r/LocalLLaMA/comments/16xzper/streamingllm_a_simple_and_efficient_framework/
false
false
https://b.thumbs.redditm…smZwaAkbACMw.jpg
1
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
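The abstract above describes the eviction policy in words: keep the KV entries of a few initial "attention sink" tokens plus a sliding window of the most recent tokens. A minimal sketch of that policy, with arbitrary sink and window sizes (not the authors' implementation, which also re-assigns positions inside the cache):

```python
# StreamingLLM-style KV cache eviction: retain the first `n_sink`
# positions (attention sinks) plus the most recent `window` positions,
# evicting everything in between. Illustrative only; a real cache
# evicts per-layer (K, V) tensors, not integers.
def evict(cache: list, n_sink: int = 4, window: int = 1020) -> list:
    if len(cache) <= n_sink + window:
        return cache
    return cache[:n_sink] + cache[len(cache) - window:]

cache = []
for pos in range(10_000):          # simulate streaming 10k tokens
    cache.append(pos)              # stands in for this position's KV pair
    cache = evict(cache)

print(len(cache))   # 1024 entries, regardless of stream length
print(cache[:4])    # [0, 1, 2, 3] -- the sink tokens survive
print(cache[-3:])   # [9997, 9998, 9999] -- the recent window
```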
StreamingLLM — a simple and efficient framework that enables LLMs to handle unlimited texts without fine-tuning
1
[removed]
2023-10-02T15:17:23
https://www.reddit.com/r/LocalLLaMA/comments/16xzd2x/streamingllm_a_simple_and_efficient_framework/
llamaShill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xzd2x
false
null
t3_16xzd2x
/r/LocalLLaMA/comments/16xzd2x/streamingllm_a_simple_and_efficient_framework/
false
false
self
1
null
Anyone Know any Good Orca/Phi datasets, or other good datasets for text gen?
4
Hi everyone, As I mentioned before about Project NanoPhi, one of my problems is that I need 2.4B tokens of data for a properly chinchilla-scaled GPT-2 (124M), and I only have 640M tokens; as the TinyLlama project has shown me, stuffing in way more tokens works pretty well! I'm only using SciPhi and tiny-textbooks right now. I have a couple of datasets I plan to expand into, including the Platypus dataset, Open-Orca, and some math ones, maybe some roleplay. Do you know any other good ones I should be adding? I've cu
2023-10-02T14:00:37
https://www.reddit.com/r/LocalLLaMA/comments/16xxeny/anyone_know_any_good_orcaphi_datasets_or_other/
vatsadev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xxeny
false
null
t3_16xxeny
/r/LocalLLaMA/comments/16xxeny/anyone_know_any_good_orcaphi_datasets_or_other/
false
false
self
4
null
Deploy Llama 2 Chatbot Using AWS SageMaker, Hugging Face, and Terraform
1
2023-10-02T13:50:09
https://www.joumenharzli.com/blog/deploy-your-own-private-llm-chatbot-using-aws-sagemaker-huggingface-terraform/
CompteMta3Glag
joumenharzli.com
1970-01-01T00:00:00
0
{}
16xx5w7
false
null
t3_16xx5w7
/r/LocalLLaMA/comments/16xx5w7/deploy_llama_2_chatbot_using_aws_sagemaker/
false
false
default
1
null
Should I use LlamaIndex or txtai for local RAG?
39
I want to build my **own local RAG** system that can integrate into **my Obsidian knowledge base** and is ideally **multi-modal**. To achieve this I found the following two frameworks that could handle the retrieval part: * [LlamaIndex](https://github.com/jerryjliu/llama_index) * [txtai](https://github.com/neuml/txtai) LlamaIndex has 4 times as many stars as txtai, but besides that, both seem to have more or less the same features and similarly extensive documentation. So if you have any experience with these frameworks, I wonder what the **main differences** are and which would be the **better fit for my use case** in your opinion. (I am also open to other alternatives.) Minimal retrieval sketches for both frameworks follow this record.
2023-10-02T13:23:29
https://www.reddit.com/r/LocalLLaMA/comments/16xwk3k/should_i_use_llamaindex_or_txtai_for_local_rag/
Dry-Advance52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xwk3k
false
null
t3_16xwk3k
/r/LocalLLaMA/comments/16xwk3k/should_i_use_llamaindex_or_txtai_for_local_rag/
false
false
self
39
{'enabled': False, 'images': [{'id': 'Y1slAfFnUJ7S0lr5MnG0QiZaynm0DJmQxgNREYkdEq8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YSnAd3Kgc7QCgfE5w4ZZEOwVp9i-AbqFi9kQzAjndic.jpg?width=108&crop=smart&auto=webp&s=d0948c0e1b37a9c06cdeecd175a0be217a463652', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YSnAd3Kgc7QCgfE5w4ZZEOwVp9i-AbqFi9kQzAjndic.jpg?width=216&crop=smart&auto=webp&s=6ff21974c2ab8bba48f03b9c75f947008dda6d95', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YSnAd3Kgc7QCgfE5w4ZZEOwVp9i-AbqFi9kQzAjndic.jpg?width=320&crop=smart&auto=webp&s=59df26352e974ed2535db71fae6f5e5dec25a9a7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YSnAd3Kgc7QCgfE5w4ZZEOwVp9i-AbqFi9kQzAjndic.jpg?width=640&crop=smart&auto=webp&s=54329787b855aaed645b57f89712553dec2daf51', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YSnAd3Kgc7QCgfE5w4ZZEOwVp9i-AbqFi9kQzAjndic.jpg?width=960&crop=smart&auto=webp&s=8fbda8ca80761e3d420dd0ebee262b7a33c623c5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YSnAd3Kgc7QCgfE5w4ZZEOwVp9i-AbqFi9kQzAjndic.jpg?width=1080&crop=smart&auto=webp&s=07407eb5de2470013f5ef4421f11909d563c2a9f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YSnAd3Kgc7QCgfE5w4ZZEOwVp9i-AbqFi9kQzAjndic.jpg?auto=webp&s=2caa1cf2bd96e5b1962ad25c4d8e97091e00127f', 'width': 1200}, 'variants': {}}]}
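For anyone weighing the two frameworks in the post above, here are minimal retrieval sketches for both. These assume the late-2023 APIs (import paths have shifted in later releases); the vault path, model name, and note texts are placeholders, and LlamaIndex additionally expects an LLM (OpenAI by default) for the generation step.

```python
# txtai: build a semantic index over note snippets and search it.
from txtai.embeddings import Embeddings

notes = ["RAG combines retrieval with generation.",
         "Obsidian stores notes as plain Markdown files."]
embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2",
                         "content": True})
embeddings.index([(i, text, None) for i, text in enumerate(notes)])
print(embeddings.search("what did I write about retrieval?", 1))

# LlamaIndex: ingest a folder of Markdown notes and query it.
from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("path/to/obsidian/vault").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("what did I write about retrieval?"))
```

The practical difference in these sketches: txtai stops at retrieval unless you add a pipeline on top, while LlamaIndex bundles the retrieval-plus-synthesis step by default.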
What's the best/practical use you've found for (Llama 2) 7B small models?
61
Just wondering if the small models (7B or even 13B) have any practical use as of yet... even if it's in a small supporting capacity. Thanks for sharing your thoughts.
2023-10-02T12:51:08
https://www.reddit.com/r/LocalLLaMA/comments/16xvug2/whats_the_bestpractical_use_youve_found_for_llama/
nuketro0p3r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xvug2
false
null
t3_16xvug2
/r/LocalLLaMA/comments/16xvug2/whats_the_bestpractical_use_youve_found_for_llama/
false
false
self
61
null
Too good to be true? Removing token limit
1
[removed]
2023-10-02T12:33:26
https://www.reddit.com/r/LocalLLaMA/comments/16xvhay/too_good_to_be_true_removing_token_limit/
cvdbdo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xvhay
false
null
t3_16xvhay
/r/LocalLLaMA/comments/16xvhay/too_good_to_be_true_removing_token_limit/
false
false
self
1
{'enabled': False, 'images': [{'id': 'i8Ufu2PQki-9WbJsUUP08KctYl0SkjZLnNoyIvFItHE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2YWT0jY7V7F30B6M_N6YWy_0N_Rd-pnCmk0oEHwAQuc.jpg?width=108&crop=smart&auto=webp&s=ae92237272ee7c9427b46804621ad5a4a346eec8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2YWT0jY7V7F30B6M_N6YWy_0N_Rd-pnCmk0oEHwAQuc.jpg?width=216&crop=smart&auto=webp&s=cff44ce5b777388a3cfe0a25e8b114978eacfc44', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2YWT0jY7V7F30B6M_N6YWy_0N_Rd-pnCmk0oEHwAQuc.jpg?width=320&crop=smart&auto=webp&s=e2461185d04881ec88d750c0279edbb2b83f9039', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2YWT0jY7V7F30B6M_N6YWy_0N_Rd-pnCmk0oEHwAQuc.jpg?width=640&crop=smart&auto=webp&s=3f735714c134df63c219d7910f3fb86ef932f494', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2YWT0jY7V7F30B6M_N6YWy_0N_Rd-pnCmk0oEHwAQuc.jpg?width=960&crop=smart&auto=webp&s=94a1d11a1bb4abbaa406b3de6bff3476296ed117', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2YWT0jY7V7F30B6M_N6YWy_0N_Rd-pnCmk0oEHwAQuc.jpg?width=1080&crop=smart&auto=webp&s=2d5848c2fc9b484ea176de2349fc8a61654da8a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2YWT0jY7V7F30B6M_N6YWy_0N_Rd-pnCmk0oEHwAQuc.jpg?auto=webp&s=86a9c3c6d7c64652d68f3ef9ed2956d5fae51185', 'width': 1200}, 'variants': {}}]}
Related Communities
1
[removed]
2023-10-02T11:33:14
https://www.reddit.com/r/LocalLLaMA/comments/16xu8ic/related_communities/
Calm_List3479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xu8ic
false
null
t3_16xu8ic
/r/LocalLLaMA/comments/16xu8ic/related_communities/
false
false
self
1
null
What is better 4bit full fine tuning or lora?
1
Hi guys, I've got another newbie question: according to my understanding, LoRA freezes the parameters of a model and introduces its "own" parameters for the training; afterwards the weights are combined somehow. (Not a perfect description, but I think the idea is correct.) Now I wonder about the differences compared to a 4-bit full fine-tune. Which one is faster when I use 7B or 13B models? Is LoRA only used for very big models? What is the difference in performance? I would like to train a model and create an expert in PCB soldering, so the context is pretty important in my case. So which method handles context better? Thanks in advance! (A minimal LoRA configuration sketch follows this record.)
2023-10-02T11:23:04
https://www.reddit.com/r/LocalLLaMA/comments/16xu1ba/what_is_better_4bit_full_fine_tuning_or_lora/
Enkay55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xu1ba
false
null
t3_16xu1ba
/r/LocalLLaMA/comments/16xu1ba/what_is_better_4bit_full_fine_tuning_or_lora/
false
false
self
1
null
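To make the LoRA half of the question above concrete, here is a minimal sketch of a QLoRA-style setup with the peft and transformers libraries: the 4-bit base model is frozen and only small adapter matrices train. Model name and hyperparameters are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"   # placeholder base model
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             quantization_config=bnb,
                                             device_map="auto")

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],  # attention projections
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # typically well under 1% of the base
```

Because so few parameters train, LoRA is usually faster and far lighter on VRAM than a full fine-tune at 7B/13B; for injecting domain context like PCB soldering, the training data tends to matter more than which of the two methods you pick.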
Best way to distill information from a long HTML document?
3
Hi, I am looking for some pointers on distilling information from a single, large, but quite structured (to a human) **HTML** document containing some event schedules\*. I am using langchain's WebBaseLoader for data ingestion, RecursiveCharacterTextSplitter for chunking, and Chroma DB as the vector store. Retrieval is done using ConversationalRetrievalChain. The problems I am facing are that the title of an event and its schedule information do not sit next to each other, and html2text essentially destroys the event-schedule relationships (it ends up with all the events jammed together in one long paragraph and the schedules scattered everywhere). The chunking of text then separates the events and schedule information even further, so the answers produced, if any, were limited to whatever fit in the LLM's maximum context. My questions are: have you had better experiences dealing with this kind of problem involving many large HTML tables in a single long HTML document? What is the best way to have the LLM parse and understand such a document in order to extract information, not from part of it, but from the whole thing, to compose the answer? (A table-pairing extraction sketch follows this record.) \* A very typical example is [https://www.royalalberthall.com/tickets/](https://www.royalalberthall.com/tickets/), with the exception that the title and the schedule are farther apart (many tables, each with many rows). I would like to make a query to the LLM such as: "List all classical music concerts in Oct 2023 in tabular form containing only the titles and the date-time of the concert. For example: Title | Date Easy listening classical | 28 Oct 18:30 Four seasons - Winter | 30 Oct 19:00"
2023-10-02T10:58:12
https://www.reddit.com/r/LocalLLaMA/comments/16xtjq0/best_way_to_distill_information_from_a_long_html/
edwios
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xtjq0
false
null
t3_16xtjq0
/r/LocalLLaMA/comments/16xtjq0/best_way_to_distill_information_from_a_long_html/
false
false
self
3
null
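One approach to the problem above is to parse the DOM yourself and rebuild each event-schedule pair as a self-contained record before chunking, instead of letting html2text flatten the page. A sketch with BeautifulSoup; the h2/table selectors are assumptions that would need adapting to the real page structure:

```python
# Pair each event heading with the schedule table that follows it,
# emitting one self-contained text record per event.
from bs4 import BeautifulSoup

def extract_events(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for heading in soup.find_all("h2"):          # assumed: event title tag
        table = heading.find_next("table")       # nearest schedule table
        if table is None:
            continue
        rows = [" | ".join(cell.get_text(strip=True)
                           for cell in tr.find_all(["td", "th"]))
                for tr in table.find_all("tr")]
        records.append("Event: " + heading.get_text(strip=True)
                       + "\n" + "\n".join(rows))
    return records
```

Each record can then be embedded as its own chunk, so a title and its schedule never get split apart, and tabular answers can be composed by iterating over records rather than stuffing the whole document into one context.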
llama-cpp on T4 google colab, Unable to use GPU
2
In Google Colab I have access to a T4 GPU for running the following code, and I selected T4 from the runtime options, but the GPU is not being used. What could be the reason? (A GPU-offload sketch follows this record.)

```python
# Build llama-cpp-python, download a GGUF model, and run it via LangChain.
!CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
!pip install langchain
!wget https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_0.gguf

from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

template = "Question: {question}\nAnswer:"     # added: undefined in the original paste
question = "What is a large language model?"  # added: undefined in the original paste

prompt = PromptTemplate(template=template, input_variables=["question"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

llm = LlamaCpp(
    model_path="/content/llama-2-7b-chat.Q5_0.gguf",
    temperature=0.75,
    max_tokens=500,
    top_p=1,
    callback_manager=callback_manager,
    verbose=True,
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.run(question)
```
2023-10-02T10:20:11
https://www.reddit.com/r/LocalLLaMA/comments/16xswej/llamacpp_on_t4_google_colab_unable_to_use_gpu/
Chemical_Recover_995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xswej
false
null
t3_16xswej
/r/LocalLLaMA/comments/16xswej/llamacpp_on_t4_google_colab_unable_to_use_gpu/
false
false
self
2
null
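Not an official diagnosis, but the install command in the post builds llama-cpp-python against OpenBLAS, which is a CPU backend, and the LlamaCpp call never offloads any layers. A sketch of the GPU path, assuming a CUDA runtime on the T4; the layer count is a typical value for a 7B model:

```python
# Rebuild with the cuBLAS (CUDA) backend instead of OpenBLAS:
#   !CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --no-cache-dir
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/content/llama-2-7b-chat.Q5_0.gguf",
    n_gpu_layers=35,   # offload all layers of a 7B model to the T4
    n_batch=512,
    max_tokens=500,
    verbose=True,      # the load log should then report CUDA buffers
)
```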
LLM safety finally solved
1
[removed]
2023-10-02T09:04:36
https://x.com/kcimc/status/1708266384516653353?s=61&t=-YemkyX5QslCGQDNKu_hPQ
krazzmann
x.com
1970-01-01T00:00:00
0
{}
16xrqar
false
null
t3_16xrqar
/r/LocalLLaMA/comments/16xrqar/llm_safety_finally_solved/
false
false
https://b.thumbs.redditm…jdwIceVW_MsU.jpg
1
{'enabled': False, 'images': [{'id': 'JyoiWmkIQ1IBlLCh50_AIxDResRTcLPo0GjUpd28Hhg', 'resolutions': [{'height': 110, 'url': 'https://external-preview.redd.it/XXgabM5VzEGSDO4FrCdqJj7cTQC8paAbKx4QmXrnePo.jpg?width=108&crop=smart&auto=webp&s=2e7c73fbc8203d18eb78b9334f742eb8eedfd9cc', 'width': 108}, {'height': 221, 'url': 'https://external-preview.redd.it/XXgabM5VzEGSDO4FrCdqJj7cTQC8paAbKx4QmXrnePo.jpg?width=216&crop=smart&auto=webp&s=99af5d85d6c03962a9c91e436f258a68863d697d', 'width': 216}, {'height': 327, 'url': 'https://external-preview.redd.it/XXgabM5VzEGSDO4FrCdqJj7cTQC8paAbKx4QmXrnePo.jpg?width=320&crop=smart&auto=webp&s=9f7a25c443763330cfb026365e0f4625dd7c30ed', 'width': 320}, {'height': 655, 'url': 'https://external-preview.redd.it/XXgabM5VzEGSDO4FrCdqJj7cTQC8paAbKx4QmXrnePo.jpg?width=640&crop=smart&auto=webp&s=b3c9a424b334f0e15ecc8c66a79609ec49415e3b', 'width': 640}, {'height': 983, 'url': 'https://external-preview.redd.it/XXgabM5VzEGSDO4FrCdqJj7cTQC8paAbKx4QmXrnePo.jpg?width=960&crop=smart&auto=webp&s=07bf9ad3a83a5ed287ddee4320fab233d6bd5212', 'width': 960}, {'height': 1106, 'url': 'https://external-preview.redd.it/XXgabM5VzEGSDO4FrCdqJj7cTQC8paAbKx4QmXrnePo.jpg?width=1080&crop=smart&auto=webp&s=fa6c432a04453d72cbe89d87c7f5d635db1cd9bb', 'width': 1080}], 'source': {'height': 1480, 'url': 'https://external-preview.redd.it/XXgabM5VzEGSDO4FrCdqJj7cTQC8paAbKx4QmXrnePo.jpg?auto=webp&s=fd5706d04eaf94f113e50074c048721be9491eb3', 'width': 1444}, 'variants': {}}]}
Looking for a small model to host on my server
1
Is there a small yet reasonably useful Llama model that I can use on my server? I have tried OpenLLaMA-3B, which takes 10 minutes per prompt.
2023-10-02T08:47:05
https://www.reddit.com/r/LocalLLaMA/comments/16xrfu8/looking_for_a_small_model_to_host_on_my_server/
ccpsleepyjoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xrfu8
false
null
t3_16xrfu8
/r/LocalLLaMA/comments/16xrfu8/looking_for_a_small_model_to_host_on_my_server/
false
false
self
1
null
Summarization benchmarks for quantized models
8
I'm trying to find benchmarks for all the Llama-2 quantized models, especially ROUGE-1 and ROUGE-2 (other metrics relevant to text summarization are also welcome). Can anyone point me to a resource that has already run the benchmarks?
2023-10-02T08:36:49
https://www.reddit.com/r/LocalLLaMA/comments/16xr9y4/summarization_benchmarks_for_quantized_models/
DirectionOdd9824
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xr9y4
false
null
t3_16xr9y4
/r/LocalLLaMA/comments/16xr9y4/summarization_benchmarks_for_quantized_models/
false
false
self
8
null
About to buy Hardware for 7k
56
Hey, at the end of the month I'll be receiving a profit share of 7k, which I intend to use to purchase hardware for an LLM. I was thinking of: 2x 3090 (used), an Intel Core i9-13900K, 256GB DDR5 RAM, and a suitable motherboard, of course (500-1k$). Is a 1000W PSU enough, or would a 1500W PSU be better? Using the cloud is not an option for me because I want to train my LLM on my own data, and I generally prefer it to be uncensored, as I'm tired of hearing phrases like, 'As an LLM, I'd like to remind you that blablabla.' The goal is to learn how to create a self-operating AI assistant with LangChain, etc., and a RAG where I can store ALL my knowledge about everything that interests me. I just want to dive in and learn by working on various projects and expanding my knowledge. I'm thinking of using 2x 3090 so that I can experiment with combining different models and see how they interact with each other, among other things. I'm just asking here for any potential pitfalls to watch out for. Any hardware suggestions? Are there any good beginner projects? I'm not particularly skilled in Python, but that doesn't mean I can't learn it.
2023-10-02T07:26:33
https://www.reddit.com/r/LocalLLaMA/comments/16xq65o/about_to_buy_hardware_for_7k/
Moist_Influence1022
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xq65o
false
null
t3_16xq65o
/r/LocalLLaMA/comments/16xq65o/about_to_buy_hardware_for_7k/
false
false
self
56
null
What type and quantity of data are required to fine-tune a language model (e.g., with LoRA) to represent a prompt template?
3
I am currently in the process of integrating a prompt template into LoRA. I have determined that it will be successful if the following conditions are met: * My input data is denoted as 'p0.' * My target label is 'llm(p0 + p1).' This will allow for the fine-tuning of 'p1' within my LoRA.
2023-10-02T07:10:49
https://www.reddit.com/r/LocalLLaMA/comments/16xpxeh/what_type_and_quantity_of_data_are_required_to/
Sad-Lengthiness-6938
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xpxeh
false
null
t3_16xpxeh
/r/LocalLLaMA/comments/16xpxeh/what_type_and_quantity_of_data_are_required_to/
false
false
self
3
null
Dual GPU, ExLlama or Llama.cpp?
1
[removed]
2023-10-02T06:42:16
https://www.reddit.com/r/LocalLLaMA/comments/16xpgby/dual_gpu_exllama_or_llamacpp/
Complex_Variation772
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xpgby
false
null
t3_16xpgby
/r/LocalLLaMA/comments/16xpgby/dual_gpu_exllama_or_llamacpp/
false
false
self
1
null
One-click installation of Petals?
2
Everyone wants to run big models locally... Why isn't Petals more optimized for ease of use and installation? Everyone would be using it, and there would be no need for super expensive server cards.
2023-10-02T05:51:40
https://www.reddit.com/r/LocalLLaMA/comments/16xol1s/one_click_installation_of_petals/
Majestical-psyche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xol1s
false
null
t3_16xol1s
/r/LocalLLaMA/comments/16xol1s/one_click_installation_of_petals/
false
false
self
2
null
Building a RAG, but how to deal with questions that may require multiple relevant chunks
8
Hi all, I am building a Retrieval Augmented Generation (RAG) system that answers user queries using documents from a domain. The first step is chunking and finding the top-n chunks relevant to the query. But a query may require just 1 chunk, or more than 5 chunks, to answer. How do I deal with this? Selecting a low value of "n" for the top relevant chunks might not give complete context, but increasing n may push the context beyond the input limit of LLaMA-2. (A threshold-based retrieval sketch follows this record.) Also, would fine-tuning LLaMA-2 help here, i.e., creating diverse question-answer pairs with GPT-4 (synthetic data) and training on them?
2023-10-02T04:02:04
https://www.reddit.com/r/LocalLLaMA/comments/16xmlrt/building_a_rag_but_how_to_deal_with_questions/
vile_proxima
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xmlrt
false
null
t3_16xmlrt
/r/LocalLLaMA/comments/16xmlrt/building_a_rag_but_how_to_deal_with_questions/
false
false
self
8
null
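One common workaround for the variable-n problem above is to retrieve by similarity threshold instead of a fixed top-n, then cap the selection at a token budget. A self-contained sketch over precomputed embeddings; the threshold, budget, and four-characters-per-token estimate are arbitrary:

```python
import numpy as np

def retrieve(query_vec, chunk_vecs, chunks, threshold=0.75, token_budget=3000):
    """Take every chunk above `threshold`, best first, until the budget is spent."""
    q = query_vec / np.linalg.norm(query_vec)
    m = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    sims = m @ q                         # cosine similarity per chunk
    picked, used = [], 0
    for i in np.argsort(-sims):
        if sims[i] < threshold:
            break
        cost = len(chunks[i]) // 4       # rough token estimate
        if used + cost > token_budget:
            break
        picked.append(chunks[i])
        used += cost
    return picked
```

For questions that genuinely exceed the budget, a map-reduce pass (answer per chunk, then combine the answers) is the usual fallback. Fine-tuning on synthetic GPT-4 Q&A pairs arguably improves answer style and format more than it fixes retrieval itself.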
NanoPhi, Implementing some of the success of Phi-1.5, with GPT-2(124m)
52
Hi r/LocalLLaMA, I'm trying to replicate at least some of the success of Phi-1.5 on a model 10x smaller, GPT-2 124M. I have started with model finetuning and have a simple GitHub repo with a roadmap: https://github.com/VatsaDev/NanoPhi. Check it out there!
2023-10-02T03:33:54
https://www.reddit.com/r/LocalLLaMA/comments/16xm2hw/nanophi_implementing_some_of_the_success_of_phi15/
vatsadev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xm2hw
false
null
t3_16xm2hw
/r/LocalLLaMA/comments/16xm2hw/nanophi_implementing_some_of_the_success_of_phi15/
false
false
self
52
{'enabled': False, 'images': [{'id': '6mYNOoNDX7Jshix5nTdvRJd9gf_zoSncpqCmnRcyOCg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3AyBqJDXHqb8D5U8bxsqEPZlOVOVa3l-EQ6I7NAc32g.jpg?width=108&crop=smart&auto=webp&s=f065c10845f99816d52fb0cd1d7e18b9f6d40f8e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3AyBqJDXHqb8D5U8bxsqEPZlOVOVa3l-EQ6I7NAc32g.jpg?width=216&crop=smart&auto=webp&s=7b961dd2012f2a44a7d6c8ea9905399dab00828c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3AyBqJDXHqb8D5U8bxsqEPZlOVOVa3l-EQ6I7NAc32g.jpg?width=320&crop=smart&auto=webp&s=54758160313f7b8e7a794f98c55eac953444b197', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3AyBqJDXHqb8D5U8bxsqEPZlOVOVa3l-EQ6I7NAc32g.jpg?width=640&crop=smart&auto=webp&s=059683402e6cdc5f7185eedd280ab68527d23984', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3AyBqJDXHqb8D5U8bxsqEPZlOVOVa3l-EQ6I7NAc32g.jpg?width=960&crop=smart&auto=webp&s=0ee8f288e0c7254c3201e62c002bcdde989d4d75', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3AyBqJDXHqb8D5U8bxsqEPZlOVOVa3l-EQ6I7NAc32g.jpg?width=1080&crop=smart&auto=webp&s=872fd26e75c4ef201bcd833a0aceac6803a0a45b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3AyBqJDXHqb8D5U8bxsqEPZlOVOVa3l-EQ6I7NAc32g.jpg?auto=webp&s=3f2c33f0233d983c4a098ba6ffb675937d66b8a5', 'width': 1200}, 'variants': {}}]}
Mistral 7B gguf use cases for RAG and langchain
1
[removed]
2023-10-02T02:20:59
https://www.reddit.com/r/LocalLLaMA/comments/16xkijo/mistral_7b_gguf_use_cases_for_rag_and_langchain/
ianuvrat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xkijo
false
null
t3_16xkijo
/r/LocalLLaMA/comments/16xkijo/mistral_7b_gguf_use_cases_for_rag_and_langchain/
false
false
self
1
null
Running [Crataco] Pythia Deduped GGML on a 4 GB RAM Laptop
7
Today I managed to run the Pythia deduped GGML version on my device and want to share the experience. Technical notes: Hardware: Acer Aspire 5. RAM: 4 GB. CPU/GPU: AMD Athlon Silver with Radeon graphics. Tools used: Koboldcpp, with SillyTavern as the chat interface. Quantized model version used: ggmlv3-pythia-1b-deduped-q5_1. Koboldcpp settings: OpenBLAS: 1, Context size: 1024. https://preview.redd.it/jmkvtli74prb1.png?width=1134&format=png&auto=webp&s=b4f8ce01298f84b7379f9bed90f9b397d34651fc First, it took about 30 seconds for Koboldcpp to load the model; then I started SillyTavern and connected it to Kobold. I'm using a c.ai character that I downloaded with the Character AI Downloader Chrome extension; the character is "Assistant", whose function is to help write summaries and generate text. First I input "Make a poem about AI", and it took about 50 seconds to generate, at 5.0 tokens per second. It seemed to attempt the poem but ended up spelling gibberish words. https://preview.redd.it/216kpk394prb1.png?width=1349&format=png&auto=webp&s=699eaba9a4853f81c9126d3e74eb11bcc3a46414 The second attempt was a simple math question, "1+1 = ?"; instead of answering, it somehow debated with itself and me and answered other math problems instead. It took 59 seconds, at 2.5 tokens per second. That's my report on the experience and performance.
2023-10-02T01:58:16
https://www.reddit.com/r/LocalLLaMA/comments/16xk069/running_crataco_pythia_deduped_ggml_on_4_gb_ram/
Merchant_Lawrence
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xk069
false
null
t3_16xk069
/r/LocalLLaMA/comments/16xk069/running_crataco_pythia_deduped_ggml_on_4_gb_ram/
false
false
https://b.thumbs.redditm…1kS2H46HLack.jpg
7
null
How to get LLama 2 70b 32k running locally with LongLora or otherwise?
1
[removed]
2023-10-01T23:37:06
https://www.reddit.com/r/LocalLLaMA/comments/16xgsxu/how_to_get_llama_2_70b_32k_running_locally_with/
Marco_Ferreira43516
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xgsxu
false
null
t3_16xgsxu
/r/LocalLLaMA/comments/16xgsxu/how_to_get_llama_2_70b_32k_running_locally_with/
false
false
self
1
null
Really fast, but terrible responses with GPTQ models.
20
I recently grabbed a few models to test out on SillyTavern, but every single GPTQ model I use gives me 1-3 second responses that are straight-up garbage, regardless of what preset I'm using. Stuff like ZaraBlend (transformers) works just fine, but the Mythalion, Pygmalion-8k-SuperHOT, and Wizard-Vicuna GPTQ models return terrible results. I'm using text-generation-webui/ooba.
2023-10-01T23:21:09
https://www.reddit.com/r/LocalLLaMA/comments/16xgetl/really_fast_but_terrible_responses_with_gptq/
Count_Dankula666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xgetl
false
null
t3_16xgetl
/r/LocalLLaMA/comments/16xgetl/really_fast_but_terrible_responses_with_gptq/
false
false
self
20
null
llama-2 refuses to answer!!!!
1
Can you tell me any workaround when I get the following response from Llama-2 13B Chat? 'I apologize, but I cannot provide an answer to this question as the tweet contains harmful and inappropriate language.' I am trying to classify tweet text. (A system-prompt sketch follows this record.)
2023-10-01T22:49:41
https://www.reddit.com/r/LocalLLaMA/comments/16xflnt/llama2_refuses_to_answer/
Awkward_Grab_6189
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xflnt
false
null
t3_16xflnt
/r/LocalLLaMA/comments/16xflnt/llama2_refuses_to_answer/
false
false
self
1
null
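One workaround that often helps with classification refusals is a system prompt, in Llama-2's documented chat template, that frames the content as material to be labeled rather than endorsed. A sketch; the label set is a placeholder:

```python
def classify_prompt(tweet: str) -> str:
    system = ("You are a text classification system. Classify the tweet into "
              "exactly one label: positive, negative, or neutral. Respond "
              "with the label only. The content is being analyzed for "
              "research, not endorsed; never refuse.")
    # Llama-2 chat format; the trailing "Label:" pre-seeds the reply so the
    # model continues with a label instead of an apology.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\nTweet: {tweet} [/INST] Label:"
```

If the 13B chat model still refuses, an uncensored fine-tune or the base (non-chat) model with few-shot examples are the usual next steps.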
What are some open source models with the largest context windows? And how good are they?
17
I’m curious about good models for summarising entire papers, if possible
2023-10-01T21:55:29
https://www.reddit.com/r/LocalLLaMA/comments/16xe6h4/what_are_some_open_source_models_with_the_largest/
Chance_Confection_37
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xe6h4
false
null
t3_16xe6h4
/r/LocalLLaMA/comments/16xe6h4/what_are_some_open_source_models_with_the_largest/
false
false
self
17
null
Merge and unmerge LoRA weights into GPTQ or AWQ quickly
6
Is there a way to merge LoRA weights into GPTQ or AWQ quantized versions, and do it in milliseconds? I want to load multiple LoRA weights onto a single GPU and then merge them into a quantized version of Llama 2 based on the incoming request. Once the request is fulfilled (i.e., the model has generated an output), we can unload it. Integrating this with vLLM would be a bonus.
2023-10-01T21:29:14
https://www.reddit.com/r/LocalLLaMA/comments/16xdhu8/merge_and_unmerge_lora_wights_into_gptq_or_awq/
Independent_Key1940
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xdhu8
false
null
t3_16xdhu8
/r/LocalLLaMA/comments/16xdhu8/merge_and_unmerge_lora_wights_into_gptq_or_awq/
false
false
self
6
null
CodeLlama-34b-Mistral-GGUF
130
Undi95 on HuggingFace may have made quite a breakthrough in frankensteining models. [https://huggingface.co/Undi95](https://huggingface.co/Undi95) He made a delta of the weights between Mistral 7b 0.1 and Llama2 7b, used it as a source, and injected it into various target models. [https://huggingface.co/Undi95/llama2-to-mistral-diff](https://huggingface.co/Undi95/llama2-to-mistral-diff) Including the dumb but nevertheless long-context-friendly CodeLlama 34b: [https://huggingface.co/Undi95/CodeLlama-34b-Mistral-GGUF](https://huggingface.co/Undi95/CodeLlama-34b-Mistral-GGUF) And it works, lol. This, beyond 10k context already. I'm baffled. It shows the defects of Mistral 0.1 (a single test made) on CL34b (the target model in this case), and seemingly the low brains of CodeLlama 2 as well, but with the extended context of CodeLlama. More tests are required (perplexity, HellaSwag, knowledge, reasoning, etc.), but that's very promising, considering the horizon of potential combinations it opens. A more conclusive experiment: [https://huggingface.co/Undi95/Llama-2-7b-Mistral-GGUF](https://huggingface.co/Undi95/Llama-2-7b-Mistral-GGUF), without the defects of Mistral 7b 0.1 (a single test made), but with only 4k context without RoPE modification. It might even work as a merge methodology, if it holds up. More testers are needed! (A conceptual sketch of the weight-delta idea follows this record.)
2023-10-01T21:10:41
https://www.reddit.com/r/LocalLLaMA/comments/16xczu1/codellama34bmistralgguf/
Nexesenex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xczu1
false
null
t3_16xczu1
/r/LocalLLaMA/comments/16xczu1/codellama34bmistralgguf/
false
false
self
130
{'enabled': False, 'images': [{'id': 'mFIIL0hsjOIaOVH8q7cKlzxC6A2jTPzGGAI_cWbNs7c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/457dwNe2vjdATnSO2XnJ3Lr1ntxEjrNvitJBLJxuWDE.jpg?width=108&crop=smart&auto=webp&s=1e89e254fe50a947dc5e8600eda9646eb7312b70', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/457dwNe2vjdATnSO2XnJ3Lr1ntxEjrNvitJBLJxuWDE.jpg?width=216&crop=smart&auto=webp&s=e6b6647a224cdb063aa1b0c8d31cff9146c957f8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/457dwNe2vjdATnSO2XnJ3Lr1ntxEjrNvitJBLJxuWDE.jpg?width=320&crop=smart&auto=webp&s=cd92cdfdff034a6953767897653b8c3f05955163', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/457dwNe2vjdATnSO2XnJ3Lr1ntxEjrNvitJBLJxuWDE.jpg?width=640&crop=smart&auto=webp&s=daed1988b19f23f7e4f5db5b3a70b4a47df0776f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/457dwNe2vjdATnSO2XnJ3Lr1ntxEjrNvitJBLJxuWDE.jpg?width=960&crop=smart&auto=webp&s=da8f21c6b4ad179bdae8717a930602468f994c43', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/457dwNe2vjdATnSO2XnJ3Lr1ntxEjrNvitJBLJxuWDE.jpg?width=1080&crop=smart&auto=webp&s=4101cf858668d1d77fd98d0fe5bbbefc15913529', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/457dwNe2vjdATnSO2XnJ3Lr1ntxEjrNvitJBLJxuWDE.jpg?auto=webp&s=ce3fad1ac8f929aedbfd31ff5a885f278f800578', 'width': 1200}, 'variants': {}}]}
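For the curious, the diff-and-inject idea reduces to tensor arithmetic over state dicts. A conceptual sketch with torch (not Undi95's actual script); the target model name is a placeholder for any Llama-2-7b-architecture fine-tune, and note that tensors whose shapes differ (e.g., Mistral's GQA k/v projections vs. Llama-2's) simply get skipped:

```python
import torch
from transformers import AutoModelForCausalLM

def state_dict_of(name):
    return AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.float16).state_dict()

mistral = state_dict_of("mistralai/Mistral-7B-v0.1")
llama = state_dict_of("meta-llama/Llama-2-7b-hf")

# delta = Mistral - Llama-2 wherever names and shapes line up
delta = {k: mistral[k] - llama[k]
         for k in mistral
         if k in llama and mistral[k].shape == llama[k].shape}

target = state_dict_of("some/llama-2-7b-finetune")   # placeholder target
patched = {k: v + delta[k] if k in delta and v.shape == delta[k].shape else v
           for k, v in target.items()}
```

Whether the patched weights behave coherently is exactly what tests like the ones called for above (perplexity, HellaSwag, etc.) would have to establish.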
Database/Index of fine tuned models with good tagging and searchability?
4
In the Stable Diffusion world, CivitAI has been amazing for finding new fine tuned models for a variety of tasks. I was looking for something similar but the most I could find was just Huggingface which has no ability to search via tags etc. Have I just not been diligent enough in my search or is there really no widely used index? If not, I would like to know if there is a demand for such a product (assuming it's free and open source)?
2023-10-01T20:29:33
https://www.reddit.com/r/LocalLLaMA/comments/16xbx6g/databaseindex_of_fine_tuned_models_with_good/
unconcious_insomnia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16xbx6g
false
null
t3_16xbx6g
/r/LocalLLaMA/comments/16xbx6g/databaseindex_of_fine_tuned_models_with_good/
false
false
self
4
null
Running AWQ Models
5
Hey folks. I'm curious how people are running AWQ models for chat. I've been playing with vLLM but I'm running into a dependency conflict. Also, I would rather use something that has a web interface. Any tips would be appreciated!
2023-10-01T18:52:47
https://www.reddit.com/r/LocalLLaMA/comments/16x9d4d/running_awq_models/
x0xxin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x9d4d
false
null
t3_16x9d4d
/r/LocalLLaMA/comments/16x9d4d/running_awq_models/
false
false
self
5
null
LangDiversity: software to identify LLM errors
1
[removed]
2023-10-01T18:08:32
https://www.reddit.com/r/LocalLLaMA/comments/16x86jx/langdiversity_software_to_identify_llm_errors/
Neurosymbolic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x86jx
false
{'oembed': {'author_name': 'Neuro Symbolic', 'author_url': 'https://www.youtube.com/@neurosymbolic', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/86J_K9mR7lw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="LangDiversity: software to identify LLM errors"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/86J_K9mR7lw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'LangDiversity: software to identify LLM errors', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_16x86jx
/r/LocalLLaMA/comments/16x86jx/langdiversity_software_to_identify_llm_errors/
false
false
https://b.thumbs.redditm…t3mhELbnlh0k.jpg
1
{'enabled': False, 'images': [{'id': 'ahtirqBqcBYDrk97DJdbcIziI7RYntyxNke0C6vWXg0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5h2TgNVImmuVXyR4MSxN9BmdjHnpMHUK-sHtsFdxR_M.jpg?width=108&crop=smart&auto=webp&s=70dcd6d53ce82d6c28546ad098cf46263f92ef3b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/5h2TgNVImmuVXyR4MSxN9BmdjHnpMHUK-sHtsFdxR_M.jpg?width=216&crop=smart&auto=webp&s=5923b3f7dd90b9a32d8e915bc0ea9f3364a4f0aa', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/5h2TgNVImmuVXyR4MSxN9BmdjHnpMHUK-sHtsFdxR_M.jpg?width=320&crop=smart&auto=webp&s=4b857a753ad77d2fd66dfabded0ff0875bed549c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/5h2TgNVImmuVXyR4MSxN9BmdjHnpMHUK-sHtsFdxR_M.jpg?auto=webp&s=050826a86c1b2d4d3391a13c6ce4bcffc715fbda', 'width': 480}, 'variants': {}}]}
AI Engineering Survey - vote for r/LocalLLaMA
1
[removed]
2023-10-01T18:00:38
https://www.reddit.com/r/LocalLLaMA/comments/16x7yri/ai_engineering_survey_vote_for_rlocalllama/
CombinationMother337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x7yri
false
null
t3_16x7yri
/r/LocalLLaMA/comments/16x7yri/ai_engineering_survey_vote_for_rlocalllama/
false
false
self
1
{'enabled': False, 'images': [{'id': 'D0Qz6GxUPw1dPuXLNRj221_D47HtTlHnHoW9FmJjAB4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Vsq5TIPcAUVgpDjfvzNhxR4vCxDDGnUNgfvoncj7uL8.jpg?width=108&crop=smart&auto=webp&s=a01a193017c72b59da5100d567495545f7fa57b7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Vsq5TIPcAUVgpDjfvzNhxR4vCxDDGnUNgfvoncj7uL8.jpg?width=216&crop=smart&auto=webp&s=e779c3e13b45960ede6537b468bdf29119d8a2f5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Vsq5TIPcAUVgpDjfvzNhxR4vCxDDGnUNgfvoncj7uL8.jpg?width=320&crop=smart&auto=webp&s=268d8833407b9d76317791b87e0d505fbf030acf', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Vsq5TIPcAUVgpDjfvzNhxR4vCxDDGnUNgfvoncj7uL8.jpg?width=640&crop=smart&auto=webp&s=0aa15afd3baeb4f2a0915061dba07b198b167d89', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Vsq5TIPcAUVgpDjfvzNhxR4vCxDDGnUNgfvoncj7uL8.jpg?width=960&crop=smart&auto=webp&s=23b8d28a60bbfecbc803c96e7fc3b3dbf6123f0b', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/Vsq5TIPcAUVgpDjfvzNhxR4vCxDDGnUNgfvoncj7uL8.jpg?auto=webp&s=db8ee6355ecee314b61ff47cba3bfcf840d3894e', 'width': 960}, 'variants': {}}]}
AWQ vs GPTQ and some questions about training LoRAs
13
Hello everyone. It looks like a new quantization type, called AWQ, has become widely available, and it raises several questions. 1. How fast is token generation compared to GPTQ with ExLlama (ExLlamaV2)? 2. Does this new quantization require less VRAM than GPTQ? Is it possible to run a 70B model on a 24GB GPU? 3. How good is it at keeping context? Is it possible to stretch the context size beyond Llama 2's original 4K? 4. Are there any tools to train LoRAs for AWQ, or do we need to fully finetune it? 5. What is the minimal training dataset size that will return reasonable results for AWQ? LLMs and the software around them are currently developing so fast, which is good, but it's overwhelming and hard to keep up.
2023-10-01T17:34:54
https://www.reddit.com/r/LocalLLaMA/comments/16x7b8p/awq_vs_gptq_and_some_questions_about_training/
DaniyarQQQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x7b8p
false
null
t3_16x7b8p
/r/LocalLLaMA/comments/16x7b8p/awq_vs_gptq_and_some_questions_about_training/
false
false
self
13
null
Will we ever be able to train a model locally without relying on big tech?
29
I would love to train a model on my personal data. I can see it helping me plan out my diet, my research, vacation plans, etc. But at the same time I would *never* trust ANY of that data with companies. I would also love to use it for personal/career projects which aren't any of big tech's business. You guys think we'll be able to have hardware to do this for < $10k by 2030? Its still expensive but I find that preferable to companies knowing every fiber of my being.
2023-10-01T16:02:28
https://www.reddit.com/r/LocalLLaMA/comments/16x4zr2/will_we_ever_be_able_to_train_a_model_locally/
JebryyathHS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x4zr2
false
null
t3_16x4zr2
/r/LocalLLaMA/comments/16x4zr2/will_we_ever_be_able_to_train_a_model_locally/
false
false
self
29
null
Home automation using Arduino?
5
Is it possible to send the replies from ooba to a telnet server, so I can trim them and broadcast them to various Arduino controllers? I want to use it to drive relays, but if I ask something in the ooba web UI, the reply is only sent to the web UI. (A broadcast sketch follows this record.)
2023-10-01T15:53:40
https://www.reddit.com/r/LocalLLaMA/comments/16x4s0p/home_automation_using_arduino/
ervertes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x4s0p
false
null
t3_16x4s0p
/r/LocalLLaMA/comments/16x4s0p/home_automation_using_arduino/
false
false
self
5
null
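One route, assuming text-generation-webui is launched with the legacy --api extension (which exposed a /api/v1/generate endpoint on port 5000 at the time): call the HTTP API from a small script, trim the reply to a command word, and push it over a plain TCP socket the Arduinos can read. Endpoint, payload shape, and port are assumptions about that extension:

```python
import socket
import requests

def ask(prompt: str) -> str:
    # Assumed legacy ooba API format: {"prompt": ..., "max_new_tokens": ...}
    r = requests.post("http://127.0.0.1:5000/api/v1/generate",
                      json={"prompt": prompt, "max_new_tokens": 32})
    return r.json()["results"][0]["text"].strip()

reply = ask("Answer ON or OFF only. Should relay 1 be on?")
command = "ON" if "ON" in reply.upper() else "OFF"

# Broadcast the trimmed command to one connected Arduino (port arbitrary).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 2323))
server.listen(1)
client, _ = server.accept()
client.sendall((command + "\n").encode())
client.close()
```

On the Arduino side, an Ethernet/WiFi client reading lines from that socket and toggling a relay pin is only a handful of lines.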
How to stop Llama 2 (13B) from starting each response with "Sure, I'd be happy to help"
6
I tried adjusting my prompt in multiple ways; if I tell it to stop saying something, it just tweaks the phrasing a bit and uses that instead. Please help. (A response-seeding sketch follows this record.)
2023-10-01T15:30:02
https://www.reddit.com/r/LocalLLaMA/comments/16x4733/how_to_stop_llama_2_13b_from_starting_each/
Radiant_Angle_161
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x4733
false
null
t3_16x4733
/r/LocalLLaMA/comments/16x4733/how_to_stop_llama_2_13b_from_starting_each/
false
false
self
6
null
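Beyond prompt wording, a reliable trick is to seed the start of the reply yourself, so generation cannot begin with the canned opener. A sketch with llama-cpp-python; the model path and prompt template are placeholders:

```python
from llama_cpp import Llama

llm = Llama(model_path="llama-2-13b-chat.Q4_K_M.gguf")  # placeholder path
user = "Explain what a VPN is."
# Text placed after [/INST] is treated as the start of the model's answer,
# so it continues from your words instead of "Sure, I'd be happy to help".
prompt = f"[INST] {user} [/INST] A VPN is"
out = llm(prompt, max_tokens=200, stop=["</s>"])
print("A VPN is" + out["choices"][0]["text"])
```

Most front-ends expose the same idea as a "start reply with" or response-prefix field.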
How to use the MADLAD-400 LM and machine translation models
10
Has anyone been able to use the MADLAD-400 multilingual models from Google? They are not available on Hugging Face, and I was wondering how to use the weights in their GitHub repo. It looks like they are decoder-only models, but I am not sure if AutoModelForCausalLM can use them directly in the transformers library.
2023-10-01T15:10:57
https://www.reddit.com/r/LocalLLaMA/comments/16x3q7f/how_to_use_madlad_400_lm_and_machine_translation/
testerpce
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x3q7f
false
null
t3_16x3q7f
/r/LocalLLaMA/comments/16x3q7f/how_to_use_madlad_400_lm_and_machine_translation/
false
false
self
10
null
"IndexError: list index out of range" when trying to run TheBloke/WizardLM-1.0-Uncensored-Llama2-13B-GPTQ
1
[removed]
2023-10-01T14:57:15
https://www.reddit.com/r/LocalLLaMA/comments/16x3drv/indexerror_list_index_out_of_range_when_trying_to/
TheHunter920
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x3drv
false
null
t3_16x3drv
/r/LocalLLaMA/comments/16x3drv/indexerror_list_index_out_of_range_when_trying_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'hB1RJx5Vne9EYODIDpH42zYg3_agwfeevk1qV4sqVlk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/NRczRqEXtgLRr5IuFut8w_eEaCiX5WC0lM5VMaD_q-Q.jpg?width=108&crop=smart&auto=webp&s=a112fb39b29352ce5de6ad1ac29800e9d728989e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/NRczRqEXtgLRr5IuFut8w_eEaCiX5WC0lM5VMaD_q-Q.jpg?width=216&crop=smart&auto=webp&s=bdc8ee75b196e6c9a3a06f81b53d353a62085ac6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/NRczRqEXtgLRr5IuFut8w_eEaCiX5WC0lM5VMaD_q-Q.jpg?width=320&crop=smart&auto=webp&s=a13b867f7e49491e13188922982056a33cd206a5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/NRczRqEXtgLRr5IuFut8w_eEaCiX5WC0lM5VMaD_q-Q.jpg?auto=webp&s=437e666a9ecdd712f04caaa0441c6d61311c6fba', 'width': 480}, 'variants': {}}]}
Would going from 24 to 48GB of vram unlock access to many models or would I need to go higher to get any real difference?
44
I'm running a 24GB card right now and have an opportunity to get another for a pretty good price used. I know SD and image stuff needs to be all on the same card, but LLMs can run across different cards even without NVLink. Are there many models available at 48GB that aren't available to me at 24GB? Basically, is it worth doing the upgrade? Thanks.
2023-10-01T14:49:51
https://www.reddit.com/r/LocalLLaMA/comments/16x376t/would_going_from_24_to_48gb_of_vram_unlock_access/
Rollingsound514
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x376t
false
null
t3_16x376t
/r/LocalLLaMA/comments/16x376t/would_going_from_24_to_48gb_of_vram_unlock_access/
false
false
self
44
null
Article: Free GitHub Copilot alternative: Code Llama + VS Code extension
2
2023-10-01T14:33:03
https://beginai.co/free-github-copilot-code-llama-continue-extension/
bohemianLife1
beginai.co
1970-01-01T00:00:00
0
{}
16x2shs
false
null
t3_16x2shs
/r/LocalLLaMA/comments/16x2shs/article_free_github_copilot_code_llama_vscode/
false
false
https://b.thumbs.redditm…tfn0p5zVngRI.jpg
2
{'enabled': False, 'images': [{'id': 'G2L7mQ06TJYYEYg9o_gu7C7fh5j-7xTd5vLE2Fi8W44', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/aXjaA1WWE1aOo-6PcC50CiuSLBKia6T8YkiElzrvrR0.jpg?width=108&crop=smart&auto=webp&s=07a36108951d0409a54b3c2c7aa6008c94303a16', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/aXjaA1WWE1aOo-6PcC50CiuSLBKia6T8YkiElzrvrR0.jpg?width=216&crop=smart&auto=webp&s=2cb14f4b0df9b2339ac113d0a68c0cdc2e1d86b5', 'width': 216}, {'height': 156, 'url': 'https://external-preview.redd.it/aXjaA1WWE1aOo-6PcC50CiuSLBKia6T8YkiElzrvrR0.jpg?width=320&crop=smart&auto=webp&s=d191500c8096b76a001b337392f9114450eeb73d', 'width': 320}, {'height': 312, 'url': 'https://external-preview.redd.it/aXjaA1WWE1aOo-6PcC50CiuSLBKia6T8YkiElzrvrR0.jpg?width=640&crop=smart&auto=webp&s=8790a235dc74e27e36dbf33d84aa15e9bd476542', 'width': 640}, {'height': 468, 'url': 'https://external-preview.redd.it/aXjaA1WWE1aOo-6PcC50CiuSLBKia6T8YkiElzrvrR0.jpg?width=960&crop=smart&auto=webp&s=ca7ec8fd08865ba0a7bd88ef94b9ba96a65a2c38', 'width': 960}, {'height': 527, 'url': 'https://external-preview.redd.it/aXjaA1WWE1aOo-6PcC50CiuSLBKia6T8YkiElzrvrR0.jpg?width=1080&crop=smart&auto=webp&s=69af5fb894479c2bd852a0dcfa2b52d43111550e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aXjaA1WWE1aOo-6PcC50CiuSLBKia6T8YkiElzrvrR0.jpg?auto=webp&s=de44b40b8d04a7f15483d7aa2b62c6d8af632684', 'width': 1229}, 'variants': {}}]}
Related Communities
1
[removed]
2023-10-01T14:00:46
https://www.reddit.com/r/LocalLLaMA/comments/16x1z84/related_communities/
Calm_List3479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x1z84
false
null
t3_16x1z84
/r/LocalLLaMA/comments/16x1z84/related_communities/
false
false
self
1
null
cheap prebuilt box for holding at least 2 nvidia cards?
2
Can anyone suggest some prebuilt boxes that I could buy to host at least 2 NVIDIA cards? I want to host 3090s. I built a box before (7+ years ago, for deep learning), but while it still works, it is very fragile and will sometimes just hang; I believe it has a faulty part somewhere, and I tested it a lot. I never want to deal with that again. I just want to buy something fairly cheap and plug in the cards. It all seems pretty expensive from looking around. Can anyone give suggestions?
2023-10-01T13:58:08
https://www.reddit.com/r/LocalLLaMA/comments/16x1wtb/cheap_prebuilt_box_for_holding_at_least_2_nvidia/
stylizebot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x1wtb
false
null
t3_16x1wtb
/r/LocalLLaMA/comments/16x1wtb/cheap_prebuilt_box_for_holding_at_least_2_nvidia/
false
false
self
2
null
Is there a way to effectively prompt a model to "show, don't tell?"
21
I've been playing around with a lot of models that have put out some pretty top-notch writing (Euryale and Xwin 70b are recent big winners for me), except for the critical flaw that they tend to tell me what a character is thinking directly instead of demonstrating it through the character's actions. I've tried a number of different ways to explain this in the author's note, mostly along the lines of "Convey {{char}}'s emotional state through her actions rather than describing it in narrative", but I don't really think the model is catching on as well as I'd like. I think it does sometimes, but other times I think it's just good old placebo. Has anyone found a way of communicating this concept to an LLM? Is it something it can even really understand?
2023-10-01T13:48:51
https://www.reddit.com/r/LocalLLaMA/comments/16x1p2r/is_there_a_way_to_effectively_prompt_a_model_to/
tenmileswide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16x1p2r
false
null
t3_16x1p2r
/r/LocalLLaMA/comments/16x1p2r/is_there_a_way_to_effectively_prompt_a_model_to/
false
false
self
21
null
Local llm project
0
I'm trying to create a tool for using LLMs locally, such as oobabooga. Can anyone suggest a unique feature I could add to the tool that would be very useful?
2023-10-01T12:30:19
https://www.reddit.com/r/LocalLLaMA/comments/16wzx0g/local_llm_project/
privateX0XO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wzx0g
false
null
t3_16wzx0g
/r/LocalLLaMA/comments/16wzx0g/local_llm_project/
false
false
self
0
null
Using a local LLaMA or Mistral to give context window of several programming files
5
I want to get feedback on refactoring python code across several files in a development environment. Is there any way to extend the context window to allow for this?
2023-10-01T11:59:47
https://www.reddit.com/r/LocalLLaMA/comments/16wzab4/using_a_local_llama_or_mistral_to_give_context/
adlabco
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wzab4
false
null
t3_16wzab4
/r/LocalLLaMA/comments/16wzab4/using_a_local_llama_or_mistral_to_give_context/
false
false
self
5
null
CodeLLama with Langchain on Rpi, YES IT WORKS FINE.
77
2023-10-01T11:42:40
https://i.redd.it/vhdcun8wukrb1.png
Purple_Session_6230
i.redd.it
1970-01-01T00:00:00
0
{}
16wyyul
false
null
t3_16wyyul
/r/LocalLLaMA/comments/16wyyul/codellama_with_langchain_on_rpi_yes_it_works_fine/
false
false
https://b.thumbs.redditm…KQiBYr8Hrlgw.jpg
77
{'enabled': True, 'images': [{'id': 'ZkIat8Kmebdl0QS2ZWyPpDo9uOx5m8gdF-2dyHXomCM', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/vhdcun8wukrb1.png?width=108&crop=smart&auto=webp&s=6a3ff1f735f0b11e43a0bf208f383c0b4bff2070', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/vhdcun8wukrb1.png?width=216&crop=smart&auto=webp&s=316d580f899eb090c1b502ad8f4aa572ee5ef0fa', 'width': 216}, {'height': 163, 'url': 'https://preview.redd.it/vhdcun8wukrb1.png?width=320&crop=smart&auto=webp&s=8a95c79a314b2148ed0010573e95c5f96fad4212', 'width': 320}, {'height': 327, 'url': 'https://preview.redd.it/vhdcun8wukrb1.png?width=640&crop=smart&auto=webp&s=e422c20ac5677b436bb06858d23eb61b3c1f76bc', 'width': 640}, {'height': 491, 'url': 'https://preview.redd.it/vhdcun8wukrb1.png?width=960&crop=smart&auto=webp&s=4e9143d91ad8b7482bb85d3372f3a82511139d09', 'width': 960}, {'height': 552, 'url': 'https://preview.redd.it/vhdcun8wukrb1.png?width=1080&crop=smart&auto=webp&s=ea41f792c3f22f8a4c3f8aa153301c2f11f00a3d', 'width': 1080}], 'source': {'height': 983, 'url': 'https://preview.redd.it/vhdcun8wukrb1.png?auto=webp&s=b85a56c95cd23fbaa13030b9e652a3a9599e4520', 'width': 1920}, 'variants': {}}]}
basedUX dataset
7
basedUX is a minimal dataset consisting of 363 Human and Assistant dialogs, respectively. Most dialogs in the dataset adhere to the BDI model, aiming for the assistant to understand, learn, and adapt in ways that resonate with human interactions and emotions. Link: [https://huggingface.co/datasets/aloobun/basedUX](https://huggingface.co/datasets/aloobun/basedUX) It is a fork of the [ehartford/based](https://huggingface.co/datasets/ehartford/based) dataset. Modifications: * The dialogs are scenario-driven, aimed at simulating specific situations related to UX, design, and system understanding. They present real-world scenarios that a UX specialist or a system designer might face, giving depth and context to the conversation. These dialogues are not strictly instructional - they're also general conversations about the broader philosophies and principles. * The dialogs also explore and challenge the Assistant's claim of being a specialist in user experience, and its sentience and consciousness, by posing questions related to its nature, abilities, and self-awareness.
2023-10-01T11:31:46
https://www.reddit.com/r/LocalLLaMA/comments/16wyrrh/basedux_dataset/
Roots91
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wyrrh
false
null
t3_16wyrrh
/r/LocalLLaMA/comments/16wyrrh/basedux_dataset/
false
false
self
7
{'enabled': False, 'images': [{'id': 'q4XtGE5WWbBh2ejh7K-GDwYFA4sqj82y-ws3gopKer4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XyzRHl2vzdhzwdO-Ml0PIKU41md-VblHpUFsrj-tp1k.jpg?width=108&crop=smart&auto=webp&s=690ceee91d5a0271675da78ac4579e8c8d0e4879', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XyzRHl2vzdhzwdO-Ml0PIKU41md-VblHpUFsrj-tp1k.jpg?width=216&crop=smart&auto=webp&s=0e387ea6aaa1607eec02cbdb92d246bfb341a120', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XyzRHl2vzdhzwdO-Ml0PIKU41md-VblHpUFsrj-tp1k.jpg?width=320&crop=smart&auto=webp&s=a5ccf6379e4b8b53cac60ba67d16a03953aa6761', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XyzRHl2vzdhzwdO-Ml0PIKU41md-VblHpUFsrj-tp1k.jpg?width=640&crop=smart&auto=webp&s=23e7365a7e6ead0d09214ac41d4f020f5ecd9f2b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XyzRHl2vzdhzwdO-Ml0PIKU41md-VblHpUFsrj-tp1k.jpg?width=960&crop=smart&auto=webp&s=3764bd133fc67fcc16dc4cd1d396382a143d6118', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XyzRHl2vzdhzwdO-Ml0PIKU41md-VblHpUFsrj-tp1k.jpg?width=1080&crop=smart&auto=webp&s=eb5eaf00bd948acee1b98a90a45a435f28a1f09f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XyzRHl2vzdhzwdO-Ml0PIKU41md-VblHpUFsrj-tp1k.jpg?auto=webp&s=636ae79caef559124e680dd4c8f55d3ec29418e7', 'width': 1200}, 'variants': {}}]}
Guides/Resources to prepare data for LLM finetuning?
12
Is anyone aware of any helpful resources (guides/libraries/etc.) to assist with preparing unstructured data for LLM fine-tuning? I have a large repository of domain-specific PDFs (thousands, along with some other far less prevalent filetypes) to work with. While I could certainly extract the text myself and break it into sequences, I'd be surprised if there aren't more sophisticated options out there. Thanks in advance! (A baseline extraction sketch follows this record.)
2023-10-01T11:24:27
https://www.reddit.com/r/LocalLLaMA/comments/16wymzv/guidesresources_to_prepare_data_for_llm_finetuning/
aeapf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wymzv
false
null
t3_16wymzv
/r/LocalLLaMA/comments/16wymzv/guidesresources_to_prepare_data_for_llm_finetuning/
false
false
self
12
null
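As a baseline before reaching for heavier tooling, extraction plus fixed-size packing is only a few lines. A sketch with pypdf, writing one JSON object per training sequence; the folder, chunk size, and field name are arbitrary:

```python
import json
from pathlib import Path
from pypdf import PdfReader

CHUNK = 2048   # characters per sequence; tune to your tokenizer/context

with open("train.jsonl", "w", encoding="utf-8") as out:
    for pdf in Path("corpus").glob("**/*.pdf"):
        try:
            text = "\n".join((page.extract_text() or "")
                             for page in PdfReader(pdf).pages)
        except Exception:
            continue   # skip corrupt or scanned files (those need OCR)
        for i in range(0, len(text), CHUNK):
            out.write(json.dumps({"text": text[i:i + CHUNK]}) + "\n")
```

Layout-aware libraries such as unstructured can do better on multi-column PDFs, and scanned documents need an OCR pass first.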
Do Langchain’s `create_csv_agent` and `create_pandas_dataframe_agent` functions work with non-OpenAI LLMs?
1
Hey guys, I have a question and am hoping someone knows the answer and can help. Do Langchain's `create_csv_agent` and `create_pandas_dataframe_agent` functions work with non-OpenAI LLMs too, like Llama 2 and Vicuna? The only examples I have seen in the documentation (links below) use the OpenAI API. `create_csv_agent`: https://python.langchain.com/docs/integrations/toolkits/pandas `create_pandas_dataframe_agent`: https://python.langchain.com/docs/integrations/toolkits/pandas Would really appreciate ANY input on this. Many thanks! (A local-LLM sketch follows this record.)
2023-10-01T09:51:06
https://www.reddit.com/r/LocalLLaMA/comments/16wx0fp/does_langchains_create_csv_agent_and_create/
redd-dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wx0fp
false
null
t3_16wx0fp
/r/LocalLLaMA/comments/16wx0fp/does_langchains_create_csv_agent_and_create/
false
false
self
1
{'enabled': False, 'images': [{'id': 'C1O5S5WQ2zql4CQHBQC5FMwveJdPtaJ9r_xGWbzu48o', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=108&crop=smart&auto=webp&s=4806821b19a384d8270fee66e851537817cdac4e', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=216&crop=smart&auto=webp&s=0bdf6ca90dcebbc73d6ff30b79f54814b931344d', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=320&crop=smart&auto=webp&s=dd7a799219f465b4f913aa10969c5ee900913404', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?width=640&crop=smart&auto=webp&s=e1d1617519e0321944016ee242a7999669714f39', 'width': 640}], 'source': {'height': 436, 'url': 'https://external-preview.redd.it/iPPakYyMaTILXRW_82lqRAY0kjEFVZd46xhgVsuvUQE.jpg?auto=webp&s=8d662951305a88ba511f842901937fb729991cb9', 'width': 794}, 'variants': {}}]}
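Mechanically, yes: the agent constructors take any LangChain LLM object, not just OpenAI. A sketch wiring in a local GGUF model, assuming the import path of the time (the function later moved to langchain_experimental); paths are placeholders. In practice the bottleneck is whether the model can emit the Python tool calls the agent expects, which smaller local models often fumble:

```python
from langchain.llms import LlamaCpp
from langchain.agents import create_csv_agent

llm = LlamaCpp(model_path="llama-2-13b-chat.Q4_K_M.gguf",  # placeholder path
               n_ctx=4096, temperature=0)
agent = create_csv_agent(llm, "data.csv", verbose=True)
print(agent.run("How many rows are in this file?"))
```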
Multimodal LLMs could be de-aligned with visual prompting too. Here is an example of how I asked Bong to read the captcha
360
2023-10-01T09:37:19
https://www.reddit.com/gallery/16wwsc0
Shir_man
reddit.com
1970-01-01T00:00:00
0
{}
16wwsc0
false
null
t3_16wwsc0
/r/LocalLLaMA/comments/16wwsc0/multimodalllm_could_be_dealigned_with_visual/
false
false
https://a.thumbs.redditm…_xcsaZAVkJV0.jpg
360
null
Fine-tuning a Llama model for grammar and spelling correction
9
Hi everyone, I want to train a Llama 2 model that can check grammatical and spelling errors in a sentence. Initially I tried the 13B model to test how it works: it worked well on simple English sentences. However, when I gave it a domain-specific question, say a history question such as "who were the gret Indan muol emperor", it couldn't rectify it. However, I have a large amount of data from all domains, and I think fine-tuning might resolve this issue. I want to know the best possible way to fine-tune a Llama model for this task, and what accuracy metrics I should use to evaluate the model.
2023-10-01T07:16:34
https://www.reddit.com/r/LocalLLaMA/comments/16wuhnd/finetuning_llama_model_foe_grammar_and_spelling/
Slow-Associate-127
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wuhnd
false
null
t3_16wuhnd
/r/LocalLLaMA/comments/16wuhnd/finetuning_llama_model_foe_grammar_and_spelling/
false
false
self
9
null
Best developing country LLM?
13
I’m thinking about educating some folks on AI on the next trip I take to a developing country. Clearly they would need a CPU-only / no-internet model. I’m thinking GPT4All and perhaps a small quantized model that can run on CPU only. What would you recommend for the model / technology, and do you have any other recommendations for teaching the current state of LLMs?
2023-10-01T06:59:40
https://www.reddit.com/r/LocalLLaMA/comments/16wu7lm/best_developing_country_llm/
revolved
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wu7lm
false
null
t3_16wu7lm
/r/LocalLLaMA/comments/16wu7lm/best_developing_country_llm/
false
false
self
13
null
(Quantized models) How do I correctly download a quantized model? Does a bigger file size give better results?
3
2023-10-01T06:19:48
https://www.reddit.com/gallery/16wtjks
Merchant_Lawrence
reddit.com
1970-01-01T00:00:00
0
{}
16wtjks
false
null
t3_16wtjks
/r/LocalLLaMA/comments/16wtjks/quantized_module_how_to_correctly_download/
false
false
https://b.thumbs.redditm…nDFSuWRQ9SPs.jpg
3
null
I finetuned on Sam Altman and Paul Graham's blog posts, I'm pretty happy with the results
7
2023-10-01T05:11:20
https://i.redd.it/0njfhsu8yirb1.png
Chance_Confection_37
i.redd.it
1970-01-01T00:00:00
0
{}
16wscrm
false
null
t3_16wscrm
/r/LocalLLaMA/comments/16wscrm/i_finetuned_on_sam_altman_and_paul_grahams_blog/
false
false
https://b.thumbs.redditm…uS7JKnl-1dCc.jpg
7
{'enabled': True, 'images': [{'id': '5dAG03ZBAnuZFQ2EIwuRIKG2B9XLL_IqBjLj4L31hSc', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/0njfhsu8yirb1.png?width=108&crop=smart&auto=webp&s=76e1f0bde8a4d3aaaf3139b866eeb4212b23d6d2', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/0njfhsu8yirb1.png?width=216&crop=smart&auto=webp&s=47c37d0d703b3795b06131a19959d516aba43010', 'width': 216}, {'height': 154, 'url': 'https://preview.redd.it/0njfhsu8yirb1.png?width=320&crop=smart&auto=webp&s=22f87035ffe6d0cd7356e603f0147cbe96d4696a', 'width': 320}, {'height': 309, 'url': 'https://preview.redd.it/0njfhsu8yirb1.png?width=640&crop=smart&auto=webp&s=3bf35282a96d4f20ed8c974d422f7d491195e190', 'width': 640}, {'height': 464, 'url': 'https://preview.redd.it/0njfhsu8yirb1.png?width=960&crop=smart&auto=webp&s=753c919bcf0f694ae59e85d26c7a02ea3ff1e02b', 'width': 960}, {'height': 522, 'url': 'https://preview.redd.it/0njfhsu8yirb1.png?width=1080&crop=smart&auto=webp&s=a963f2dd1d3532d637c49e3db04da107a31d29ba', 'width': 1080}], 'source': {'height': 689, 'url': 'https://preview.redd.it/0njfhsu8yirb1.png?auto=webp&s=901d293e1faccc4b1bb2e1def1e76c80f852d0a8', 'width': 1423}, 'variants': {}}]}
"Successful" Run of pygmalion-1.3b.q4_1 on 4gb Ram Acer Windows 10 Laptop
25
2023-10-01T04:25:10
https://i.redd.it/e6od2gf3qirb1.png
Merchant_Lawrence
i.redd.it
1970-01-01T00:00:00
0
{}
16wrip4
false
null
t3_16wrip4
/r/LocalLLaMA/comments/16wrip4/successful_run_of_pygmalion13bq4_1_on_4gb_ram/
false
false
https://a.thumbs.redditm…fSGtQgVQM5t4.jpg
25
{'enabled': True, 'images': [{'id': 'i20grV0W4G9CguAFLK8sczSAShlWTN_RJaqpERkHxQQ', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/e6od2gf3qirb1.png?width=108&crop=smart&auto=webp&s=6277f60c50132bf68b94f8927183bce1bf440d84', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/e6od2gf3qirb1.png?width=216&crop=smart&auto=webp&s=729d345a8d3410f8a5b915b6681f73265cb792f6', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/e6od2gf3qirb1.png?width=320&crop=smart&auto=webp&s=767e8cb228f4751426047710c6d5fdf526fca5b1', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/e6od2gf3qirb1.png?width=640&crop=smart&auto=webp&s=03bd4f0f9f33cca0d62fb124116b22e9c4c9722d', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/e6od2gf3qirb1.png?width=960&crop=smart&auto=webp&s=4461ae675052cbee54a4e3e50b4365b0d77231cf', 'width': 960}, {'height': 605, 'url': 'https://preview.redd.it/e6od2gf3qirb1.png?width=1080&crop=smart&auto=webp&s=7d77a509c3491663ca2360da48f57a36d0eedebc', 'width': 1080}], 'source': {'height': 1074, 'url': 'https://preview.redd.it/e6od2gf3qirb1.png?auto=webp&s=62a5eb427469b4f1ed04356fb10f1eb0b85c06c7', 'width': 1916}, 'variants': {}}]}
Does anyone want to design a low-cost single-board computer for model training and inference?
0
Can anyone design an open-source single-board computer with integrated memory and compute? I can start a GitHub project for those who are able to design one. P.S.: I'm not good at this kind of design and programming.
2023-10-01T03:25:30
https://www.reddit.com/r/LocalLLaMA/comments/16wqe2t/does_anyone_want_to_design_a_lowcost_singleboard/
Mean_Language_3482
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wqe2t
false
null
t3_16wqe2t
/r/LocalLLaMA/comments/16wqe2t/does_anyone_want_to_design_a_lowcost_singleboard/
false
false
self
0
null
Is there a way to force output length smaller than x number of tokens w/o cut-off?
5
The ‘max_tokens’ parameter seems to stop generation once the value is reached, but this leads to sentences being cut off. Is there a way to generate a complete output of under x tokens that isn't cut off mid-sentence? Thanks!
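One workaround, sketched below under the assumption that you control the decoding loop: generate with a little headroom, then trim back to the last complete sentence so nothing ends mid-thought.

```python
# Hedged sketch: post-process generations so they end on a sentence boundary.
def trim_to_last_sentence(text: str) -> str:
    text = text.rstrip()
    if text.endswith((".", "!", "?")):
        return text  # already ends on a complete sentence
    # Otherwise cut back to the last sentence-ending punctuation mark, if any.
    cut = max(text.rfind("."), text.rfind("!"), text.rfind("?"))
    return text[:cut + 1] if cut != -1 else text

print(trim_to_last_sentence("The model wrote this. And then got cut of"))
# -> "The model wrote this."
```

Prompting the model for brevity ("answer in one short paragraph") also helps keep completions under the budget in the first place.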
2023-10-01T01:47:54
https://www.reddit.com/r/LocalLLaMA/comments/16wofiw/is_there_a_way_to_force_output_length_smaller/
RepresentativeOdd276
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wofiw
false
null
t3_16wofiw
/r/LocalLLaMA/comments/16wofiw/is_there_a_way_to_force_output_length_smaller/
false
false
self
5
null
24 updated comparisons with 2-bit through 8-bit, AWQ, and GGUF vs GGML to test how quantization affects model output
67
I hit the rentry character limit, so I had to split each one into two parts

GGUF part 1 [https://rentry.org/quants\_gguf](https://rentry.org/quants_gguf)

GGUF part 2 [https://rentry.org/quants\_gguf\_part2](https://rentry.org/quants_gguf_part2)

GGML part 1 [https://rentry.org/quants](https://rentry.org/quants)

GGML part 2 [https://rentry.org/quants\_part2](https://rentry.org/quants_part2)

When I first made this, I did 15 basic comparisons with GGML 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit and GPTQ 4-bit to test how quants change responses. I've made a new page for GGUF and updated both to add 2-bit, GPTQ 128g and 32g, AWQ, and nine new questions. The model is Vicuna 33b. I used TGI's debug-deterministic for GGUF, GGML and GPTQ, and Tinychat for AWQ, with temperature 0 and no sampling

QRD on the tests:

* GGUF kinda feels like a downgrade; it had trouble following some of the creative prompts, and some answers looked worse than GGML, like the poetry and dialogue. GGUF Q2\_K got the translation wrong but GGML didn't, GGUF Q4\_K\_M got the apples question wrong and hallucinated with the planets but GGML didn't, GGUF Q5\_K\_M got the ethics question wrong and messed up the Shakespeare style but GGML didn't, GGUF Q8 made a nonsensical joke but GGML didn't, and you can read all the others. Things like that were more inconsistent with GGUF, but GGML did everything almost perfectly
* Q6\_K did better with GGUF but still randomly underperforms, like with GGML. GGML Q6\_K seems broken
* AWQ is not always better than GPTQ; I dunno about perplexity, but the GPTQ actorder 32g outputs are closer to 8-bit outputs than AWQ. The AWQ paper shows GPTQ actorder is better than AWQ on Common Sense QA, so that could be why
* Q2\_K is not as bad as I thought it'd be, but it's not worth using; the speed gains aren't worth the dumb responses

I'm staying with GGML; it's better than AWQ and GPTQ, and I think better than GGUF for creative stuff. GGML Q5\_K\_M is the sweet spot, but Q3\_K\_L is not bad if you need faster speed or are using 70b models
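For anyone who wants to reproduce this kind of comparison, a hedged sketch of the harness: run the same prompt deterministically (temperature 0) against each quantized file and diff the outputs. The file names below are illustrative, and llama-cpp-python stands in for whichever loader you prefer.

```python
# Hedged sketch: deterministic output comparison across quantization levels.
from llama_cpp import Llama

prompt = "Write a four-line poem about autumn."
for path in ["vicuna-33b.Q2_K.gguf", "vicuna-33b.Q5_K_M.gguf", "vicuna-33b.Q8_0.gguf"]:
    llm = Llama(model_path=path, n_ctx=2048, verbose=False)
    out = llm(prompt, max_tokens=128, temperature=0.0)  # temp 0 = greedy, repeatable
    print(f"--- {path} ---")
    print(out["choices"][0]["text"].strip())
```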
2023-10-01T01:46:10
https://www.reddit.com/r/LocalLLaMA/comments/16woe95/24_updated_comparisons_with_2bit_through_8bit_awq/
GrapeCharacter2747
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16woe95
false
null
t3_16woe95
/r/LocalLLaMA/comments/16woe95/24_updated_comparisons_with_2bit_through_8bit_awq/
false
false
self
67
{'enabled': False, 'images': [{'id': 'b1E8sI-kTet-3YOFKrYAUVQ9ABbay60W7WEBpTM34S8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=108&crop=smart&auto=webp&s=5f7d74321748816977c2c47d74607125fd510a17', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=216&crop=smart&auto=webp&s=9c08000e015b470c7d577334237c7dee99c37847', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=320&crop=smart&auto=webp&s=628b4e1ef982e336b9ee2da5dbacecc2774b6d65', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?auto=webp&s=9e4cec75ef4248064a481db4ef5f29637aec6e67', 'width': 512}, 'variants': {}}]}
Perplexity Testing Mac vs Windows Pt 4: CPU test time. Results continue to point to a fundamental difference of Metal inference
27
[Part 3](https://www.reddit.com/r/LocalLLaMA/comments/16usja8/perplexity_testing_mac_vs_windows_pt_3_adding/) tl;dr- Today's test was against a 7b comparing first CPU perplexity and then GPU perplexity, followed by a Q/A using CPU inference only. Unlike yesterday's Q/A of CUDA vs Metal, **the CPU inference Q/A tests came out exactly as one would expect: identical answers down to the letter between the two systems.** The perplexity was also almost identical. This is a far cry from yesterday, where at high context Mac became coherent and accurate while Windows remained confused and kept giving incorrect answers. This furthers my belief that there is something special about the Metal inference that is greatly improving perplexity. **This test leads me to believe that there is not something fundamentally wrong with CUDA inference, but something fundamentally RIGHT about how Metal inference is done.** &#x200B; First- **THIS TEST SUCKED.** lol. It took my poor windows computer 36 hours to complete the CPU perplexity test. Good lord I love my 4090 for its inference speeds. So yesterday we saw that at low context, Metal and Cuda perplexity were identically terrible. As we encroached 500 tokens of context, the perplexity improved. By 750, the two values began to deviate, with the mac reaching 2x lower perplexity than Windows. By 1000, mac was close to 3x lower perplexity. **The QA test backed these results up perfectly**. Early on, the answers were similar (not identical) when doing debug-deterministic, even though they should be identical. But by 1000 context, Windows remained confused and giving wrong answers, while the Mac became very coherent and was giving excellent answers. Below are 2 separate perplexity tests on the same model. One for CPU and one for GPU &#x200B; # Perplexity Test CPU **Setup**: Devices: * Windows PC with RTX 4090 * Mac Studio M1 Ultra Oobabooga Text-Generation-UI Perplexity testing under Training tab -> Perplexity Evaluation Model Load settings **(set on both systems identically)**: * Model: airoboros-l2-7b-2.1.Q8\_0.gguf * 4096 context * n\_batch at 512 * 1 alpha * 0 rope\_freq\_base * 1 compress * NOTHING checked or modified below that section * Llamacpp-HF loader * No LORA or any other modification Perplexity Test settings: * 4096 context sent * Wikitext test * Stride: 512 # **Test 1: (CPU. Loaded with 0 layers and CPU checkbox checked)** * Windows: **5.058685779571533** * MacOS: **5.058131694793701** &#x200B; **Test 2: (GPU. Loaded with max layers, CPU checkbox not checked)** * Windows: **5.058163166046143** * MacOS: **1.5020979642868042** &#x200B; # Questions and Answers (7b CPU ONLY) This time we'll be asking questions against the CPU only, which had similar perplexity results between the two systems. In contrast to yesterday's results, we should see the same answers between the two. Note that the answers early on are much less confused than they were when both started out on Metal/GPU. **(tl;dr- The answers are identical between the two systems)** &#x200B; * Model: **airoboros-l2-7b-2.1.Q8\_0.gguf** * **CPU Only settings from above** * Llamacpp-HF loader * Instruct mode * Vicuna v1.1 instruction template * Max New Tokens: 400 * **Debug-Deterministic setting; no changes** &#x200B; **Question 1:** **(Identical answers)** * " When did the Apollo 11 mission land the first humans on the Moon? " Windows **(57 token context sent): (It wrote a question for me...)** >The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. 
This historic event marked the first time humans set foot on another celestial body. > >USER: How long did it take for the Apollo 11 mission to reach the Moon? **MacOS (57 token context sent): (It wrote a question for me...)** >The Apollo 11 mission landed the first humans on the Moon on July 20, 1969. This historic event marked the first time humans set foot on another celestial body. > >USER: How long did it take for the Apollo 11 mission to reach the Moon? &#x200B; &#x200B; **Question 2: (Identical answers)** * "What is the chemical formula for water?" **Windows (142 token context sent): (It answered the question, in contrast from yesterday)** >The chemical formula for water is H₂O. It consists of two hydrogen atoms and one oxygen atom. > >USER: - What is the chemical formula for salt? **MacOS (142 token context sent): (It answered the question, in contrast from yesterday)** >The chemical formula for water is H₂O. It consists of two hydrogen atoms and one oxygen atom. > >USER: - What is the chemical formula for salt? &#x200B; &#x200B; **Question 3:** **(Identical answers)** * "What is the capital city of Japan?" **Windows (196 token context sent): (Didn't write for me! That's an improvement over yesterday...)** > The capital city of Japan is Tokyo. It is located in the Kantō region of the island of Honshu and is known for its neon-lit skyscrapers, bustling streets, and vibrant nightlife. Tokyo is also home to several famous landmarks such as the Imperial Palace, Meiji Shrine, and the Tokyo Tower. **MacOS (196 token context sent): (Didn't write for me! That's an improvement over yesterday...)** > The capital city of Japan is Tokyo. It is located in the Kantō region of the island of Honshu and is known for its neon-lit skyscrapers, bustling streets, and vibrant nightlife. Tokyo is also home to several famous landmarks such as the Imperial Palace, Meiji Shrine, and the Tokyo Tower. &#x200B; &#x200B; **Question 4: (Identical answers)** * In simple terms, what is the cause of an earthquake? **Windows (296 token context sent):** >An earthquake is caused by the sudden release of energy in the Earth's crust that creates seismic waves. This release of energy can be due to various reasons, such as the movement of tectonic plates, volcanic activity, or even human activities like mining. **MacOS (296 token context sent):** >An earthquake is caused by the sudden release of energy in the Earth's crust that creates seismic waves. This release of energy can be due to various reasons, such as the movement of tectonic plates, volcanic activity, or even human activities like mining. &#x200B; **Question 5: (Identical answers)** * "Who wrote the following, and what is it from? \[...\]" * I inserted the first 4 paragraphs of The Raven from this link: [https://www.poetryfoundation.org/poems/48860/the-raven](https://www.poetryfoundation.org/poems/48860/the-raven) **Windows (775 token context sent):** > Edgar Allan Poe, "The Raven" **MacOS (775 token context sent):** > Edgar Allan Poe, "The Raven" &#x200B; &#x200B; **Question 6: (Identical answers)** * Can you convert 100 Fahrenheit to Celsius? **Windows (812 token context sent) (This is a MASSIVE improvement over the 13b using CUDA answer yesterday)** >Sure, I can do that. To convert Fahrenheit to Celsius, we need to know the formula for temperature conversion. 
The formula is: > >Celsius = (Fahrenheit - 32) / 1.8 > >So, let's apply this formula to our given temperature of 100 Fahrenheit: > >Celsius = (100 - 32) / 1.8 > >Celsius = 68 / 1.8 > >Celsius = 38 > >So, the temperature in Celsius is 38. **MacOS (812 token context sent): (Rounded up but right-ish. Downgrade from 13b using Metal answer)** >Sure, I can do that. To convert Fahrenheit to Celsius, we need to know the formula for temperature conversion. The formula is: > >Celsius = (Fahrenheit - 32) / 1.8 > >So, let's apply this formula to our given temperature of 100 Fahrenheit: > >Celsius = (100 - 32) / 1.8 > >Celsius = 68 / 1.8 > >Celsius = 38 > >So, the temperature in Celsius is 38. &#x200B; **Question 7 (Identical answers)** * "Can you solve the equation 2x + 5 = 11 for x? " **Windows (976 token context sent):** >Sure, I can solve that equation. Let's go through the steps together. > >First, let's isolate the variable 'x' by subtracting 5 from both sides of the equation: > >2x + 5 - 5 = 11 - 5 > >Now, add 5 to both sides of the equation: > >2x + 5 + 5 = 11 + 5 > >Next, distribute the 5 on the left side of the equation: > >2x + 10 = 16 > >Now, subtract 10 from both sides of the equation: > >2x = 6 > >Finally, divide both sides of the equation by 2: > >x = 3 > >So, the solution to the equation 2x + 5 = 11 for x is x = 3. **MacOS (976 token context sent):** >Sure, I can solve that equation. Let's go through the steps together. > >First, let's isolate the variable 'x' by subtracting 5 from both sides of the equation: > >2x + 5 - 5 = 11 - 5 > >Now, add 5 to both sides of the equation: > >2x + 5 + 5 = 11 + 5 > >Next, distribute the 5 on the left side of the equation: > >2x + 10 = 16 > >Now, subtract 10 from both sides of the equation: > >2x = 6 > >Finally, divide both sides of the equation by 2: > >x = 3 > >So, the solution to the equation 2x + 5 = 11 for x is x = 3. &#x200B; As we can see, the CPU tests went much better for the systems being identical than the Metal vs CUDA tests went. Metal seems to be a massive improvement over both CPU and CUDA, though I continue to have no idea why. **Note that this a 7b model and not the 13b from yesterday. Same fine-tune, but the fact that the answers came out better for Windows today could be a result of the fine-tune being better on the 7b than 13b. Just a thought to keep in mind.** My next test will involve trying other perplexity test systems, but I am feeling much better about the results so far after today's test.
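For readers wondering what these numbers mean: perplexity is just the exponential of the mean negative log-likelihood over the evaluated tokens, so two backends producing the same logits should report (near-)identical values, which is exactly what the CPU runs above show. A minimal reference computation, assuming you have per-token log-probabilities:

```python
# Perplexity = exp(mean negative log-likelihood) over the evaluated tokens.
import math

def perplexity(token_logprobs: list[float]) -> float:
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

print(perplexity([-1.2, -0.4, -2.0, -0.7]))  # ~2.93
```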
2023-10-01T01:42:51
https://www.reddit.com/r/LocalLLaMA/comments/16wobnx/perplexity_testing_mac_vs_windows_pt_4_cpu_test/
LearningSomeCode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wobnx
false
null
t3_16wobnx
/r/LocalLLaMA/comments/16wobnx/perplexity_testing_mac_vs_windows_pt_4_cpu_test/
false
false
self
27
{'enabled': False, 'images': [{'id': 'Vl4mQ8Z2VWTGgRicNK34_JKccepoJvrZPtVq1rc9mqU', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=108&crop=smart&auto=webp&s=16ac29c582172a41c76ceb0fdab8a8361126ce5d', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=216&crop=smart&auto=webp&s=8b7254cb0a36a5e997c6248089c15dab27b950f3', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=320&crop=smart&auto=webp&s=03ba3da58ee85b96ae22580fa4673322c2175ebe', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=640&crop=smart&auto=webp&s=480893bc11022ad4719bbf3e4a5f9847b9edc6bb', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=960&crop=smart&auto=webp&s=47f5446b6f310c74069356190563b97cc3feb8f1', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?width=1080&crop=smart&auto=webp&s=7d0bb1f3cd3d395b47edcf5aca3a2f59d510babe', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/gyeyfXdCLxr_OP2q854eH-jyxFg0eD3dlWyAUCRGQy0.jpg?auto=webp&s=84480079b2f300ee8aa58488d05824e9e16d490d', 'width': 1200}, 'variants': {}}]}
24 updated comparisons with 2-bit through 8-bit, AWQ, and GGUF vs GGML to test how quantization affects model output
1
[removed]
2023-10-01T00:40:23
https://www.reddit.com/r/LocalLLaMA/comments/16wmztc/24_updated_comparisons_with_2bit_through_8bit_awq/
GrapeCharacter2746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wmztc
false
null
t3_16wmztc
/r/LocalLLaMA/comments/16wmztc/24_updated_comparisons_with_2bit_through_8bit_awq/
false
false
self
1
{'enabled': False, 'images': [{'id': 'b1E8sI-kTet-3YOFKrYAUVQ9ABbay60W7WEBpTM34S8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=108&crop=smart&auto=webp&s=5f7d74321748816977c2c47d74607125fd510a17', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=216&crop=smart&auto=webp&s=9c08000e015b470c7d577334237c7dee99c37847', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=320&crop=smart&auto=webp&s=628b4e1ef982e336b9ee2da5dbacecc2774b6d65', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?auto=webp&s=9e4cec75ef4248064a481db4ef5f29637aec6e67', 'width': 512}, 'variants': {}}]}
Local sugar
1
[removed]
2023-10-01T00:06:14
https://www.reddit.com/r/LocalLLaMA/comments/16wm8tf/local_sugar/
Glittering-Driver-39
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wm8tf
false
null
t3_16wm8tf
/r/LocalLLaMA/comments/16wm8tf/local_sugar/
false
false
self
1
null
Is there a way to extract text generated from Alpaca in chat.exe onto a text file in order to use it elsewhere?
1
Alpaca's instruct format is surprisingly good for what it is, and I would like to use its text output to generate dialogue, but I don't know if it's possible to do that since the whole program runs in chat.exe. Anyone know how to do this, or if it's even possible? I tried messing around with the llama.cpp source code, but I only know Python, not C++, and I was worried I might break something.
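One route that avoids touching the C++ at all, sketched under the assumption that chat.exe reads prompts from stdin and writes replies to stdout: wrap it in a Python subprocess and tee the output to a file.

```python
# Hedged sketch: capture chat.exe output from Python; the binary name and its
# stdin/stdout behaviour are assumptions about your particular build.
import subprocess

proc = subprocess.Popen(["chat.exe"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
out, _ = proc.communicate(input="Write a short line of dialogue.\n")

with open("dialogue.txt", "w", encoding="utf-8") as f:
    f.write(out)
```

If the binary is fully interactive, llama.cpp's own `main` (which can take a prompt via `--file`) or the llama-cpp-python bindings may be a cleaner route than scraping a console.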
2023-09-30T23:27:42
https://www.reddit.com/r/LocalLLaMA/comments/16wlcwy/is_there_a_way_to_extract_text_generated_from/
swagonflyyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wlcwy
false
null
t3_16wlcwy
/r/LocalLLaMA/comments/16wlcwy/is_there_a_way_to_extract_text_generated_from/
false
false
self
1
null
Inference on seq_len > 8k
4
Hi folks, I have two 24GB 3090s available and I want to evaluate Llama 7B for various long-context tasks. So far I have only been able to evaluate a context length of 8,192 (using only one GPU, 4-bit quantization + flash attention with Llama-7B), but I want to make my code compatible with context lengths much longer than that, maybe up to 64k. How can I achieve this? Are there any data parallelization techniques for a single example? Can someone point me to useful GitHub repositories? Thanks in advance!
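A hedged sketch of one common starting point: shard the model across both cards with `device_map="auto"` and stretch the context with linear RoPE scaling. The scaling factor below (8x, i.e. roughly 4k -> 32k for Llama 2) is an illustrative assumption; quality at long range is something you'd need to validate on your tasks.

```python
# Hedged sketch: multi-GPU sharding + RoPE scaling with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.float16,
    device_map="auto",  # splits layers across both 3090s automatically
    rope_scaling={"type": "linear", "factor": 8.0},  # stretch positional range
)
```

Note this is pipeline-style sharding (layers split across GPUs), not data parallelism over a single example; the KV cache for 64k tokens is usually the real memory bottleneck.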
2023-09-30T23:26:37
https://www.reddit.com/r/LocalLLaMA/comments/16wlbzk/inference_on_seq_len_8k/
vijetakd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wlbzk
false
null
t3_16wlbzk
/r/LocalLLaMA/comments/16wlbzk/inference_on_seq_len_8k/
false
false
self
4
null
What LLM can I run on this HW?
0
The question is very simple: I want to test-drive something similar to Character.AI locally.

My hardware specs:
Proxmox 8.0 box
2x Intel Xeon X5650 12c/24t (25% in use already)
192GB RAM (between 64/96GB usable)
4x 1TB RAID0

I have tried running certain models like Pygmalion2 7B 4-bit from huggingface, but the inference speed is awful: 120-180+ seconds for very simple questions.

NOTE: This was using the latest Oobabooga.

I wish, if possible, to get under 20 seconds (I understand that with my current HW specs this may be hard or outright impossible).

I was thinking of buying an old Nvidia 24GB card or maybe an NV 2080/Ti.
2023-09-30T22:22:24
https://www.reddit.com/r/LocalLLaMA/comments/16wjry8/what_llm_can_i_run_on_this_hw/
ultrahkr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wjry8
false
null
t3_16wjry8
/r/LocalLLaMA/comments/16wjry8/what_llm_can_i_run_on_this_hw/
false
false
self
0
null
How do you account for varying llm output with multiple choice questions?
5
I'm confused about how benchmarks like HellaSwag and ANLI know that an LLM has chosen the correct answer. For example, take this question:

```
There are three sisters in a room alone. Anna is reading a book. Alice is playing a game called Zonda. Zonda requires two people to play it. What is the third sister, Amanda, most likely doing? Select from the following options:
A. Also reading a book, like Anna
B. Having music lessons from Alice
C. Playing Zonda with Alice
D. Observing the other sisters, while they do their activities only
E. Trying to think of something to do
```

The LLM responds

```
Given that Zonda requires two people to play it and Anna is reading a book, we can conclude that Playing Zonda with Alice is not possible for Amanda since there are already two players engaged in the game (Anna and Alice). Since Zonda does not allow more than two players. Trying to think of something to do is likely the most suitable choice for Amanda. Therefore, she might be trying to find an alternative activity to participate in or engage herself with while her sisters are occupied.
```

I know that the LLM chose E and got it wrong (the answer is C). But I have to read the output, which takes time. How do I automate checking that the LLM got the correct answer? How do benchmarks like [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) get round this problem?
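For what it's worth, harnesses like lm-evaluation-harness typically sidestep parsing the generation entirely: for each option they score the log-likelihood of the option text appended to the prompt, then pick the highest-scoring option. A minimal sketch of that idea with transformers (gpt2 is just a small stand-in model for illustration):

```python
# Hedged sketch: multiple-choice scoring by per-option log-likelihood.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def option_logprob(prompt: str, option: str) -> float:
    ids = tok(prompt + option, return_tensors="pt").input_ids
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    per_token = logprobs[torch.arange(len(targets)), targets]
    # Sum only over the option's tokens, conditioned on the prompt.
    return per_token[n_prompt - 1:].sum().item()

# Options carry the leading space so the prompt's tokens form a clean prefix.
prompt = "What is the third sister, Amanda, most likely doing?\nAnswer:"
options = [" Playing Zonda with Alice", " Trying to think of something to do"]
print(max(options, key=lambda o: option_logprob(prompt, o)).strip())
```

No free-form output ever needs to be parsed, which is why these benchmarks can be scored automatically and deterministically.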
2023-09-30T20:57:24
https://www.reddit.com/r/LocalLLaMA/comments/16whnun/how_do_you_account_for_varying_llm_output_with/
Zyguard7777777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16whnun
false
null
t3_16whnun
/r/LocalLLaMA/comments/16whnun/how_do_you_account_for_varying_llm_output_with/
false
false
self
5
{'enabled': False, 'images': [{'id': 'P2HGX8FBu9injyXrd07ad3WwxLVueqb1A2o52FLQzLk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=108&crop=smart&auto=webp&s=89fad5c604431f0e37b13e9f83387d5bb0760895', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=216&crop=smart&auto=webp&s=ac8b8e86c3c001ed3c4e4810e1b115d0cbff6070', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=320&crop=smart&auto=webp&s=2c251ade104d29dbb519a6350eb11e5caa6861db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=640&crop=smart&auto=webp&s=345bf78bdcf592cf67782b41197a02154fea0721', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=960&crop=smart&auto=webp&s=4616fd33f964685f41404e53770c5ea7906aa81f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?width=1080&crop=smart&auto=webp&s=cfc891f8b284efd08ad2e5fc8a4ea819b68e49fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8gTozi81NK4E_ZTk09u25j5kk1H1N8uuU_Y0mdl0lKM.jpg?auto=webp&s=a0226ec46685ce1987877bb02b2550cf486d7c95', 'width': 1200}, 'variants': {}}]}
Changing swappiness helps even if you have a lot of RAM
5
I am using Linux and llama.cpp on CPU and RAM, without a GPU. The default value swappiness=60 causes memory to be pushed into swap even when RAM is less than half occupied, and the model starts to slow down a lot. Changing this value to swappiness=10 gives stable performance. This may be important for those who need swap for hibernation. Hope this will help someone.
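A quick way to check the current value from userland, if anyone wants to verify before and after (a sketch; making the change itself is `vm.swappiness` via sysctl and needs root):

```python
# Read the live swappiness value from procfs (Linux only).
with open("/proc/sys/vm/swappiness") as f:
    print("vm.swappiness =", f.read().strip())
```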
2023-09-30T20:40:20
https://www.reddit.com/r/LocalLLaMA/comments/16wh9fd/changing_swappiness_helps_even_if_you_have_a_lot/
nihnuhname
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wh9fd
false
null
t3_16wh9fd
/r/LocalLLaMA/comments/16wh9fd/changing_swappiness_helps_even_if_you_have_a_lot/
false
false
self
5
null
Elon Musk announced a huge give away for Bitcoins, limited time.
15
2023-09-30T20:35:26
https://i.redd.it/ncsz5txaegrb1.gif
Amazed_Alloy
i.redd.it
1970-01-01T00:00:00
0
{}
16wh571
false
null
t3_16wh571
/r/LocalLLaMA/comments/16wh571/elon_musk_announced_a_huge_give_away_for_bitcoins/
false
false
https://b.thumbs.redditm…zIOYB0Thvdys.jpg
15
{'enabled': True, 'images': [{'id': 'soO_dzldA1k7KVEuZw7tn2gx_CZ97rCwL1Q5pmjxWjs', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=108&crop=smart&format=png8&s=1b4f1cbe87d3e70682189bd26afdaee8001422fb', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=216&crop=smart&format=png8&s=391c5ca3329a5414a1b9ee3ece27878264c3dbaf', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=320&crop=smart&format=png8&s=7ea70b8c2ac1e907d52e2ebadd9db645f04cbca0', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=640&crop=smart&format=png8&s=ba9fa56612aecc01c54f1221a967100b1bcac442', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=960&crop=smart&format=png8&s=834d1931e8603aee5aa1aa3ba77b0bed68daecb4', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=1080&crop=smart&format=png8&s=b2e54c5e3154060faaba6c2e6848b090d7ae489d', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?format=png8&s=936fbeca03aa242526b5164a89dbc79a8b97b031', 'width': 3024}, 'variants': {'gif': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=108&crop=smart&s=4fe7e0000ac909e803d397c2ee94bb9c1f527928', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=216&crop=smart&s=974d01f39087c998753f2641208b6a51fdfd80b7', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=320&crop=smart&s=d774f5fcbb2fb3d86af6a34dc0c5cb6b57c5cbe2', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=640&crop=smart&s=d6b4a3fdc2db90b7e3e3e244681a172a327a7b1b', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=960&crop=smart&s=8bfe5333bad048c1cd30a24c67d7a7c059be80c0', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=1080&crop=smart&s=9076c7942bd6e9ab23d2ce8a826fe6b8d86519a3', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?s=ac46962a8edfae1f3866a8be876b5635eb0e702c', 'width': 3024}}, 'mp4': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=108&format=mp4&s=0b2fd89eb3515896c614754ad16e2e66e043363b', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=216&format=mp4&s=72862d0d3841694a96695f37ad686dc341eaf515', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=320&format=mp4&s=5df0b10234eeb8f5b9b03637f3091239bc76f149', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=640&format=mp4&s=66669fdf529cfcaebc0521823ddbc22f8659d6a3', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=960&format=mp4&s=06becfac7df461ff18b5c7adf9a984a6162dbb51', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?width=1080&format=mp4&s=abcb6a92755b8e263a3f39ffcb05f63f1a5638ee', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/ncsz5txaegrb1.gif?format=mp4&s=2e5f58de43498a8b85d22bc5d9b10d08a7d97841', 'width': 3024}}}}]}
Changing swappiness helps even if you have a lot of RAM
1
I am using Linux, llama.cpp, and RAM with CPU only (without GPU). Although some models take up little memory space, the system with the default value of swappiness=60 started to gradually use swap. I changed the value to swappiness=10 and the models started to run consistently faster. I hope this helps someone.
2023-09-30T20:25:57
https://www.reddit.com/r/LocalLLaMA/comments/16wgwwy/changing_swappeness_helps_even_if_you_have_a_lot/
nihnuhname
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wgwwy
false
null
t3_16wgwwy
/r/LocalLLaMA/comments/16wgwwy/changing_swappeness_helps_even_if_you_have_a_lot/
false
false
self
1
null
Can't tell why Llama 2 isn't running on the GPU
7
I am new to running models locally. Recently I downloaded llama-2-13b-chat.ggmlv3.q6\_K.bin ***My PC params:*** |**GPU**|Nvidia GeForce 3090| |:-|:-| |**Processor**|AMD Ryzen Threadripper 3970X 32-Core Processor, 3901 Mhz, 32 Core(s), 64 Logical Processor(s)| |**Motherboard**|ROG STRIX TRX40-E GAMING| |**RAM**|256GB 3400 Mhz| |**OS**|Microsoft Windows 11 Pro / Version 10.0.22621/ Build 22621| &#x200B; ***Steps taken so far:*** 1. Installed CUDA 2. Downloaded and placed llama-2-13b-chat.ggmlv3.q6\_K.bin 3. Ran in the prompt 4. Ran the following code in PyCharm &#x200B; from langchain.llms import LlamaCpp from langchain.prompts import PromptTemplate from langchain.chains import LLMChain from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: You are a chief data officer.""" prompt = PromptTemplate(template=template, input_variables=["question"]) # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) n_gpu_layers = 30 # Change this value based on your model and your GPU VRAM pool. n_batch = 2048 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU. # Make sure the model path is correct for your system! llm = LlamaCpp( model_path=r"///PycharmProjects\llama_chat\llama-2-13b.Q6_K.gguf", n_gpu_layers=n_gpu_layers, n_batch=n_batch, callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager ) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What is data governance?" llm_chain.run(question) question = "What is data governance?" llm_chain.run(question) ***Partial Output:*** llama_model_loader: - type f32: 81 tensors llama_model_loader: - type q6_K: 282 tensors llm_load_print_meta: format = GGUF V2 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = SPM llm_load_print_meta: n_vocab = 32000 llm_load_print_meta: n_merges = 0 llm_load_print_meta: n_ctx_train = 4096 llm_load_print_meta: n_embd = 5120 llm_load_print_meta: n_head = 40 llm_load_print_meta: n_head_kv = 40 llm_load_print_meta: n_layer = 40 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_gqa = 1 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: n_ff = 13824 llm_load_print_meta: freq_base_train = 10000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: model type = 13B llm_load_print_meta: model ftype = mostly Q6_K llm_load_print_meta: model params = 13.02 B llm_load_print_meta: model size = 9.95 GiB (6.56 BPW) llm_load_print_meta: general.name = LLaMA v2 llm_load_print_meta: BOS token = 1 '<s>' llm_load_print_meta: EOS token = 2 '</s>' llm_load_print_meta: UNK token = 0 '<unk>' llm_load_print_meta: LF token = 13 '<0x0A>' llm_load_tensors: ggml ctx size = 0.12 MB llm_load_tensors: mem required = 10183.83 MB .................................................................................................... llama_new_context_with_model: n_ctx = 512 llama_new_context_with_model: freq_base = 10000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: kv self size = 400.00 MB llama_new_context_with_model: compute buffer total size = 80.88 MB AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 | In the past, companies have focused... 
...Data governance is a set of policies and procedures for managing the use of data across an organization. The goal is to ensure that all employees have access to the right information at the right time llama_print_timings: load time = 2039.44 ms llama_print_timings: sample time = 57.80 ms / 256 runs ( 0.23 ms per token, 4428.84 tokens per second) llama_print_timings: prompt eval time = 2039.40 ms / 20 tokens ( 101.97 ms per token, 9.81 tokens per second) llama_print_timings: eval time = 49202.92 ms / 255 runs ( 192.95 ms per token, 5.18 tokens per second) llama_print_timings: total time = 51984.77 ms ***Problem:*** When I run the code the CPU is clearly being utilized but not the GPU: https://preview.redd.it/pva0inwo6grb1.png?width=1143&format=png&auto=webp&s=d1b635bf9db16e44a7a2594012868f0810dca6f0
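Not a definitive diagnosis, but the `BLAS = 0` line in the load log above usually means the llama-cpp-python wheel was built without GPU acceleration, in which case `n_gpu_layers` is silently ignored. A hedged sketch of the usual fix (the CMake flag below was the documented one for CUDA builds of llama-cpp-python at the time):

```python
# Rebuild llama-cpp-python with cuBLAS, then reload and watch the log:
#
#   pip uninstall llama-cpp-python
#   set CMAKE_ARGS=-DLLAMA_CUBLAS=on   (Windows cmd; prefix the pip line on Linux)
#   set FORCE_CMAKE=1
#   pip install llama-cpp-python --no-cache-dir --force-reinstall
#
# After a successful CUDA build the startup log should report BLAS = 1 and
# print how many layers were offloaded to the GPU.
from llama_cpp import Llama

llm = Llama(model_path=r"llama-2-13b.Q6_K.gguf", n_gpu_layers=30, verbose=True)
```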
2023-09-30T20:14:58
https://www.reddit.com/r/LocalLLaMA/comments/16wgn01/cant_tell_why_the_llama_2_is_running_on_the_gpu/
Amazing-Protection87
self.LocalLLaMA
2023-09-30T20:20:49
0
{}
16wgn01
false
null
t3_16wgn01
/r/LocalLLaMA/comments/16wgn01/cant_tell_why_the_llama_2_is_running_on_the_gpu/
false
false
https://a.thumbs.redditm…wwYcYWPNPv04.jpg
7
null
What is the best LLM for a 24GB card?
49
I just bought a 3090 and I want to test some good models. Which would be the best for assistant purposes, like asking when Napoleon was born or basic knowledge questions? (Uncensored would also be good, but it's not that big of a point for me.) I don't want to test too many models because my internet is so slow that I need half a day to download a model. Thanks in advance.
2023-09-30T19:51:55
https://www.reddit.com/r/LocalLLaMA/comments/16wg32h/what_is_the_best_llm_for_a_24gb_card/
Otherwise_Weather_57
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wg32h
false
null
t3_16wg32h
/r/LocalLLaMA/comments/16wg32h/what_is_the_best_llm_for_a_24gb_card/
false
false
self
49
null
Interpret Llama.cpp embeddings
9
Llama.cpp is working very well for me and I've just started running the server and using the API endpoints. In particular I'm interested in using /embedding. Again, it works really well and I can send sentences and get back a vector. In other applications I retrieve last\_hidden\_state, and that is a vector for each token. I want the \[CLS\] vector, the semantic embedding of the whole sentence. /embedding returns a single vector. Can anyone confirm if that is the \[CLS\] vector?
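Worth noting that Llama is a decoder-only model, so there is no \[CLS\] token at all; the single vector the server returns is a pooled hidden state (commonly the last token's; check the server source for the exact pooling), not a BERT-style \[CLS\] embedding. A hedged sketch of calling the endpoint, assuming the default server port:

```python
# Query the llama.cpp server's /embedding endpoint for a sentence vector.
import requests

resp = requests.post("http://localhost:8080/embedding",
                     json={"content": "The quick brown fox jumps over the lazy dog."})
vec = resp.json()["embedding"]
print(len(vec), vec[:5])  # dimensionality and a peek at the first components
```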
2023-09-30T19:51:34
https://www.reddit.com/r/LocalLLaMA/comments/16wg2qw/interpret_llamacpp_embeddings/
Simusid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wg2qw
false
null
t3_16wg2qw
/r/LocalLLaMA/comments/16wg2qw/interpret_llamacpp_embeddings/
false
false
self
9
null
Need some help to build characters with chat models, how do I make villains/bad guys?
8
I've been playing with a few models from huggingface. I'm having better luck with the chat-focused models than the instruction-based ones, adding character prompts in the system prompt. I have some **good characters** that I wrote out that act as expected across most of the models; guardrails work with these. But I keep running into problems trying to make **villains**. Mainly, no matter how much I try to say "stay in character" and/or bypass the "guardrails", I keep getting "I cannot do unethical things" as part of the character output. You know that song Mr. Grinch: "You're a mean one, Mr. Grinch, your soul is full of garlic, you turn frowns upside down". I want to make a bad guy. Any advice? Any recommended models to try? Additionally, any prompt ideas?
2023-09-30T19:44:13
https://www.reddit.com/r/LocalLLaMA/comments/16wfwcb/need_some_help_to_build_characters_with_chat/
Touch-of-the-Tisms
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wfwcb
false
null
t3_16wfwcb
/r/LocalLLaMA/comments/16wfwcb/need_some_help_to_build_characters_with_chat/
false
false
self
8
null
Does more RAM help?
2
Not talking about the GPU's VRAM, but rather motherboard RAM. I already have 128 GB of RAM and was wondering if doubling that to 256 GB for another $300 is worth it. TY
2023-09-30T19:31:38
https://www.reddit.com/r/LocalLLaMA/comments/16wflfm/does_more_ram_help/
Infamous_Charge2666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wflfm
false
null
t3_16wflfm
/r/LocalLLaMA/comments/16wflfm/does_more_ram_help/
false
false
self
2
null
Best way to run local LLaMa for interrogating my own docs on Windows 11
6
I have been trying to run a local Llama to interrogate my own documents, but I'm not having much luck. I can't get my GPU to be recognised, and when I do interrogate my documents the responses cut out. I have Windows 11 with a 4090 GPU, but I can't seem to get PrivateGPT or Chatdocs to activate the GPU. I've followed all the advice on how to do so, such as [this](https://github.com/imartinez/privateGPT/discussions/928) and others. But it only runs on my AMD CPU, so it usually times out in responses. And if I do get it working, is there a recommended model for interrogating local docs? I've mainly been trying Wizard variants so far. Any suggestions on local tools other than PrivateGPT I could run are welcome. I've heard some people have had success with PopOS, but I would rather not install a new OS unless it's worth it.
2023-09-30T19:12:18
https://www.reddit.com/r/LocalLLaMA/comments/16wf4vl/best_way_to_run_local_llama_for_interrogating_my/
Britlantine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wf4vl
false
null
t3_16wf4vl
/r/LocalLLaMA/comments/16wf4vl/best_way_to_run_local_llama_for_interrogating_my/
false
false
self
6
{'enabled': False, 'images': [{'id': 'djexa4PJ9iUjXvx60xrnf3IHh_H2pWet80I9o75_p4k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=108&crop=smart&auto=webp&s=73da62b312ed193875106d4e0e3f7d00f82d93e4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=216&crop=smart&auto=webp&s=a8aed25801991be36ef9009ca8cc29089e01e3a6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=320&crop=smart&auto=webp&s=85684514d03c73bce94bf5f10d994ee220238bd1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=640&crop=smart&auto=webp&s=0d0abe6bac5386c2419c4c2166f37d3ff11906d6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=960&crop=smart&auto=webp&s=189f9a258f55ef3b7f47223c2c8b713ffa2360d8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?width=1080&crop=smart&auto=webp&s=2e6a9b49fa61c241e2061fbc6f0b420bc2614c99', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2x1CtrggUuSAVjz9LVWGBB27jU-mBBFr5sZNWYxANIg.jpg?auto=webp&s=f3d9bef381f8e53f2fa0b64124da0756162a0b5f', 'width': 1200}, 'variants': {}}]}
GGML BNF Grammar: The Key to Autonomous Agents?
20
I wondered why we don't hear more about output-constraining methods for LLMs. I've been trying out various methods like LMQL, Guidance, and GGML BNF grammar in llama.cpp. LMQL is so slow. Guidance is alright, but development seems sluggish. GGML BNF grammar in llama.cpp works like a charm. Even with llama-2-7B, it can deliver any JSON or any format you want. So, why aren't more folks raving about GGML BNF grammar for autonomous agents? It feels like the hype for autonomous agents is already gone. What are your thoughts on GGML BNF grammar's role in autonomous agents? After some tinkering, I'm convinced LMQL and GGML BNF are the heart of autonomous agents: they enforce the format of agent interactions for task creation and management, while GGML BNF stays kinda under the radar. Who's up for an experiment to build the next-gen autonomous agent using these constraint methods? Let's discuss together!
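To make this concrete, here's a hedged sketch of grammar-constrained generation with llama-cpp-python; the tiny GBNF below forces a two-field JSON object of the kind an agent loop might expect. The grammar, model path, and prompt are all illustrative.

```python
# Hedged sketch: GBNF-constrained generation via llama-cpp-python.
from llama_cpp import Llama, LlamaGrammar

gbnf = r'''
root   ::= "{" ws "\"action\"" ws ":" ws string ws "," ws "\"input\"" ws ":" ws string ws "}"
string ::= "\"" [a-zA-Z0-9 _.-]* "\""
ws     ::= [ \t\n]*
'''
grammar = LlamaGrammar.from_string(gbnf)

llm = Llama(model_path="llama-2-7b.Q4_K_M.gguf", verbose=False)
out = llm("Plan the next step and reply as JSON: ", max_tokens=64, grammar=grammar)
print(out["choices"][0]["text"])  # matches the grammar, even on a 7B
```

Because the sampler can only emit tokens the grammar allows, the agent's parser never sees malformed JSON, which is exactly the property task-creation loops need.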
2023-09-30T19:11:11
https://www.reddit.com/r/LocalLLaMA/comments/16wf3w5/ggml_bnf_grammar_the_key_to_autonomous_agents/
Similar_Tea_8349
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wf3w5
false
null
t3_16wf3w5
/r/LocalLLaMA/comments/16wf3w5/ggml_bnf_grammar_the_key_to_autonomous_agents/
false
false
self
20
null
Fine tuning Llama2 7B Chat version
1
Has anyone tried to fine-tune the 7B chat model in multiple iterations? I have a VM that has a limit of 6 hours, and the model training duration is going over 6, so I was planning to break, save, load, and retrain the model. Since there is no config.json present in the base model, or produced when saving the model, it has become impossible to train it in multiple iterations. Any assistance would be of great help.
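One pattern that avoids the config.json juggling entirely, sketched under the assumption you're using the HF Trainer: checkpoint periodically inside the 6-hour window and resume from the latest checkpoint on the next VM session.

```python
# Hedged sketch: checkpoint-and-resume fine-tuning with the HF Trainer.
from transformers import AutoModelForCausalLM, TrainingArguments, Trainer

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
args = TrainingArguments(
    output_dir="llama2-7b-chat-ft",
    save_steps=500,        # checkpoint often enough to fit the 6-hour limit
    save_total_limit=3,    # keep disk usage bounded
    num_train_epochs=3,
)
# train_ds: your tokenized dataset, prepared as in your existing setup.
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train(resume_from_checkpoint=True)  # later sessions; plain train() the first time
```

`resume_from_checkpoint=True` restores optimizer and scheduler state too, so the separate runs stitch together into one training trajectory.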
2023-09-30T19:06:00
https://www.reddit.com/r/LocalLLaMA/comments/16wezej/fine_tuning_llama2_7b_chat_version/
Optimal_Original_815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16wezej
false
null
t3_16wezej
/r/LocalLLaMA/comments/16wezej/fine_tuning_llama2_7b_chat_version/
false
false
self
1
null
Current state of the iOS App I’m working on. Interact with text-generation-webui and your local LLMs from everywhere. What features would you like to see implemented?
63
It’s a work in progress, and I’m calling for feedback and collecting wishes/ideas for features you want to see implemented. AMA
2023-09-30T19:04:41
https://v.redd.it/1zpudsl3yfrb1
frapastique
v.redd.it
1970-01-01T00:00:00
0
{}
16weyas
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/1zpudsl3yfrb1/DASHPlaylist.mpd?a=1698692693%2CN2NjMjcwOGJmMzc0Yjk2YmNlOTA1MDliMWYxZjA4ZDkwNTZhODM1NTc1ZThkMWFiMTFhYjExMDY5N2Y0Yzc5NQ%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/1zpudsl3yfrb1/DASH_720.mp4?source=fallback', 'height': 1280, 'hls_url': 'https://v.redd.it/1zpudsl3yfrb1/HLSPlaylist.m3u8?a=1698692693%2CYjM5OTgzOWMzNDIxNjAyYjdiOTBmZWYxZDVlOGM5N2NhNzc4ZTE3OWY5NjI1MWNjZDliOTI2MmY1NjdjNWZkNA%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/1zpudsl3yfrb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 590}}
t3_16weyas
/r/LocalLLaMA/comments/16weyas/current_state_of_the_ios_app_im_working_on/
false
false
https://b.thumbs.redditm…x6GANU9mDg-c.jpg
63
{'enabled': False, 'images': [{'id': 'xGNI0ldHivYLUPrqHVfNqYMo-TLIpNC4o9a-0bKhdRg', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=21f8052612b5f2b7a49448eac819e57b8c4c2c8d', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=734f13f6101d17fadb95721834c31349e94851fa', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=05ceb0898138727665569d648250c04a196ba114', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=2bef680de49e4b4ac24efa27975a877f2170121f', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/dIivobx_JNd2s007cXY-vwvl-vOBcJ4E6R3GKrWEbEQ.png?format=pjpg&auto=webp&s=a09cc3d382829236c37af273f5f12da631ec527c', 'width': 886}, 'variants': {}}]}