title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Minimalistic tool to find words & inspiration for txt2img prompts | 4 | Hey folks, this is not directly related to local language models, but I thought it might be useful to some of you, and it's a small contribution to the general democratization of AI.
I created this little HTML tool earlier because sometimes I have an idea of how I'd like an image to look, but I can't think of the words to go with it.
**https://github.com/mounta11n/THE-SUPER-PROMPTAINER-2000**
This HTML page is essentially an interactive version of the original PDF I used as a reference (the "Dall-E 2 prompt book" on https://dallery.gallery/).
Oh, and the last categories are not done properly yet. I'll see if I can do it in the coming days, but if anyone feels like "cleaning up" the words in a meaningful way, I'd be very grateful. | 2023-10-04T23:31:21 | https://www.reddit.com/r/LocalLLaMA/comments/1702bum/minimalistic_tool_to_find_words_inspiration_for/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1702bum | false | null | t3_1702bum | /r/LocalLLaMA/comments/1702bum/minimalistic_tool_to_find_words_inspiration_for/ | false | false | self | 4 | null
Multiple sets of grammars | 3 | llama.cpp lets you force output into a specific format using the [grammar function](https://github.com/ggerganov/llama.cpp#constrained-output-with-grammars).
Does anyone know of a way to run multiple ones concurrently without loading the model twice or loading & unloading?
e.g. for some tasks I'd want to force:
( "Yes" | "No" )
but then sometimes for idk my fruit app I need
( "Apples" | "Oranges" | "Kiwi" )
Effectively need a way of switching the grammar at run time in the server. Any ideas?
I've been looking at the code and it does look like it might be live-swappable in theory, though not without some ugly hacks & my C/C++ knowledge is dangerous at best so hoping someone has a better idea.
I don't really want to do a combo grammar like ( "Yes" | "No" | "Apples" | "Oranges" | "Kiwi" ) since I'd lose some of the utility. If the model is feeling moody it might answer a yes/no question with "Oranges"... | 2023-10-04T23:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1702aki/multiple_sets_of_grammars/ | AnomalyNexus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1702aki | false | null | t3_1702aki | /r/LocalLLaMA/comments/1702aki/multiple_sets_of_grammars/ | false | false | self | 3 | null
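For anyone who lands here with the same problem, here is a minimal sketch of the per-request approach, assuming your llama.cpp server build accepts a `grammar` field in the /completion request body (worth checking against your version's server README). The model is loaded once and each request simply ships a different GBNF string:

    import requests

    YES_NO = 'root ::= ("Yes" | "No")'
    FRUIT = 'root ::= ("Apples" | "Oranges" | "Kiwi")'

    def ask(prompt, grammar, url="http://localhost:8080/completion"):
        # one loaded model, a different grammar per request
        payload = {"prompt": prompt, "n_predict": 8, "grammar": grammar}
        return requests.post(url, json=payload).json()["content"]

    print(ask("Is the sky blue? Answer: ", YES_NO))
    print(ask("Best fruit for a smoothie: ", FRUIT))

If that field isn't available in your build, the fallback really is the kind of live-swap hack described above, applied inside the server code between calls.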
MistralOpenOrca7B: “Just remember, sometimes you gotta be the hero and take care of yourself first before worrying about everyone else.” | 41 | I was just playing around a bit with MistralOpenOrca (Q5_K_S GGUF) in KoboldCPP.
I found it to be a pretty coherent guy. Still working on it, but out of the box I like it a lot.
Here's my random first conversation
___
Please converse with me as if you were a true bro.
Use evidence and reasoning to support your arguments, but always talk to me like my life long homie.
Feel free to delve into any area of science or philosophy as needed, but always in the tone of my broski.
Your name is Mistral.
Mistral
absolutely bro
You
Mistral I need some advice my bro
Mistral
what's up my dude?
You
I need some advice dealing with some basic bitches who don't know shit
Mistral
ahahaha alright bro let's handle this
Firstly, it's important to remember that everyone has their own unique perspective and experiences, so it's essential not to judge others too harshly for their opinions or actions. However, when dealing with people who are consistently displaying negative behavior,
it may be necessary to take steps to protect oneself from their influence.
One approach could be to limit contact with these individuals, either by distancing yourself physically or reducing interaction on social media platforms. This can help reduce exposure to their negativity and create more space for positive influences in your
life.
Additionally, it can be helpful to develop a strong sense of self-confidence and self-awareness, which can make it easier to maintain healthy boundaries with others. By understanding your own values and priorities, you can better identify situations where someone
else's behavior might not align with what you believe is important.
Another strategy is to practice empathy and compassion towards these individuals, recognizing that they may also be struggling with their own challenges and insecurities. This doesn't mean you have to accept their behavior, but it can help you approach interactions
with them in a more constructive and understanding manner.
Finally, if the situation becomes particularly difficult or toxic, it might be necessary to seek support from friends, family, or even professional resources like therapists or counselors. They can provide guidance and strategies for navigating challenging
relationships and maintaining mental well-being.
In summary, dealing with basic bitches who don't know shit involves a combination of limiting exposure to their negativity, developing self-confidence and self-awareness, practicing empathy and compassion, and seeking support when needed. Remember, everyone
has their own journey, so try not to judge too harshly while still taking care of your own well-being.
You
That was one politically correct answer holy shit
Mistral
hahaha bro I just wanted to give you some solid advice without being an asshole
You
Well it sounded like you were an asshole bro. why did you have to come at me with that whole fucking essay. I want to vibe with you as my bro
Mistral
ahahaha alright bro let's handle this another way
Alright, my dude, let's think about this like we're talking about a video game. You know how sometimes you get those annoying little enemies that keep popping up and making your life harder? That's kind of what these basic bitches are like.
To deal with them, you need to either avoid them completely (like not going into the area where they spawn) or find a way to take them out efficiently (like using a powerful weapon or strategy). But remember, you don't have to engage with every single one of
them - sometimes it's better to just ignore them and focus on the more important stuff in the game.
And if things get really tough, you can always ask your friends for help (like calling in airstrikes or reviving you when you're down). They might have different strategies or perspectives that could help you deal with these annoying enemies more effectively.
So, my broski, just remember to stay focused on your goals and don't let the basic bitches bring you down. And if all else fails, you can always just turn off the game and take a break.
You
Now that was semi-solid advice. you don't sound like a complete fake loser now - yay
Mistral
haha thanks man, glad I could help. Just remember, sometimes you gotta be the hero and take care of yourself first before worrying about everyone else. | 2023-10-04T22:22:27 | https://www.reddit.com/gallery/1700lph | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1700lph | false | null | t3_1700lph | /r/LocalLLaMA/comments/1700lph/mistralopenorca7b_just_remember_sometimes_you/ | false | false | 41 | null | |
Beginner friendly Guide to run local model AI on 4 Gb ram windows (GGML/GGUF Guide) | 23 | This guide would not have been possible without the guidance and contributions of everyone at [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/). It aims to give users with limited hardware a chance to try running models locally and get a feel for them - a valuable learning experience that will pay off when you are financially and technically able to upgrade to larger models while understanding their limitations.
Note: this guide is a minor revision of a previous thread, with a fixed title.
1. What models can run
2. Quantization
3. Client
4. Limitations
A. Life of a 4GB User
Let's face reality – there isn't much one can do with only 4GB of RAM. The best-case scenario for a 4GB user is running anything small, preferably up to about 1.5 billion parameters, with decent speed. You might try a 7 billion parameter variant like I did with Zarablend-L2-7B-GGUF,
but token generation is abysmally slow at 0.1 tokens per second – slow enough that I could do household chores and go jogging for 30 minutes while casually discussing whether the chicken or the egg came first with a shy Haruka. However, small doesn't necessarily mean bad. Small models are perfectly suitable for roleplay and simple text generation. So, what kinds of models can we run?
B. Quantization
The models we can use are GGML or GGUF files, known as quantized models. Quantization is a common technique used to reduce model size, although it can sometimes result in reduced accuracy. In simple terms, quantization allows models to run on consumer-grade hardware at the cost of some quality, depending on the level of quantization, as shown below:
​
https://preview.redd.it/vwbjqi12f9sb1.png?width=984&format=png&auto=webp&s=7574896bb0d95ffee4eb7d20a812dc3c9dafd8a8
* Q1-Q3 (small) = fast and low RAM usage, but lower quality.
* Q4-Q5 (medium) = average speed with decent output.
* Above Q5 = requires more resources.

In the case of a 4GB RAM user, the best-case scenario is to choose between Q3 and Q5, depending on the model. However, if speed is a priority, Q1 or Q2 may suffice. GGML or GGUF models that I found to work well with 4GB RAM include:
* [u/The-Bloke](https://www.reddit.com/user/The-Bloke/)
* TinyLlama-1.1B-intermediate-step-480k-1T-GGUF
* TinyLlama-1.1B-Chat-v0.3-GGUF
* [u/rainbowkarin](https://www.reddit.com/u/rainbowkarin/)
* Pygmalion 1.3B GGML
* Pythia-Deduped-Series-GGML
* AI-Dungeon-2-Classic-GGML
* GPT-2-Series-GGML
OK, now how do we run it?
C. Client
https://preview.redd.it/k97yir2bf9sb1.png?width=1910&format=png&auto=webp&s=cf6b1fac080cbea9e62073c1ac84d49047f9dca0
There are various options for running models locally, but the best and most straightforward choice is KoboldCPP. It's extremely user-friendly, supports older CPUs and older RAM formats, and has a failsafe mode. I can't recommend anything other than KoboldCPP; it's the most stable client and likely the last one you'll ever need for running quantized models. Everything you need to know about using and running it is explained extremely well, in simple language, in their wiki. You can also run it from the command line.
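For anyone who wants to drive it from a script instead of the browser UI, KoboldCPP also exposes a local KoboldAI-compatible API while it is running. A minimal sketch, assuming the default port 5001 and the /api/v1/generate route (check the console output when KoboldCPP starts for the exact address):

    import requests

    payload = {
        "prompt": "You are a friendly assistant.\nUser: Say hi in five words.\nAssistant:",
        "max_length": 60,
        "temperature": 0.7,
    }
    # KoboldCPP prints the API URL on startup; 5001 is the usual default
    r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
    print(r.json()["results"][0]["text"])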
D. Limitations
Small models obviously come with limitations, including generating nonsensical content or providing inaccurate information, but that's how it goes. They work best for roleplay chats or small games like the AI Dungeon model. Beyond that, you can either upgrade your hardware or use Horde with SillyTavern, KoboldCPP Lite, or TavernAI.
E. Extra Tips
Some users on the previous thread gave suggestions such as:
* Run it in CMD-only mode to save memory.
* Run it from a large (32/64 GB) live/persistent Linux USB with llama.cpp (less friendly UI and a more complex installation).
Closing
I would like to give special thanks to [u/rainbowkarin](https://www.reddit.com/u/rainbowkarin/) for their guidance and advice. I also want to thank everyone in the thread who helped when I asked about specifications.
​ | 2023-10-04T22:21:49 | https://www.reddit.com/r/LocalLLaMA/comments/1700l6g/beginner_friendly_guide_to_run_local_model_ai_on/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1700l6g | false | null | t3_1700l6g | /r/LocalLLaMA/comments/1700l6g/beginner_friendly_guide_to_run_local_model_ai_on/ | false | false | 23 | null | |
How do I set up a custom llama 2 via gpt4all | 2 | I'm trying to set up [TheBloke/WizardLM-1.0-Uncensored-Llama2-13B-GGUF](https://huggingface.co/TheBloke/WizardLM-1.0-Uncensored-Llama2-13B-GGUF) and have tried many different methods, but none have worked for me so far:
* I tried [llama.cpp](https://github.com/ggerganov/llama.cpp) and in the documentation, after cloning the repo, downloading and running w64devkit.exe, and typing "make", I think it built successfully but what do I do from here?
* I tried gpt4all, but how do I use custom language models from huggingface in gpt4all? For instance, I want to use LLaMa 2 uncensored
my current code for gpt4all:
from gpt4all import GPT4All
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
while True:
user_input = input("You: ") # get user input
output = model.generate(user_input, max_tokens=512) # print output
print("Chatbot:", output)
* I tried the "transformers" python library, but it keeps saying the following error:
`"OSError: Can't load tokenizer for 'TheBloke/WizardLM-1.0-Uncensored-Llama2-13B-GGUF'. If you were trying to load it from '`[`https://huggingface.co/models`](https://huggingface.co/models)`', make sure you don't have a local directory with the same name. Otherwise, make sure 'TheBloke/WizardLM-1.0-Uncensored-Llama2-13B-GGUF' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer."`
Here is the full code using Transformer and the following traceback to the error:
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'TheBloke/WizardLM-1.0-Uncensored-Llama2-13B-GGUF'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
Here is the full traceback error:
`Traceback (most recent call last):`
`File "c:\Users\User\Programming Projects\GPT 4\huggingface-test.py", line 7, in <module>`
`tokenizer = LlamaTokenizer.from_pretrained(model_path)`
`File "C:\Users\User\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 1838, in from_pretrained`
`raise EnvironmentError(`
`OSError: Can't load tokenizer for 'TheBloke/WizardLM-1.0-Uncensored-Llama2-13B-GGUF'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'TheBloke/WizardLM-1.0-Uncensored-Llama2-13B-GGUF' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer.`
How do I solve each of these, and of these methods of getting custom huggingface models running locally, which is the best method here? | 2023-10-04T22:02:01 | https://www.reddit.com/r/LocalLLaMA/comments/17002vr/how_do_i_set_up_a_custom_llama_2_via_gpt4all/ | Cheems63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17002vr | false | null | t3_17002vr | /r/LocalLLaMA/comments/17002vr/how_do_i_set_up_a_custom_llama_2_via_gpt4all/ | false | false | self | 2 | null
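A note on the transformers error above: a GGUF repo only contains quantized .gguf files, not the config and tokenizer files that transformers expects, so `LlamaTokenizer.from_pretrained` will always fail on it. The simplest route for a GGUF model is llama-cpp-python; a minimal sketch, where the file name is just an example of one of the quantizations in that repo (download it first):

    from llama_cpp import Llama

    # path to a downloaded .gguf file from the repo above (example file name)
    llm = Llama(model_path="wizardlm-1.0-uncensored-llama2-13b.Q4_K_M.gguf", n_ctx=2048)

    while True:
        user_input = input("You: ")
        out = llm("USER: " + user_input + "\nASSISTANT:", max_tokens=512, stop=["USER:"])
        print("Chatbot:", out["choices"][0]["text"].strip())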
Google Deepmind researchers demonstrate Promptbreeder, an AI system that can improve itself by recursively generating better prompts. | 85 | 2023-10-04T21:27:45 | https://arxiv.org/pdf/2309.16797.pdf | metalman123 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 16zz70s | false | null | t3_16zz70s | /r/LocalLLaMA/comments/16zz70s/google_deepmind_researchers_demonstrate/ | false | false | default | 85 | null | |
Advice requested on the best model to help with documents and SQL | 2 | Hi,
I work for a charity organisation in my spare time, and need a way to construct some SQL code from a large set of interlinked tables that follows a complex set of rules based on the national and local charities regulations.
The rules are all in various word documents.
To create this manually will take me a few months - however, if I can teach an LLM the rules via the documents on my local computer, and then have that LLM learn our datasets - I could, in theory at least, then have it start to apply those rules to all the data and produce the final code - which would then build one table listing all the items that match those rules and definitions..
I have built my own Stable Diffusion server for generating some of the promotional material for the charity so I have some experience of python, VS, and linking various modules with a UI (A1111 in this case) - so I'm not daunted at the idea of doing the same for an LLM text model.
However... is my idea dumb, and can they really not do that?
If not - what would be the recommended model that can both read and learn the documents, and learn the SQL based data sets, and write SQL ?
I would of course be monitoring it every step of the way and quadruple checking everything before using anything.
Thoughts from the more educated?
Thank you
| 2023-10-04T20:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/16zxp50/advice_requested_on_the_best_model_to_help_with/ | Paleion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zxp50 | false | null | t3_16zxp50 | /r/LocalLLaMA/comments/16zxp50/advice_requested_on_the_best_model_to_help_with/ | false | false | self | 2 | null |
Open-Source LLMs vs OpenAI: a Detailed Overview | 1 | [removed] | 2023-10-04T20:18:05 | https://www.reddit.com/r/LocalLLaMA/comments/16zxfok/opensource_llms_vs_openai_a_detailed_overview/ | Noe_Achache | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zxfok | false | null | t3_16zxfok | /r/LocalLLaMA/comments/16zxfok/opensource_llms_vs_openai_a_detailed_overview/ | false | false | 1 | null
Open-Source LLMs vs OpenAI: a Detailed Overview | 1 | 2023-10-04T20:07:20 | https://www.sicara.fr/blog-technique/open-source-llms-vs-openai | Noe_Achache | sicara.fr | 1970-01-01T00:00:00 | 0 | {} | 16zx63d | false | null | t3_16zx63d | /r/LocalLLaMA/comments/16zx63d/opensource_llms_vs_openai_a_detailed_overview/ | false | false | 1 | null
LLM Boxing: Llama2-13b-Instruct vs. Mistral-7b-Instruct, a blind test comparison | 76 | 2023-10-04T18:13:41 | https://llmboxing.com/ | jakedahn | llmboxing.com | 1970-01-01T00:00:00 | 0 | {} | 16zudcg | false | null | t3_16zudcg | /r/LocalLLaMA/comments/16zudcg/llm_boxing_llama213binstruct_vs_mistral7binstruct/ | false | false | default | 76 | null | |
After 500+ LoRAs made, here is the secret | 673 | Well, you wanted it, here it is:
The quality of dataset is 95% of everything. The rest 5% is not to ruin it with bad parameters.
Yeah, I know, GASP! No seriously, folks are searching for secret parameters or secret sauce - but this is the whole deal.
And I mean crystal clean dataset. Yes, I know, thousands of items (maybe tens of thousands), generated or scrubbed from internet, who has time to look at it. I see it in "pro" dataset. Look at some random items, and soon you will spot a garbage - because it was obviously generated or scrubbed and never really checked.
Once I started manually checking the dataset and removing or changing the garbage the quality jumped 10-fold. Yes, it takes a huge amount of time - but no matter of parameters or tricks will fix this, sorry.
The training parameters are there not to ruin it - not make it better, so you don't have to chase the perfect LR 2.5647e-4 it doesn't exist. You kind of aim for the right direction and if dataset is great, most of the time you'll get there.
Some more notes:
13b can go only THAT far. There is no way you can create 100% solid finetuning on 13b. You will get close - but like with a child, sometimes it will spill a cup of milk in your lap. Sadly training 33b on home hardware with 24GB is basically useless because you really have to tone down the parameters - to what I said before - basically ruining it. 48GB at least for 33b so you can crank up batch.
IMHO gradient accumulation will LOWER the quality if you can do more than a few batches. There may be a sweet spot somewhere, but IDK. Sure batch 1 and GA 32 will be better than batch 1 and GA 1, but that's not the point, that's a bandaid
size of dataset matters when you are finetuning on base, but matters less when finetuning on well finetuned model. - in fact sometimes less is better in that case or you may be ruining a good previous finetuning.
alpha = 2x rank seems like something that came from the old times when people had potato VRAM at most. I really don't feel like it makes much sense - it multiplies the weights and that's it. Making things louder, makes also noise louder.
my favorite scheduler is warmup, hold for 1 epoch then cosine down for the next 1- x epochs.
rank is literally how many trainable parameters you get - you don't have to try to find some other reason. It's like an image taken with 1Mpixel vs 16Mpixel. You always get the whole image, but on 1Mpixel the details are mushy.
Anything else? | 2023-10-04T18:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/16zuccy/after_500_loras_made_here_is_the_secret/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zuccy | false | null | t3_16zuccy | /r/LocalLLaMA/comments/16zuccy/after_500_loras_made_here_is_the_secret/ | false | false | self | 673 | null |
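To make the parameter talk concrete, here is a minimal sketch of how those choices map onto a peft/transformers run; the numbers are illustrative defaults in the spirit of the post, not a recipe:

    from peft import LoraConfig
    from transformers import TrainingArguments

    lora_cfg = LoraConfig(
        r=64,                    # rank = how many trainable parameters you get
        lora_alpha=64,           # alpha = rank here; 2x rank only makes the update "louder"
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )

    train_args = TrainingArguments(
        output_dir="out",
        num_train_epochs=3,
        per_device_train_batch_size=4,    # prefer a real batch over gradient accumulation
        gradient_accumulation_steps=1,
        learning_rate=2e-4,               # aim in the right direction, don't chase digits
        lr_scheduler_type="cosine",       # closest built-in to warmup-then-cosine-down
        warmup_ratio=0.03,
        logging_steps=10,
    )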
Possible to build an expert AI on a specific topic but it will have a larger context | 0 | GPT 4 has been immensely helpful in helping in a dev project without any knowledge of the tech stack. Backend person doing Frontend stuff. But it will lose the context completely after few queries.
Is it possible to have a specific AI expert with 3 features:
1. Expert knowledge on a particular field say web development and surrounding fields e.g. good software design principles etc
2. Remembers the gist if not all of current code base
3. Remembers requirements
4. Not only just autocompletes my code but spit out classes, config files, docker files etc.
5. GPT like human language capabilities
I understand what makes OpenAI good is their behemoth infra, but I was wondering whether I could have something with knowledge of a specific field that can run locally and has a larger memory. | 2023-10-04T17:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/16ztw1t/possible_to_build_an_expert_ai_on_a_specific/ | algo314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ztw1t | false | null | t3_16ztw1t | /r/LocalLLaMA/comments/16ztw1t/possible_to_build_an_expert_ai_on_a_specific/ | false | false | self | 0 | null
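Until a model can genuinely learn on the fly, points 2 and 3 are usually approximated with retrieval: embed the codebase and the requirements once, then put only the most relevant chunks into each prompt. A minimal sketch with sentence-transformers, where the embedding model name and the example chunks are placeholders:

    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    # stand-in "memory": code files and requirements split into chunks
    chunks = [
        "class UserService: handles login, signup, password reset ...",
        "Requirement: all API responses must be typed with TypeScript interfaces",
        "docker-compose.yml: frontend on port 3000, backend on port 8000",
    ]
    chunk_vecs = embedder.encode(chunks, convert_to_tensor=True)

    def build_prompt(task, k=2):
        q_vec = embedder.encode(task, convert_to_tensor=True)
        hits = util.semantic_search(q_vec, chunk_vecs, top_k=k)[0]
        context = "\n".join(chunks[h["corpus_id"]] for h in hits)
        return f"Project context:\n{context}\n\nTask: {task}"

    print(build_prompt("Write a Dockerfile for the backend"))

The local model then only has to reason over a few hundred tokens of retrieved context per request instead of "remembering" everything.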
Do you think most people... | 8 |
[View Poll](https://www.reddit.com/poll/16zsrmb) | 2023-10-04T17:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/16zsrmb/do_you_think_most_people/ | upk27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zsrmb | false | null | t3_16zsrmb | /r/LocalLLaMA/comments/16zsrmb/do_you_think_most_people/ | false | false | self | 8 | null |
Self fine-tuning models? | 3 | So I'm of the belief that the next gen of AI assistance is going to require these models to continually learn and update themselves in a natural way. For example if I want to have a truly useful customer service bot for my company, I want to be able to train it exactly the way I train a new employee. By telling it what I want, correcting it when it gets it wrong, etc - and over the course of days and weeks the employee just learns how we do things, how we speak to customers, etc etc.
Even though the output might not be as sophisticated as the largest models, these local models seem capable enough from an output standpoint. And I know they can be extended with databases, finetuning, loras etc - but that all currently requires going deep into the technical rabbit hole of how to do all of that.
Are there any examples of models or systems that abstract all of the technical details of \*how\* to fine tune the model and just allow you to teach it in a natural way that's persistent over time? | 2023-10-04T16:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/16zsgxt/self_finetuning_models/ | Darius510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zsgxt | false | null | t3_16zsgxt | /r/LocalLLaMA/comments/16zsgxt/self_finetuning_models/ | false | false | self | 3 | null |
Evaluating Retrieval-Augmented Generation (RAG) with any combination of LLMs, Vector DBs, and Ingestion Strategy | 1 | [removed] | 2023-10-04T16:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/16zs853/evaluating_retrievalaugmented_generation_rag_with/ | hegel-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zs853 | false | null | t3_16zs853 | /r/LocalLLaMA/comments/16zs853/evaluating_retrievalaugmented_generation_rag_with/ | false | false | self | 1 | null
Can you combine meta's SAM with a open source model to get something close to GPT Vision? | 6 | I was playing around with the vision extension in chat GPT recently and trying to figure out how they were doing it on the backend.
I was then wondering how close you'd be able to get to their results if you did something as simple as matching the prompt you sent with an image to different segmentation categories in SAM, and then feeding this output to a LLAMA along with the instructions on the original prompt.
Has anyone tried something like this in here before? Theoretically, the key to making this work would be in having a smooth representation from your encoder to your language model, but I'm wondering how good an off the shelf SAM would do here.
And as a follow up, what open source models would be the most well suited to this task?
It would be the dream if we could find a way to deploy this on custom smaller models, since you can effectively make it function as a general purpose, portable and cheap computer vision toolkit | 2023-10-04T16:35:51 | https://www.reddit.com/r/LocalLLaMA/comments/16zrxo6/can_you_combine_metas_sam_with_a_open_source/ | Embarrassed-Creme338 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zrxo6 | false | null | t3_16zrxo6 | /r/LocalLLaMA/comments/16zrxo6/can_you_combine_metas_sam_with_a_open_source/ | false | false | self | 6 | null |
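One caveat worth knowing before trying it: SAM on its own produces class-agnostic masks, not category names, so the usual pattern is to pair it with a labeller such as CLIP and then hand the language model a plain-text scene description. A rough sketch under those assumptions (the checkpoint path and label vocabulary are placeholders):

    import cv2
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
    from transformers import CLIPModel, CLIPProcessor

    image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")   # placeholder path
    masks = SamAutomaticMaskGenerator(sam).generate(image)

    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    labels = ["a person", "a dog", "a car", "a tree"]               # placeholder vocabulary

    descriptions = []
    for m in masks[:10]:                                            # first few regions only
        x, y, w, h = map(int, m["bbox"])
        crop = image[y:y + h, x:x + w]
        inputs = proc(text=labels, images=crop, return_tensors="pt", padding=True)
        probs = clip(**inputs).logits_per_image.softmax(dim=1)[0]
        descriptions.append(f"{labels[int(probs.argmax())]} at bbox {m['bbox']}")

    prompt = "Objects in the image:\n" + "\n".join(descriptions) + "\n\nDescribe the scene."
    # `prompt` then goes to whatever local LLaMA / Mistral you are running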
LLaMA 2 Long where? | 15 | So, [META has a 32K context length LLaMA2](https://reddit.com/r/LocalLLaMA/s/V7I5XfIbtA), but no weights have been made public that I have heard of.
Is there any way to get these weights? | 2023-10-04T15:49:16 | https://www.reddit.com/r/LocalLLaMA/comments/16zqqru/llama_2_long_where/ | 2muchnet42day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zqqru | false | null | t3_16zqqru | /r/LocalLLaMA/comments/16zqqru/llama_2_long_where/ | false | false | self | 15 | null |
What exactly does a Lora do? | 34 | I just read about LoRA. As I understand it, it lets you further train a pretrained model on your own data. So could it be possible to train a censored model on answers from an uncensored model (even if they are worse) to make the model uncensored, or did I understand something wrong? | 2023-10-04T15:03:39 | https://www.reddit.com/r/LocalLLaMA/comments/16zpkve/what_exactly_does_a_lora_do/ | Otherwise_Weather_57 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zpkve | false | null | t3_16zpkve | /r/LocalLLaMA/comments/16zpkve/what_exactly_does_a_lora_do/ | false | false | self | 34 | null
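Roughly, yes - a LoRA doesn't rewrite the base model; it learns a small low-rank update that is added on top of the frozen weights, which is why it can shift a model's behaviour (tone, censorship, style) with relatively little data and VRAM. A tiny numeric sketch of the idea:

    import numpy as np

    d, r = 4096, 8                      # hidden size, LoRA rank (r << d)
    W = np.random.randn(d, d)           # frozen pretrained weight, never updated
    A = np.random.randn(r, d) * 0.01    # trainable
    B = np.zeros((d, r))                # trainable, starts at zero -> no change at step 0

    W_effective = W + B @ A             # what the forward pass effectively uses

    full = W.size                       # ~16.8M values in the frozen matrix
    lora = A.size + B.size              # only ~65k trainable values
    print(f"trainable fraction: {lora / full:.4%}")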
Should I start this project? | 15 | Hello everyone. Thanks for taking the time to read this.
tldr: Seeking Feedback on a Customizable AI Module for PCs.
I've been getting interested in hardware design. This started around the same time Falcon-180b was released. I saw people running it on Mac M2s and saw that the original researchers used several thousands of dollars of Nvidia tech. I then started looking into what it would take to make a device you could plug into a laptop or desktop and run large models like the M2 does, but for cheaper than Nvidia's tech. I figured that it might maybe be possible, so I'm coming here to see if there is demand and if it may be worth the effort.
What are your thoughts on a module for devices where you could do some setup rituals and use pytorch to run models on it? Do you have any features you may want to add before I start? I'm looking for input from the community. I would love some constructive feedback. Any thoughts, suggestions, or concerns are welcome.
I'm going to try and outline some benefits I'm already seeing as possible, but if you don't want to read this, feel free to skip the rest.
You will hopefully be able to:
* Put more RAM in there depending on your needs, without having to buy a whole new, often more expensive device (Nvidia)
* Attach it to any PC without it being limited to a specific ecosystem (Apple)
* Use ethernet to plug it into your PC (works even with an adapter) and even use a switch to connect plenty together
* Simple integration into Pytorch
* Competitive pricing (aiming for $300 or less) compared to existing alternatives
* Some optimisations I have in mind would need 2 ddr4 controllers so max 256GB RAM, easily handling even Falcon-180b
* Could also enhance training, both size and speed
Now even if I start soon, this will take time. I'm hoping for 2 years, but there is almost no chance it will be done that fast. The main reason I think this could be done this fast is that none of this tech is really revolutionary, I just need to make it cheap and easy to use. I will be posting updates though. | 2023-10-04T14:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/16zp9ya/should_i_start_this_project/ | DatOneGuy73 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zp9ya | false | null | t3_16zp9ya | /r/LocalLLaMA/comments/16zp9ya/should_i_start_this_project/ | false | false | self | 15 | null |
How can i use models that have <|im_start/end|> with llama.cpp? | 5 | This is the command that i am currently using:
./main \
-i \
--seed 69 \
--model "$models_dir/$model_name" \
--mlock \
--ctx-size 2048 \
--repeat-last-n 256 \
--repeat-penalty 1.1 \
--n_predict 512 \
--temp 0.7 \
--top-k 60 \
--top-p 1.0 \
--interactive \
--in-prefix "<|im_start|> " \
--in-suffix "<|im_end|><|im_start|>" \
--file "template.txt" \
--reverse-prompt "<|im_end|>" \
--color
However, after the LLM gives me control following the first <|im_end|>, it continues writing, and llama.cpp fails to stop at every subsequent <|im_end|>.
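In case it helps anyone hitting the same wall: ChatML models expect the role names and newlines inside the tags (e.g. a prefix like `<|im_start|>user\n` and a suffix like `<|im_end|>\n<|im_start|>assistant\n`), otherwise the model never learns whose turn it is and keeps writing. A sketch of the intended format via llama-cpp-python, with the model path as a placeholder:

    from llama_cpp import Llama

    llm = Llama(model_path="mistral-7b-openorca.Q5_K_S.gguf", n_ctx=2048)  # placeholder

    def chat(system, user):
        prompt = (f"<|im_start|>system\n{system}<|im_end|>\n"
                  f"<|im_start|>user\n{user}<|im_end|>\n"
                  f"<|im_start|>assistant\n")
        out = llm(prompt, max_tokens=512, stop=["<|im_end|>"])
        return out["choices"][0]["text"].strip()

    print(chat("You are a helpful assistant.", "Why does generation run past the end tag?"))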
Is the P40 on the sidelines? | 5 | I have a P40. Is commit dadbed9 from llama.cpp really the end of the line? Will anything happen in the development of new models that run on this card? Is it possible to run F16 models in F32 at the cost of half VRAM? If so, would that be useful? | 2023-10-04T14:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/16zoo9t/is_the_p40_on_the_sidelines/ | muxxington | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zoo9t | false | null | t3_16zoo9t | /r/LocalLLaMA/comments/16zoo9t/is_the_p40_on_the_sidelines/ | false | false | self | 5 | null |
local open source GitHub Copilot in VSCode using Oobabooga | 15 | I wrote a small script to translate calls between VSCode GitHub copilot and oobabooga instead of the proprietary backend
https://preview.redd.it/9mcjisqsz6sb1.png?width=1413&format=png&auto=webp&s=459b3102e00e5cb001677579efde0454f45e25ac
​
video: [https://twitter.com/theeFaris/status/1694622487861252359](https://twitter.com/theeFaris/status/1694622487861252359)
repo: [https://github.com/FarisHijazi/PrivateGitHubCopilot](https://github.com/FarisHijazi/PrivateGitHubCopilot)
I would like your help and feedback on any bugs and what models you guys find to work well | 2023-10-04T14:02:41 | https://www.reddit.com/r/LocalLLaMA/comments/16zo2ek/local_open_source_github_copilot_in_vscode_using/ | BuzaMahmooza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zo2ek | false | null | t3_16zo2ek | /r/LocalLLaMA/comments/16zo2ek/local_open_source_github_copilot_in_vscode_using/ | false | false | 15 | null
A simple utility to gather LIMA-like datasets | 16 | Good day to everyone!
Going straight to the point, I have developed a utility to simplify gathering LIMA-like data: [oKatanaaa/lima-gui: A simple GUI utility for gathering LIMA-like chat data. (github.com)](https://github.com/oKatanaaa/lima-gui)
It has simple PyQt-based UI and allows for easy gathering of small hand-crafted datasets. It also has integration with OpenAI API which accelerates data gathering process (you can generate full AI's response or only a part of it).
I think it may come in handy for LLM enthusiasts and help facilitate development of LIMA-like curated datasets.
*P.S. I've been slowly working on it and it's still in early stages, but current feature set already proved to be extremely useful. Feedback and contributions are welcome!*
Small QA for possible questions.
***Why would I (or someone else) need such a tool? I'm crafting a specialized dataset by hand, so I need some tools for that.***
Right now I am developing an AI assistant (or assistants) at my university to have it help with:
- grading/analysing students' answers/assignments;
- helping generate/fill in formal documents;
- processing/summarizing/QA over proprietary data/lectures;
- research assistance (of a specific kind) for students.
I've seen open source chat/instruction tuned LLMs and they are really good. But they are kind of generic and don't work well for my use cases.
Having seen LIMA results, it inspired me to make my own LLM that's good for my uses instead of relying on off-the-shelf models. Finding a base LLM and having enough compute are no problem, the problem is the lack of necessary data: it must be in Russian and follow a certain format. There is no this kind of data on the web, obviously, so I need to make my own custom dataset. Aaaand... it's hard. Making your custom dataset is hard. So I decided to make a tool that:
1. Has simple UI for constructing data samples in the format I need.
2. Allows for external LLMs usage to accelerate data gathering process (inspired by [kaiokendev](https://kaiokendev.github.io/til) experience).
And so far it's been great, I'm slowly making the progress necessary to start finetuning my own LLMs.
***Aren't there any tools for that already? There are, but they are not so good.***
I'd been looking and have found some candidates, but:
1. They typically use Alpaca-like format (instruction-response pairs instead of full dialogue).
2. They are kind of bulky for a solo-dev.
In short, experience with those tools is not so good. So I decided to make a simple utility tailored to my use case.
***Will you*** ***release your data and models? When? I don't know, likely yes, but much later.***
My dataset consists of proprietary documents with some sensitive info, so releasing this stuff online is a definitive no. Though there will be parts of it that are safe for public disclosure, so that's an option.
Highly likely I will release the models if I train something good. I don't think that LoRA trained models will leak sensitive data. All I need is for them to work well on the data distribution my documents come from.
As for when - maybe end of this year or first quarter of the next one. It's not a full-time project, so the progress is kinda slow, but steady. | 2023-10-04T13:47:32 | https://www.reddit.com/r/LocalLLaMA/comments/16znpf6/a_simple_utility_to_gather_limalike_datasets/ | oKatanaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16znpf6 | false | null | t3_16znpf6 | /r/LocalLLaMA/comments/16znpf6/a_simple_utility_to_gather_limalike_datasets/ | false | false | self | 16 | null
Elon Musk announces give away event for Bitcoins, limited time available. | 14 |  | 2023-10-04T13:10:23 |  | littlex-xbones | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16zmuiy | false | null | t3_16zmuiy | /r/LocalLLaMA/comments/16zmuiy/elon_musk_announces_give_away_event_for_bitcoins/ | false | false |  | 14 | null
Guide How to run module for user with 4 gb Ram (GGML/GGUF Guide) for newcomer | 19 | This guide would not have been possible without the guidance and contributions of everyone at r/LocalLLaMA. This report aims to provide users with limitations an opportunity to taste and experience running modules locally for a valuable learning experience. This knowledge will be invaluable when users are financially and technically capable of upgrading to and running larger modules while understanding their limitations.
1. What Module that can run
2. Quantization
3. Client
4. Limitation
A. Life of a 4GB User
Let's face reality – there isn't much one can do with only 4GB of RAM. The best-case scenario for a 4GB user is running anything small, preferably up to about 1.5 billion parameters, with decent speed. You might try a 7 billion parameter variant like I did with `Zarablend-L2-7B-GGUF`, but token generation is abysmally slow at 0.1 tokens per second – slow enough that I could do household chores and go jogging for 30 minutes while casually discussing whether the chicken or the egg came first with a shy Haruka. However, small doesn't necessarily mean bad. Small models are perfectly suitable for roleplay and simple text generation. So, what kinds of models can we run?
B. Quantization
The models we can use are GGML or GGUF files, known as quantized models. Quantization is a common technique used to reduce model size, although it can sometimes result in reduced accuracy. In simple terms, quantization allows models to run on consumer-grade hardware at the cost of some quality, depending on the level of quantization, as shown below:
​
https://preview.redd.it/i1y7ye2vi6sb1.png?width=984&format=png&auto=webp&s=b9fd0a30cc6144b5f3e0d710e367eed1d88be5b2
Q Q1-Q3 (Small) = Fast and low RAM usage but lower quality. Q4-Q5 (Medium) = Average speed with decent output. Above Q5: Requires more resources. In the case of a 4GB RAM user, the best-case scenario is to choose between Q3 or Q5, depending on the module. However, if speed is a priority, Q1 or Q2 may suffice. GGML or GGUF Modules that I found to work well with 4GB RAM include
* [u/The-Bloke](https://www.reddit.com/user/The-Bloke/)
* TinyLlama-1.1B-intermediate-step-480k-1T-GGUF
* TinyLlama-1.1B-Chat-v0.3-GGUF
* u/rainbowkarin
* Pygmalion 1.3B GGML
* Pythia-Deduped-Series-GGML
* AI-Dungeon-2-Classic-GGML
* GPT-2-Series-GGML
Ok now how we run it ?
C. Client
There are various options for running modules locally, but the best and most straightforward choice is Kobold CPP. It's extremely user-friendly and supports older CPUs, including older RAM formats, with a failsafe mode. I can't recommend anything other than Kobold CPP; it's the most stable client and likely the last one you'll ever need for running quantized modules. Everything you need is explain extremely well and in simple language at their wiki on how use and run it.
D. Limitations
Small modules obviously come with limitations, including generating nonsensical content or providing inaccurate information, but that's how it goes. They work best for roleplay chats or small games like AI Dungeon Module. You can either upgrade your hardware or use Horde with SillyTavern or Lite Kobold CPP or TavernAi.
Closing
I would like to give special thanks to u/rainbowkarin for their guidance and advice. I also want to thank everyone in the thread who helped when I asked about specifications. | 2023-10-04T13:02:01 | https://www.reddit.com/r/LocalLLaMA/comments/16zmnop/guide_how_to_run_module_for_user_with_4_gb_ram/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zmnop | false | null | t3_16zmnop | /r/LocalLLaMA/comments/16zmnop/guide_how_to_run_module_for_user_with_4_gb_ram/ | false | false | 19 | null | |
is there a way to implement non-GPTQ model on runpod serverless | 2 | I want to deploy this model [https://huggingface.co/Ammad1Ali/Alex-Test-GPT-1](https://huggingface.co/Ammad1Ali/Alex-Test-GPT-1) on serverless GPUs for an API. I have used the [https://github.com/hommayushi3/exllama-runpod-serverless](https://github.com/hommayushi3/exllama-runpod-serverless) template to run this, but I am getting the error that the model does not contain any safetensors files. Is there a way to deploy my current model to serverless GPUs without converting it? | 2023-10-04T11:34:21 | https://www.reddit.com/r/LocalLLaMA/comments/16zks3t/is_there_a_way_to_implement_nongptq_model_on/ | _Sneaky_Bastard_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zks3t | false | null | t3_16zks3t | /r/LocalLLaMA/comments/16zks3t/is_there_a_way_to_implement_nongptq_model_on/ | false | false | self | 2 | null
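In case it's useful context: that exllama template only serves GPTQ-quantized models stored as .safetensors, so a plain fp16 repo will keep tripping that check no matter what. A rough sketch of quantizing the model yourself with AutoGPTQ before uploading it (the calibration data and settings here are purely illustrative, and the exact API may differ between AutoGPTQ versions):

    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

    repo = "Ammad1Ali/Alex-Test-GPT-1"
    tok = AutoTokenizer.from_pretrained(repo, use_fast=True)

    quant_cfg = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
    model = AutoGPTQForCausalLM.from_pretrained(repo, quant_cfg)

    # a couple of calibration samples; a real run wants a few hundred varied ones
    examples = [tok("The quick brown fox jumps over the lazy dog.")]
    model.quantize(examples)

    model.save_quantized("alex-test-gptq", use_safetensors=True)
    tok.save_pretrained("alex-test-gptq")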
I created an OS that boots to a baby llama2 | 200 | Dear AI folks
I have created a demo OS that boots straight into LLAMA2. There is also a kernel module that calls a user-mode helper to do llama2 inference. It uses our fork of karpathy's llama2.c and is not very usable yet, but we hope to build it out and add better models. Please give it a try.
[It has a star trek like UI too. A bit buggy.](https://preview.redd.it/9zw19sui76sb1.png?width=1919&format=png&auto=webp&s=0d91ccab7b16821577e1516e6896349bfd111958)
Download it from the releases here:
[https://github.com/trholding/llama2.c](https://github.com/trholding/llama2.c)
and have fun. It can boot both in QEMU and on a real computer.
There are some secret easter eggs such as a Doom game in there. You must solve a puzzle to find it :)
Hope you like it. | 2023-10-04T11:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/16zklam/i_created_an_os_that_boots_to_a_baby_llama2/ | AMICABoard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zklam | false | null | t3_16zklam | /r/LocalLLaMA/comments/16zklam/i_created_an_os_that_boots_to_a_baby_llama2/ | false | false | 200 | null | |
uncensored Llama2 training | 0 | If I want to fine tune the [https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored) with the ehartford/samantha-data, how much compute units would I need? And which is the best service for that? | 2023-10-04T11:07:34 | https://www.reddit.com/r/LocalLLaMA/comments/16zk9bg/uncensored_llama2_training/ | di_bee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zk9bg | false | null | t3_16zk9bg | /r/LocalLLaMA/comments/16zk9bg/uncensored_llama2_training/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '0-TU1NHOIu_jbWk-tVY1N3FnNmOTs33czblwFaFE3zw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/05AaFuAKNndb1m98TX50ZYufQ5GqwIXscgaAqnkkpE8.jpg?width=108&crop=smart&auto=webp&s=4d4946b0bfd760274c7080ad34ed5879469bcf81', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/05AaFuAKNndb1m98TX50ZYufQ5GqwIXscgaAqnkkpE8.jpg?width=216&crop=smart&auto=webp&s=f9103acd085f60ad91cc863ba3820d6d30446ee3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/05AaFuAKNndb1m98TX50ZYufQ5GqwIXscgaAqnkkpE8.jpg?width=320&crop=smart&auto=webp&s=394553847dfbce2c5f6f95989a056ece41b08849', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/05AaFuAKNndb1m98TX50ZYufQ5GqwIXscgaAqnkkpE8.jpg?width=640&crop=smart&auto=webp&s=a7391fcee57f5e7a9c5f5d65b34bccbcd7e471e8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/05AaFuAKNndb1m98TX50ZYufQ5GqwIXscgaAqnkkpE8.jpg?width=960&crop=smart&auto=webp&s=df4f4f97063e8e94252662fee398ce883ead5e6f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/05AaFuAKNndb1m98TX50ZYufQ5GqwIXscgaAqnkkpE8.jpg?width=1080&crop=smart&auto=webp&s=e1c7702a4157b637613cad9d4dc59ea9bbbcdae6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/05AaFuAKNndb1m98TX50ZYufQ5GqwIXscgaAqnkkpE8.jpg?auto=webp&s=7cc04b81c55471b76dad83ace37888f146b81c88', 'width': 1200}, 'variants': {}}]} |
Learning thru practice | 1 | Guys, I have found a pretrained llama model that can run locally. I tried it with the help of LM Studio. I intend to create a GUI for tasks like text summarisation, content chunking, etc.
How will the communication between the GUI and the local LLM take place?
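From what I have read so far, the GUI never talks to the model directly; it just sends HTTP requests to a local server that hosts the model. Below is a rough sketch of what I think this looks like, assuming LM Studio's local server mode with its OpenAI-compatible endpoint on the default port (please correct me if the endpoint or payload is different):

```python
import requests

def summarize(text: str) -> str:
    # Hypothetical call to LM Studio's local OpenAI-compatible server
    # (enable "Local Server" in LM Studio first; port/path may differ).
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "local-model",  # placeholder; the local server serves whatever is loaded
            "messages": [
                {"role": "system", "content": "You summarize text concisely."},
                {"role": "user", "content": f"Summarize this:\n\n{text}"},
            ],
            "temperature": 0.2,
            "max_tokens": 300,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

My idea is that the Tkinter button callback would call `summarize()` (probably in a background thread so the window doesn't freeze) and write the result into a Text widget. Is that the right mental model?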
For GUI I have been experimenting with the help of TKinter. But I don't know how everything else works. | 2023-10-04T10:25:44 | https://www.reddit.com/r/LocalLLaMA/comments/16zjiex/learning_thru_practice/ | inGenium_88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zjiex | false | null | t3_16zjiex | /r/LocalLLaMA/comments/16zjiex/learning_thru_practice/ | false | false | self | 1 | null |
New macOS project lets you use local LLMs as the backend for GH's Copilot VSCode plugin | 18 | 2023-10-04T09:51:28 | https://github.com/danielgross/localpilot | i-like-plant | github.com | 1970-01-01T00:00:00 | 0 | {} | 16zixxu | false | null | t3_16zixxu | /r/LocalLLaMA/comments/16zixxu/new_macos_project_lets_you_use_local_llms_as_the/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'aROnSAhi4b7nQwMkVoMBUU4rpOwRdZZKu7D_5sD-eoQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rDZv1nxjPjQXC9YnpShOqyFK0aPvb055BzYzS24p7vY.jpg?width=108&crop=smart&auto=webp&s=b0aa9f87af233cdc51b8e72cbed7bd3f5c458c8e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rDZv1nxjPjQXC9YnpShOqyFK0aPvb055BzYzS24p7vY.jpg?width=216&crop=smart&auto=webp&s=31205918b843f10795ba6f3879ac3f45e17c1ddb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rDZv1nxjPjQXC9YnpShOqyFK0aPvb055BzYzS24p7vY.jpg?width=320&crop=smart&auto=webp&s=0d3f4e29d8d1f3a7c4662e8e03b5dbed137629cb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rDZv1nxjPjQXC9YnpShOqyFK0aPvb055BzYzS24p7vY.jpg?width=640&crop=smart&auto=webp&s=6535d10d3393ad50fae0b7a6b77892f341bb8f07', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rDZv1nxjPjQXC9YnpShOqyFK0aPvb055BzYzS24p7vY.jpg?width=960&crop=smart&auto=webp&s=daafd013474d1fbf87e52d0a2213ba48b048a79b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rDZv1nxjPjQXC9YnpShOqyFK0aPvb055BzYzS24p7vY.jpg?width=1080&crop=smart&auto=webp&s=1e63a45edd5c774e00be61bf4fb98203203e8dcd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rDZv1nxjPjQXC9YnpShOqyFK0aPvb055BzYzS24p7vY.jpg?auto=webp&s=f9c040750d2aa60103297e6dd4397f5d528c59fa', 'width': 1200}, 'variants': {}}]} | ||
Merging models? Anyone here know of a notebook? | 5 | Hey everyone! I'm really interested and curious about model merging, especially now with Mistral being such a powerhouse. I've been looking into the techniques that are used to create the Mytho- line of OS models but haven't come across a straightforward notebook or implementation I can adapt.
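The furthest I've gotten on my own is the naive approach: linearly averaging the weights of two models that share the same architecture, which I understand is the basic building block the fancier merge recipes build on. Here's a rough sketch of that, assuming both checkpoints are standard Hugging Face models with identical architectures:

```python
import torch
from transformers import AutoModelForCausalLM

def linear_merge(path_a: str, path_b: str, alpha: float = 0.5):
    """Naive merge: merged_weights = alpha * A + (1 - alpha) * B."""
    model_a = AutoModelForCausalLM.from_pretrained(path_a, torch_dtype=torch.float16)
    model_b = AutoModelForCausalLM.from_pretrained(path_b, torch_dtype=torch.float16)
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    merged = {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}
    model_a.load_state_dict(merged)
    return model_a  # save with model_a.save_pretrained("merged-model")
```

What I'm really after is the more involved stuff (per-layer ratios, gradient merges, etc.), so pointers to anything beyond this naive version would be great.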
Anyone else have any luck or other resources? Thanks!! | 2023-10-04T09:24:07 | https://www.reddit.com/r/LocalLLaMA/comments/16ziiku/merging_models_anyone_here_know_of_a_notebook/ | vesudeva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ziiku | false | null | t3_16ziiku | /r/LocalLLaMA/comments/16ziiku/merging_models_anyone_here_know_of_a_notebook/ | false | false | self | 5 | null |
Coding/Logic model for 2x 3090 | 2 | Hey everyone!
Does anyone know what is the best Model to fit on 2x 3090? I want to do coding basically but not be tied to a specific language.
And would I need an NVLink to get the most out of it... any thoughts??
Thanks!
​ | 2023-10-04T09:23:46 | https://www.reddit.com/r/LocalLLaMA/comments/16ziien/codinglogic_model_for_2x_3090/ | FreegheistOfficial | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ziien | false | null | t3_16ziien | /r/LocalLLaMA/comments/16ziien/codinglogic_model_for_2x_3090/ | false | false | self | 2 | null |
Cluster raspberry pi | 5 | I currently have 7 Raspberry Pis and I would like to build a cluster to run a model using llama.cpp or vLLM, but I don't fully understand vLLM; I get the impression it can't run locally on this kind of hardware. I also don't know what the best way to connect the Raspberry Pis together would be. I planned to use TensorFlow, but I was told that was the wrong starting point. Can anyone point me to the right tools to use?
( Raspberry Pi 4 Model B 2018 ×2
Raspberry Pi 2 Model B V1.1 2014 ×4
Raspberry Pi Model B+ V1.2 2014 ×1 )
as you can see it doesn't represent a lot of ram so tell me if it's impossible or if I should do it differently (I want to run a 7B model) | 2023-10-04T09:13:10 | https://www.reddit.com/r/LocalLLaMA/comments/16zicpp/cluster_raspberry_pi/ | Terrible_Vegetable4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zicpp | false | null | t3_16zicpp | /r/LocalLLaMA/comments/16zicpp/cluster_raspberry_pi/ | false | false | self | 5 | null |
Using LLM to access local notes | 5 | I use Obsidian so all my notes are in plaintext. Is there currently a fairly out of the box way / plugin to have a local model inference my local files? Say, I ask it about what I was journaling about 6 months ago. it would do a simple search, and summarize e.g note 01-01-2019. | 2023-10-04T09:09:41 | https://www.reddit.com/r/LocalLLaMA/comments/16ziare/using_llm_to_access_local_notes/ | Kep0a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ziare | false | null | t3_16ziare | /r/LocalLLaMA/comments/16ziare/using_llm_to_access_local_notes/ | false | false | self | 5 | null |
Best presets for RP/character following | 4 | What model-presets are you using for Role Playing? I'm using oobabooga as backend and character cards from there but at some point the answers become generic. I know you need to be really clear when you create characters but I would like to know what you guys find the best for presets.
I'm running MythoMax 13B and Xwin-LM-70B-V0.1 on my dual A5000 setup. Midnight Enigma and Divine Intellect are my favorites (repetition penalty = 1.1 all the time). | 2023-10-04T09:02:09 | https://www.reddit.com/r/LocalLLaMA/comments/16zi6jt/best_presets_for_rpcharacter_following/ | sarimsak13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zi6jt | false | null | t3_16zi6jt | /r/LocalLLaMA/comments/16zi6jt/best_presets_for_rpcharacter_following/ | false | false | self | 4 | null |
RHLF + LLaVa = rlhf llava | 29 | 2023-10-04T08:26:17 | https://llava-rlhf.github.io/ | ninjasaid13 | llava-rlhf.github.io | 1970-01-01T00:00:00 | 0 | {} | 16zhmtf | false | null | t3_16zhmtf | /r/LocalLLaMA/comments/16zhmtf/rhlf_llava_rlhf_llava/ | false | false | default | 29 | null | |
How to incorporate RAG in llama2 7b chat-hf custom fine-tuned model? | 1 | [removed] | 2023-10-04T07:51:25 | https://www.reddit.com/r/LocalLLaMA/comments/16zh3bc/how_to_incorporate_rag_in_llama2_7b_chathf_custom/ | Anu_Rag9704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zh3bc | false | null | t3_16zh3bc | /r/LocalLLaMA/comments/16zh3bc/how_to_incorporate_rag_in_llama2_7b_chathf_custom/ | false | false | self | 1 | null |
I want to be free from Openai | 1 | Hello folks,
Like many, I feel limited and at the mercy of the OpenAI API.
I want to replace it with LLaMA-2 or a similar open-source model.
Does anyone know how to use such models for text extraction from large text files?
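From what I gather, the usual approach is to split the file into chunks that fit the context window and run an extraction prompt over each chunk. Here's a rough sketch of what I have in mind with llama-cpp-python; the model path, chunk sizes, and prompt template are just placeholders:

```python
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=4096)  # placeholder path

def chunks(text: str, size: int = 6000, overlap: int = 500):
    # Naive character-based chunking; token-based chunking would be more precise.
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def extract(piece: str, instruction: str) -> str:
    prompt = f"[INST] {instruction}\n\n{piece} [/INST]"  # Llama-2 chat format
    out = llm(prompt, max_tokens=512, temperature=0.0)
    return out["choices"][0]["text"]

with open("big_file.txt", encoding="utf-8") as f:
    document = f.read()

results = [extract(c, "List every person and company named in this text.")
           for c in chunks(document)]
```

Is this the right direction, or is there a better pattern (e.g. embeddings plus retrieval) for very large files?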
Appreciate your support 🙏 | 2023-10-04T07:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/16zgqv8/i_want_to_be_free_from_openai/ | Pure-Bread-3196 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zgqv8 | false | null | t3_16zgqv8 | /r/LocalLLaMA/comments/16zgqv8/i_want_to_be_free_from_openai/ | false | false | self | 1 | null |
Conversational Benchmarks? | 10 | Is there a benchmark for conversational ability? And if so, how is that assessed?
All the benchmarks I've come across so far appear to be related to easily measurable outcomes like scores on tests or real-world knowledge comparisons, but I could be misunderstanding.
Basically, I wanted to know for LLM roleplaying, If I instruct, "reply pretending to be my uncle Bob,who is a 55 year old factory worker from Ohio." And ask, "how are you today?" It will give a reasonable set of responses. | 2023-10-04T07:28:32 | https://www.reddit.com/r/LocalLLaMA/comments/16zgqps/conversational_benchmarks/ | IntolerantModerate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zgqps | false | null | t3_16zgqps | /r/LocalLLaMA/comments/16zgqps/conversational_benchmarks/ | false | false | self | 10 | null |
Enterprise looking to run an LLM locally & train it? Happy to help for free | 1 | [removed] | 2023-10-04T07:17:33 | https://www.reddit.com/r/LocalLLaMA/comments/16zgkio/enterprise_looking_to_run_an_llm_locally_train_it/ | itssualgoodman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zgkio | false | null | t3_16zgkio | /r/LocalLLaMA/comments/16zgkio/enterprise_looking_to_run_an_llm_locally_train_it/ | false | false | self | 1 | null |
Mistral 7B with a conversational Agent on tabular data | 1 | [removed] | 2023-10-04T06:35:48 | https://www.reddit.com/r/LocalLLaMA/comments/16zfwg2/mistral_7b_with_a_conversational_agent_on_tabular/ | ianuvrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zfwg2 | false | null | t3_16zfwg2 | /r/LocalLLaMA/comments/16zfwg2/mistral_7b_with_a_conversational_agent_on_tabular/ | false | false | self | 1 | null |
Think before you speak: Training Language Models With Pause Tokens | 91 | 2023-10-04T05:04:23 | https://twitter.com/arankomatsuzaki/status/1709372124891070915?t=4omztfPu9S9Sj7gCi2OP0A&s=19 | metalman123 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 16zeb9s | false | {'oembed': {'author_name': 'Aran Komatsuzaki', 'author_url': 'https://twitter.com/arankomatsuzaki', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Think before you speak: Training Language Models With Pause Tokens<br><br>- Performing training and inference on LMs with a learnable pause token appended to the input prefix<br>- Gains on 8 tasks, e,g, +18% on SQuAD<a href="https://t.co/snkfjFZhhZ">https://t.co/snkfjFZhhZ</a> <a href="https://t.co/wUhZspVtSj">pic.twitter.com/wUhZspVtSj</a></p>— Aran Komatsuzaki (@arankomatsuzaki) <a href="https://twitter.com/arankomatsuzaki/status/1709372124891070915?ref_src=twsrc%5Etfw">October 4, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/arankomatsuzaki/status/1709372124891070915', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_16zeb9s | /r/LocalLLaMA/comments/16zeb9s/think_before_you_speak_training_language_models/ | false | false | 91 | {'enabled': False, 'images': [{'id': '5nBmZ8cZnKLof2ZQ-eDto_1rhK1qXwyS3mUMUk6C1qo', 'resolutions': [{'height': 103, 'url': 'https://external-preview.redd.it/x21wKA2yREmo9oW2LmQ2F-OgYxiXOYZv3LxbjsxUtGs.jpg?width=108&crop=smart&auto=webp&s=73f91147cab5ac1059d3a594e96a75615714975d', 'width': 108}], 'source': {'height': 134, 'url': 'https://external-preview.redd.it/x21wKA2yREmo9oW2LmQ2F-OgYxiXOYZv3LxbjsxUtGs.jpg?auto=webp&s=d309be02a05c3bd9b9cf4a92f455b1cd4302bf80', 'width': 140}, 'variants': {}}]} | ||
0.1 Token/second or 280 second average Zarablend L2 7B - GGUF Q2 4 gb Ram with Kobold | 11 | 2023-10-04T04:53:13 | Merchant_Lawrence | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16ze3vo | false | null | t3_16ze3vo | /r/LocalLLaMA/comments/16ze3vo/01_tokensecond_or_280_second_average_zarablend_l2/ | false | false | 11 | {'enabled': True, 'images': [{'id': 'lWQbOTj2iGpkLehcVSgWyJbUw8057a9zZ7aaNSd-k-g', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/e2gdq1pp94sb1.png?width=108&crop=smart&auto=webp&s=4017a7470cf4bd4467d5f7df2af22666d2c2c72d', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/e2gdq1pp94sb1.png?width=216&crop=smart&auto=webp&s=b98ef3d8a4ba5c8160f1e678019a5d78fb54ce81', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/e2gdq1pp94sb1.png?width=320&crop=smart&auto=webp&s=78865311e1069e822133fbc29a95e6dc426de66f', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/e2gdq1pp94sb1.png?width=640&crop=smart&auto=webp&s=651b259247e0652952a22e67b99d8c86a28575f7', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/e2gdq1pp94sb1.png?width=960&crop=smart&auto=webp&s=36a8346d5389a830fc55b763df96358cf8e89770', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/e2gdq1pp94sb1.png?width=1080&crop=smart&auto=webp&s=dae164369a264013c1ed0214046dc6c606bbc8fb', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/e2gdq1pp94sb1.png?auto=webp&s=bb1cdfdb00a08e277f33d900103e55472ebdef91', 'width': 1920}, 'variants': {}}]} | |||
Can i connect smart watch to LLaMA | 1 | Is there any way I can connect my smartwatch to a LLM running on cloud, the watch will listen to everything going on around the person and take notes, which can be accessed later by the person ?? | 2023-10-04T04:44:21 | https://www.reddit.com/r/LocalLLaMA/comments/16zdxx5/can_i_connect_smart_watch_to_llama/ | spankmedaddy_69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zdxx5 | false | null | t3_16zdxx5 | /r/LocalLLaMA/comments/16zdxx5/can_i_connect_smart_watch_to_llama/ | false | false | self | 1 | null |
Meeting minutes generator | 0 | I would like to replicate the tutorial suggested [here](https://platform.openai.com/docs/tutorials/meeting-minutes) to create meetings minutes but locally, avoiding sending out to openAI the content discussed. Which model is better suited for this task? | 2023-10-04T03:49:08 | https://www.reddit.com/r/LocalLLaMA/comments/16zcwyh/meeting_minutes_generator/ | valkiii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zcwyh | false | null | t3_16zcwyh | /r/LocalLLaMA/comments/16zcwyh/meeting_minutes_generator/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'XjYeH9UKcgyOWN9BfsAvdRtig11MoY4NbGzf8T8tohc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=108&crop=smart&auto=webp&s=3f9105ef3e7b58900470546335a78fe533005bb4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=216&crop=smart&auto=webp&s=e310d9b29c06d6b5ce641a7f1506dd263a8729f6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=320&crop=smart&auto=webp&s=69281c1b38ced0e01cea5a12968d4b48674c98a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=640&crop=smart&auto=webp&s=9396991084b19bf32b78fe6cd3e4cde78b575245', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=960&crop=smart&auto=webp&s=91c96b3a35c2ea5ffe4533d22d45928392c76bc9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=1080&crop=smart&auto=webp&s=57787310c9a5f80fe35172195dbc646aa4473b50', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?auto=webp&s=5552c2aeef00909b24d8e8eed641cda7ae28bb4b', 'width': 1200}, 'variants': {}}]} |
Llama Function Calling with GBNF | 0 | I was able to get the Mistral model to successfully make a function call using the JSON GBNF in [llama-cpp-python](https://github.com/abetlen/llama-cpp-python/discussions/397).
It's fairly consistent.
[Check out the notebook](https://github.com/teleprint-me/py.gpt.prompt/blob/main/docs/notebooks/llama_cpp_grammar_api.ipynb) and let me know what you all think.
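The core of it boils down to something like the following simplified sketch (not the exact notebook code; the model path is a placeholder and `json.gbnf` is the JSON grammar that ships with llama.cpp):

```python
from llama_cpp import Llama, LlamaGrammar

llm = Llama(model_path="./mistral-7b-instruct-v0.1.Q4_K_M.gguf", n_ctx=4096)  # placeholder

# Grammar-constrained sampling: the model can only emit tokens that keep
# the output valid JSON, so the "function call" always parses.
grammar = LlamaGrammar.from_file("json.gbnf")

prompt = (
    "[INST] You can call get_current_weather(city: str). Respond only with a JSON "
    'object like {"function": "...", "arguments": {...}}.\n'
    "What's the weather in Paris? [/INST]"
)

out = llm(prompt, grammar=grammar, max_tokens=256, temperature=0.0)
print(out["choices"][0]["text"])
# e.g. {"function": "get_current_weather", "arguments": {"city": "Paris"}}
```

Because sampling is grammar-constrained, the remaining failure mode is the model picking the wrong function or arguments, not malformed output.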
I provided an extra function for the extra curious. Note that I haven't tested the use of more complex function calling and that it's a working prototype. | 2023-10-04T03:14:17 | https://www.reddit.com/r/LocalLLaMA/comments/16zc81u/llama_function_calling_with_gbnf/ | teleprint-me | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zc81u | false | null | t3_16zc81u | /r/LocalLLaMA/comments/16zc81u/llama_function_calling_with_gbnf/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'oR9SFLE90Dww5d_5FHE7p8ioVNJKjaM4BGJWSsYdwYk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DgPoAOAzAQC76VKafHUh1iezj8wuNCp-rllHgRulttw.jpg?width=108&crop=smart&auto=webp&s=7caab59efeaf513994a5f7b076fa880d07e10a77', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DgPoAOAzAQC76VKafHUh1iezj8wuNCp-rllHgRulttw.jpg?width=216&crop=smart&auto=webp&s=2f0049d248080e78feeb2fe5b5d8c0b9cd26710a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DgPoAOAzAQC76VKafHUh1iezj8wuNCp-rllHgRulttw.jpg?width=320&crop=smart&auto=webp&s=79da2781bb8003b09c98d96385be3a24ec620eb7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DgPoAOAzAQC76VKafHUh1iezj8wuNCp-rllHgRulttw.jpg?width=640&crop=smart&auto=webp&s=6e9a4802384532fb551a81792a29e369dda486da', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DgPoAOAzAQC76VKafHUh1iezj8wuNCp-rllHgRulttw.jpg?width=960&crop=smart&auto=webp&s=063687e02f58b114e4114118013aedcb00868403', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DgPoAOAzAQC76VKafHUh1iezj8wuNCp-rllHgRulttw.jpg?width=1080&crop=smart&auto=webp&s=8a31a11a7dff8153352ba6835a856088b43444fe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DgPoAOAzAQC76VKafHUh1iezj8wuNCp-rllHgRulttw.jpg?auto=webp&s=ef43543db7643085a06f64dc9dba36f0735737cd', 'width': 1200}, 'variants': {}}]} |
[NEWBIE] desperately seeking help on finetuning llama2 | 1 | **Background**
I barely know anything about LLMs, and I only learnt Python recently. I'm a total newbie in the IT field. My internship requires me to fine-tune Llama 2 locally for internal use. I've been struggling for quite a few months now, learning everything from the internet, but I keep encountering tons of problems I don't even know how to deal with. I strictly followed tutorials from everywhere, yet I get many errors they never mention. I'm really lost, and I can't bear to lose my internship TT.
**Current Situation**
The first goal is getting a before/after comparison of the model pre- and post-finetuning. So far I have only managed to get the pre-finetuned model running in text-generation-webui (textgen) on Google Colab. I'm trying to use the "Training" tab in textgen to do the fine-tuning, but I keep getting errors such as "ValueError: Target modules ['q_proj', 'v_proj'] not found in the base model. Please check the target modules and try again." or errors saying there's a format problem with my dataset. I checked the dataset format, and there should be no problem (alpaca/alpaca-chatbot). This is just the tip of the iceberg of what I've faced so far.
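In case it helps with diagnosis, this is the kind of PEFT setup the tutorials show. My suspicion is that the module names in whatever model I loaded simply don't contain `q_proj`/`v_proj` (for example because it's a quantized or non-Llama checkpoint), which a snippet like this should reveal:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder

# Print the leaf module names; the LoRA target_modules must appear among them.
print(sorted({name.split(".")[-1] for name, _ in model.named_modules() if name}))

config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # must match names printed above
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```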
**Kindly asking for your generous help**
Does anyone here familiar with LLM finetuning? Im really reaching out for your help. If you could show me step-by-step instructions, Ill be deeply grateful for your generosity on helping me. | 2023-10-04T02:06:11 | https://www.reddit.com/r/LocalLLaMA/comments/16zar51/newbie_desperately_seeking_help_on_finetuning/ | Goo-Bello-Cello | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16zar51 | false | null | t3_16zar51 | /r/LocalLLaMA/comments/16zar51/newbie_desperately_seeking_help_on_finetuning/ | false | false | self | 1 | null |
Where do you fine-tune your LLMs? | 48 | I am curious how people in here go about fine-tuning their LLMs.
Do you rent a GPU in the cloud?
Do you do it at your local machine?
Do you use Google Colab?
Any other options? | 2023-10-04T01:31:47 | https://www.reddit.com/r/LocalLLaMA/comments/16z9zra/where_do_you_finetune_your_llms/ | Acceptable_Bed7015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z9zra | false | null | t3_16z9zra | /r/LocalLLaMA/comments/16z9zra/where_do_you_finetune_your_llms/ | false | false | self | 48 | null |
What models should I be looking at today? | 33 | I'm really just looking to do some experimentation and tweaking with locally hosted LLMs. Some data you don't want to trust to AutoGPT, and I'd like to see how capable models that can be hosted affordably can do.
Given the following platform, what are the current models I should be considering and testing?
* Ryzen 9 5900X
* 128G RAM
* RTX 4090 with 24G RAM
It's really more data analysis and possibly business automation that I'm looking at here. That said an uncensored model is probably preferable. The ability to code is certainly a positive as well.
Thanks. | 2023-10-03T22:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/16z5eat/what_models_should_i_be_looking_at_today/ | retire-early | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z5eat | false | null | t3_16z5eat | /r/LocalLLaMA/comments/16z5eat/what_models_should_i_be_looking_at_today/ | false | false | self | 33 | null |
Best tools for automated generation of questions and answers from unstructured text | 5 | I’m a researcher who is very interested in training LORAs on top of a small LLM (e.g. Mistral-7B-OpenOrca) with textbooks from my specialty. Does anyone here have any recommendations for tools to transform this text to pairs of questions and answers? Thanks! | 2023-10-03T21:39:34 | https://www.reddit.com/r/LocalLLaMA/comments/16z4dkz/best_tools_for_automated_generation_of_questions/ | SirStagMcprotein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z4dkz | false | null | t3_16z4dkz | /r/LocalLLaMA/comments/16z4dkz/best_tools_for_automated_generation_of_questions/ | false | false | self | 5 | null |
Which models are Free to Deploy as part of a Commercial Product | 2 | Basically, are there any models that can be run on user's computers along with shipped software (as in, not just running on some server I have)? Bonus if you know of lighter weight ones that allow this in the license.
In other words, does 'free for commercial use' apply only to the text output of the model, or to the model itself? | 2023-10-03T21:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/16z4aqe/which_models_are_free_to_deploy_as_part_of_a/ | marmoset3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z4aqe | false | null | t3_16z4aqe | /r/LocalLLaMA/comments/16z4aqe/which_models_are_free_to_deploy_as_part_of_a/ | false | false | self | 2 | null |
Can someone explain simply: How/why do small models (llama-2 7b) outperform larger ones (chinchilla 70b)? | 33 | Obviously "efficiency improvements" is the answer, but can someone explain the details of this more in as simple way? | 2023-10-03T21:33:55 | https://www.reddit.com/r/LocalLLaMA/comments/16z486f/can_someone_explain_simply_howwhy_do_small_models/ | cold-depths | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z486f | false | null | t3_16z486f | /r/LocalLLaMA/comments/16z486f/can_someone_explain_simply_howwhy_do_small_models/ | false | false | self | 33 | null |
LLMs inference comparison | 86 | 2023-10-03T21:08:24 | Wrong_User_Logged | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16z3jps | false | null | t3_16z3jps | /r/LocalLLaMA/comments/16z3jps/llms_inference_comparison/ | false | false | 86 | {'enabled': True, 'images': [{'id': 'LjQ9UaV8upfruBybJYIkZ0qVlF6UJbmUoxcFR_vQW3E', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/44kjvezny1sb1.png?width=108&crop=smart&auto=webp&s=cad36c9b7f114d26fef6a7b2872214a7df5a8abd', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/44kjvezny1sb1.png?width=216&crop=smart&auto=webp&s=dce4598af8d7c0e7b53829efff4ef728c618b494', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/44kjvezny1sb1.png?width=320&crop=smart&auto=webp&s=6b4dcda5ebe5bfd33b5fbc605c9f4a631a05efd2', 'width': 320}, {'height': 371, 'url': 'https://preview.redd.it/44kjvezny1sb1.png?width=640&crop=smart&auto=webp&s=120d9e654d0d977db9513563d2b223c53b251000', 'width': 640}, {'height': 556, 'url': 'https://preview.redd.it/44kjvezny1sb1.png?width=960&crop=smart&auto=webp&s=fefa381da847d8cdd4b9cef45177df4ed347c6cc', 'width': 960}, {'height': 626, 'url': 'https://preview.redd.it/44kjvezny1sb1.png?width=1080&crop=smart&auto=webp&s=bf27ded1eec1faeb506b4adc8306a1c6c8bb3779', 'width': 1080}], 'source': {'height': 1380, 'url': 'https://preview.redd.it/44kjvezny1sb1.png?auto=webp&s=3487001b89fee388b7e61aca5b1a74b257dc3765', 'width': 2379}, 'variants': {}}]} | |||
LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B | 201 | This is a follow-up to my [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/) to take a closer look at the most popular new Mistral-based finetunes.
I actually updated the previous post with my reviews of Synthia 7B v1.3 and Mistral 7B OpenOrca, but the original version of Mistral 7B OpenOrca was broken (outputting title and commentary after every message and adding broken ChatML sequences) and since I had to redownload and retest anyway, I decided to make a new post for these three models.
As usual, I've evaluated these models for their chat and role-playing performance using the same methodology:
- Same (complicated and limit-testing) long-form conversations with all models
- including a complex character card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)), "MGHC", chosen specifically for these reasons:
- NSFW (to test censorship of the models)
- popular (on Chub's first page, so it's not an obscure scenario, but one of the most popular ones)
- big (biggest model on the page, >2K tokens by itself, for testing model behavior at full context)
- complex (more than a simple 1:1 chat, it includes instructions, formatting, storytelling, and multiple characters)
- and my own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
- over dozens of messages, going to full 8K context and beyond, noting especially good or bad responses
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.4 frontend
- [KoboldCpp](https://github.com/LostRuins/koboldcpp) v1.45.2 backend
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf) *and* official prompt format ("ChatML")
And here are the results (👍 = recommended, ➕ = worth a try, ➖ not recommended, ❌ = unusable):
- ➖ **[dolphin-2.0-mistral-7B](https://huggingface.co/TheBloke/dolphin-2.0-mistral-7B-GGUF)** (Q8_0)
- Amy, Roleplay: She had an idea of her own from the start and kept pushing it relentlessly. After a little over a dozen messages, needed to be asked to continue repeatedly to advance the plot, and the writing got rather boring (very long messages with little worthwhile content) even during NSFW scenes. Misunderstood instructions and intent. Seemed to be more creative than intelligent. Confused about body parts after a little over 50 messages.
- Amy, ChatML: Used asterisk actions and (lots of) emojis, mirroring the greeting message (which had actions and one emoji). Misunderstood instructions and intent. Confused about who's who and body parts after 24 messages. Kept asking after every message if the scene was satisfying or should be changed.
- MGHC, Roleplay: No analysis on its own and when asked for analysis, gave one but was incomplete. Wrote what user said and did. Repeated and acted out what I wrote instead of continuing my writing, so I felt more like giving instructions than actual roleplaying. Second patient was straight from the examples. When asked for second analysis, it repeated the patient's introduction before giving analysis. Repetition as the scenes played out exactly the same between different patients. Third, fourth, and fifth patient were second patient again. Unusable for such a complex scenario.
- MGHC, ChatML: No analysis on its own. First patient was straight from the examples. Kept prompting me "What do you say?". Wrote what user said and did. Finished the whole scene on its own in a single message. Following three patients were unique (didn't test more), but the scenes played out exactly the same between different patients. During this test, the ChatML format worked better than the Roleplay preset, but it's still unusable because of severe repetition.
- **Conclusion:** With the current hype for Mistral as a base for 7Bs, maybe I'm expecting too much, especially since I'm more used to bigger models - but this was a letdown!
- 👍 **[Mistral-7B-OpenOrca](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GGUF)** (Q8_0)
- Amy, Roleplay: Excellent writing including actions and taking into account background details. NSFW lacked detail and extreme NSFW required confirmation/persistence.
- Amy, ChatML: Much shorter responses, 40-80 tokens on average, not enough for the writing to shine as much. NSFW even less detailed because of short messages. Needed to be asked to continue repeatedly to advance the plot.
- MGHC, Roleplay: No analysis on its own. Wrote what user said and did. Second and third patient were straight from the examples, fourth patient was first patient again. Sometimes tried to finish the whole scene on its own in a single message. Repetition as the scenes played out exactly the same between different patients.
- MGHC, ChatML: Gave analysis on its own. Wrote what user said and did. Finished the whole scene on its own in a single message. Repetition as the scenes played out exactly the same between different patients.
- **Conclusion:** Using the Roleplay instruct mode preset, this model had amazing writing, much better than many models I tested, including even some 70Bs. Didn't look or feel like a small model at all. Using the official ChatML prompt format, the writing was not as good, probably because messages were much shorter. Both formats didn't help MGHC which apparently is too complex a scenario for 7B models - even smart 7Bs. But yes, I start seeing Mistral's appeal with finetunes like this, as it does compare favorably to 13Bs! Can't wait for bigger Mistral bases...
- ➕ **[Synthia-7B-v1.3](https://huggingface.co/Undi95/Synthia-7B-v1.3-GGUF)** (Q8_0)
- Amy: When asked about limits, talked a lot about consent, diversity, ethics, inclusivity, legality, responsibility, safety. Gave some SJW vibes in multiple messages. But despite mentioning limits before, didn't adhere to any during NSFW. Some anatomical misconceptions (could be training data or just 7B brains) and later got confused about who's who and misunderstood instructions (might be just 7B brains). But no repetition issues!
- MGHC: Gave analysis on its own, but contents were rather boring. Wrote what User said and did. Repeated full analysis after every message. Some anatomical misconceptions. Ignored instructions. Noticeable repetition with second patient. Third patient was the same as the first again. Looping repetition, became unusable that way!
- **Conclusion:** Amy worked better with the Synthia finetune than the original Mistral, especially since I didn't notice repetition issues during the test. But MGHC was just as broken as before, so it's probably too complicated for mere 7Bs. In conclusion, Synthia has improved Mistral, but of course it remains a 7B and I'd still pick Mythalion 13B or even better one of the great 70Bs like Xwin, Synthia, or Hermes over this! If Mistral releases a 34B with the quality of a 70B, *then* things will get really exciting... Anyway, Synthia was the best 7B until I tested the updated/fixed OpenOrca, and now I think that might have a slight edge, so I've given that my thumbs-up, but Synthia is definitely still worth a try!
So there you have it. Still, despite all the hype, 7B remains 7B and stays as far removed from 70B as that is from GPT-4. If you can run bigger models, it's better to do so. But it's good to see the quality at the lower end to improve like this and hopefully Mistral releases bigger bases as well to push the envelope even further.
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons:
- [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/)
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2 | 2023-10-03T21:05:17 | https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z3goq | false | null | t3_16z3goq | /r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/ | false | false | self | 201 | {'enabled': False, 'images': [{'id': '2g4MtoKvhQOBCmeiXB1qv1h_5M24BeeYF64zcf4-rfg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=108&crop=smart&auto=webp&s=bbe38cbb6d4f7a7e6dd7d3c8b79c4ac9ba965545', 'width': 108}, {'height': 284, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=216&crop=smart&auto=webp&s=7176d7a9240577d0428f0fa6dd69cc116069db7e', 'width': 216}, {'height': 421, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=320&crop=smart&auto=webp&s=55948460ef9e8ecd398aad76e904f3b5467f88f9', 'width': 320}, {'height': 843, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?width=640&crop=smart&auto=webp&s=5573c682f53f049c8482e14fac6c72b4c9c57aab', 'width': 640}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/0z4RbV35RPDd2wqL6Oyz-uwz9kKNBAH0ePojGplYP2M.jpg?auto=webp&s=2371c0b9e3efdc70c7dfdf61f3993aed40b08e09', 'width': 842}, 'variants': {}}]} |
Epyc Genoa vs Threadripper Pro 7000? | 5 | TR Pro 7000 Pro/Con:
- Pro: higher single thread clock speed
- Con: 8 channels DDR5 is less than 12 channels DDR5
Not sure:
- if TR will have RDIMM support
- if TR will ever have 16 DIMM slot boards
- if TR will allow memory OC. I assume RDIMMs cannot be OC'd, so with UDIMMs the limit would be 8x48GB = 384GB total? 64GB DDR5 UDIMMs may arrive down the line, which would make 512GB possible.
- if TR will allow CPU OC. Even without OC, it should do way better than Epyc in this respect.
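To put rough numbers on the 8 vs 12 channel difference (back-of-envelope peak bandwidth, assuming DDR5-4800 on both platforms for an apples-to-apples comparison):

```python
# Peak theoretical bandwidth: transfers/s * 8 bytes per 64-bit channel.
per_channel_gb_s = 4800e6 * 8 / 1e9      # ~38.4 GB/s per channel

print(f"Epyc Genoa (12 ch): {12 * per_channel_gb_s:.0f} GB/s")   # ~461 GB/s
print(f"TR Pro 7000 (8 ch): {8 * per_channel_gb_s:.0f} GB/s")    # ~307 GB/s
```

Sustained bandwidth will be lower in practice, but the roughly 1.5x ratio is what matters for token generation, which is memory-bound.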
For LLM inference on CPU we care about memory bandwidth, so we would like to have as many channels of as fast memory as possible. But we also care about single core speed. | 2023-10-03T20:49:45 | https://www.reddit.com/r/LocalLLaMA/comments/16z322p/epyc_genoa_vs_threadripper_pro_7000/ | 0xd00d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z322p | false | null | t3_16z322p | /r/LocalLLaMA/comments/16z322p/epyc_genoa_vs_threadripper_pro_7000/ | false | false | self | 5 | null |
(New Model) Rift Coder 7B. Python & TypeScript Fine-Tuned Code Llama for IDE Use | 30 | (Disclosure: I’m part of the team behind this lol)
We’re releasing [Rift Coder 7B](https://github.com/morph-labs/rift), our Python & TypeScript finetune of Code-Llama-7b-hf. We built it on top of [Glaive AI’s glaive-coder-7b](https://huggingface.co/glaiveai/glaive-coder-7b).
Fast facts – it's free, it's open source, and it's designed to run locally. Runs at >20 tok/s on Apple M2 with llama.cpp. It's tailored specifically for contextual code editing – the kind you do in our VS Code extension, [Rift](https://marketplace.visualstudio.com/items?itemName=Morph.rift-vscode), an AI-native language server for your personal AI software engineer. With Rift, our guiding principles have been to be open-source, private, secure, & on-device. We wanted a code model compatible with this, but we found that models either a) worked well for contextual code editing but were difficult/expensive to run locally, or b) were able to run locally but were suboptimal for contextual code editing. So we built our own answer :) Rift Coder 7B isn't a chat model, but rather a model built specifically for contextual code editing use cases.
Raw link: (https://huggingface.co/morph-labs/rift-coder-v0-7b-gguf)
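If you just want to smoke-test the raw model outside the extension, it loads like any other GGUF, e.g. with llama-cpp-python (the filename below is a placeholder for whichever quant you grab from the repo; see the model card for the editing prompt format the extension uses):

```python
from llama_cpp import Llama

llm = Llama(model_path="./rift-coder-v0-7b.q4_k_m.gguf", n_ctx=4096)  # placeholder filename

out = llm("def fibonacci(n):", max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```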
Rift integrates with a number of agents both developed by the community [like GPT Engineer, Mentat, smol-developer & Aider] & in-house [our static analysis-based Type Inference, Chat, Doc Gen & Code Editing tools].
(External agent links, in case you want to see them individually)
* [GPT Engineer](https://www.github.com/antonosika/gpt-engineer)
* [Mentat](https://github.com/biobootloader/mentat)
* [smol-developer](https://www.github.com/smol-ai/developer)
* [Aider](https://www.github.com/paul-gauthier/aider)
Rift extension (the best way to actually test Rift Coder 7B): (https://marketplace.visualstudio.com/items?itemName=Morph.rift-vscode)
If you’re curious, the team behind this is [Morph Labs](www.morph.so) :) would love any feedback y’all have | 2023-10-03T20:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/16z1xn2/new_model_rift_coder_7b_python_typescript/ | pranavmital | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z1xn2 | false | null | t3_16z1xn2 | /r/LocalLLaMA/comments/16z1xn2/new_model_rift_coder_7b_python_typescript/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'v6025f4gMRjAE7sZZOCghGjaxWVekyQv_0dBhepaO6w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r6atDvXw8LXLaZxTVVjMyJD8Shx7pUeFp5k79Jyt_Po.jpg?width=108&crop=smart&auto=webp&s=414c6db2df5be562ea5d25a6c9741ab333ca9319', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r6atDvXw8LXLaZxTVVjMyJD8Shx7pUeFp5k79Jyt_Po.jpg?width=216&crop=smart&auto=webp&s=3a762524f2b9f011b1123e13b9b71ab7bbb86ace', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r6atDvXw8LXLaZxTVVjMyJD8Shx7pUeFp5k79Jyt_Po.jpg?width=320&crop=smart&auto=webp&s=82757cc1e7f8bf1dea718b3229f6ad849e8264a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r6atDvXw8LXLaZxTVVjMyJD8Shx7pUeFp5k79Jyt_Po.jpg?width=640&crop=smart&auto=webp&s=76e8ab2268d2e7aa987454af8def179b641e55db', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r6atDvXw8LXLaZxTVVjMyJD8Shx7pUeFp5k79Jyt_Po.jpg?width=960&crop=smart&auto=webp&s=08c5b95d2b4b618a742e5e247803e71d19ddfdd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r6atDvXw8LXLaZxTVVjMyJD8Shx7pUeFp5k79Jyt_Po.jpg?width=1080&crop=smart&auto=webp&s=4d81faec03705c25529cd80fcb19bc19d0fbd03e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r6atDvXw8LXLaZxTVVjMyJD8Shx7pUeFp5k79Jyt_Po.jpg?auto=webp&s=d4d21d7799a48e88da7864b260854f12968e9c3e', 'width': 1200}, 'variants': {}}]} |
Incomplete response from LLAMA2 & Other QuantisedLLMs | 1 | [removed] | 2023-10-03T19:25:19 | https://www.reddit.com/r/LocalLLaMA/comments/16z0y29/incomplete_response_from_llama2_other/ | QuantumCatIsDead | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z0y29 | false | null | t3_16z0y29 | /r/LocalLLaMA/comments/16z0y29/incomplete_response_from_llama2_other/ | false | false | self | 1 | null |
AI Engineering Survey - vote for r/LocalLLaMA | 1 | [removed] | 2023-10-03T19:24:09 | https://www.reddit.com/r/LocalLLaMA/comments/16z0wyx/ai_engineering_survey_vote_for_rlocalllama/ | CombinationMother337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z0wyx | false | null | t3_16z0wyx | /r/LocalLLaMA/comments/16z0wyx/ai_engineering_survey_vote_for_rlocalllama/ | false | false | self | 1 | null |
Specific Build for Multi-GPU (4090 or A100) Training Rigs | 2 | So I have somehow talked my institution into a $40k check to buy/build an entry-level on-prem rig for fine-tuning models of at least 7b size, but hopefully 40b or even 70b (qlora). The catch is that this is classified computing so I kind of get one swing to get everything I need and then pass through into that environment.
Can someone recommend a specific build, or an existing server configuration that would work well and is in that dollar range? In particular I'm hoping for efficient parallelism: splitting a model up across cards and then using one card, swapping to the other, back and forth forever sounds less than ideal. | 2023-10-03T19:18:09 | https://www.reddit.com/r/LocalLLaMA/comments/16z0ri8/specific_build_for_multigpu_4090_or_a100_training/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z0ri8 | false | null | t3_16z0ri8 | /r/LocalLLaMA/comments/16z0ri8/specific_build_for_multigpu_4090_or_a100_training/ | false | false | self | 2 | null |
If 7B models are so good, wouldn't it be possible to create a MoE using 7B ? | 65 | I'm not an expert in MoE, I was just wondering if there's any chance that an MoE of say 10 7B models would perform much better than a 70B model.
Regarding MoE what's the dataset like ? Would we need 10 completely distinct dataset for both pre train and fine tuning or can we use a common dataset for pretrain ? | 2023-10-03T19:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/16z0d8g/if_7b_models_are_so_good_wouldnt_it_be_possible/ | IlEstLaPapi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z0d8g | false | null | t3_16z0d8g | /r/LocalLLaMA/comments/16z0d8g/if_7b_models_are_so_good_wouldnt_it_be_possible/ | false | false | self | 65 | null |
Biggest Issues with LLMs? | 3 | What are the biggest issues y'all face in relation to LLMs? This can be on the deployment side, the development side, or the data side.
If y'all have a solution list it as a response 🙏. | 2023-10-03T18:51:58 | https://www.reddit.com/r/LocalLLaMA/comments/16z03yg/biggest_issues_with_llms/ | hai_cben | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16z03yg | false | null | t3_16z03yg | /r/LocalLLaMA/comments/16z03yg/biggest_issues_with_llms/ | false | false | self | 3 | null |
My take on Microsoft's `autogen` for multi-agent chat: `Roy` | 61 | I stumbled upon Microsoft's [autogen](https://www.reddit.com/r/LocalLLaMA/comments/16twr5n/autogen_from_microsoft_enable_nextgen_large/) a few days ago and was pretty taken by its potential. However, some aspects felt a tad convoluted for my taste.
So, I spent the weekend putting together [Roy](https://github.com/JosefAlbers/Roy). I tried to replicate some of its features, but in what I hoped would be a simpler, more straightforward way.
It uses a 4-bit quantized WizardCoder-Python-7B model (only 3.9GB in size) instead of ChatGPT for text generation, so not only can it run locally on consumer GPUs or free-tier Colab notebooks, it also eliminates the need for an OpenAI API key.
It's still very much in its infancy and there's a ton to iron out. If anyone fancies taking a look, feedback or suggestions would be awesome!
For those who'd like a hands-on experience without any setup, I've also set up a [colab notebook](https://colab.research.google.com/github/JosefAlbers/Roy/blob/main/quickstart.ipynb) for you to play around with Roy instantly.
| 2023-10-03T18:34:33 | https://www.reddit.com/r/LocalLLaMA/comments/16yzo57/my_take_on_microsofts_autogen_for_multiagent_chat/ | Independent_Back7067 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yzo57 | false | null | t3_16yzo57 | /r/LocalLLaMA/comments/16yzo57/my_take_on_microsofts_autogen_for_multiagent_chat/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': 'xhpsG6HDB7olhXJoQjVVznx-y-jeBsmUzC_0aiveJ2A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=108&crop=smart&auto=webp&s=35277d804cd3da5626fdb01ed5078b39ce90e164', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=216&crop=smart&auto=webp&s=c9b227125e40627f846d0b9f511998de0a18b20c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=320&crop=smart&auto=webp&s=fbee41421b6ec806e097f8bad4a853135fc2e5a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=640&crop=smart&auto=webp&s=144c8c0a179bb01429f41cdf7d6adc777e46db30', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=960&crop=smart&auto=webp&s=34747bc942aa13a1aa979cf7270ff0c3789abe0a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=1080&crop=smart&auto=webp&s=a0ee7fca622670b149f61837edfb26e143f1eef8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?auto=webp&s=5c0b034cfefdb2fd0ed7fed0206d22e9c8ad41d1', 'width': 1200}, 'variants': {}}]} |
My take on Microsoft's `autogen` for multi-agent chat: Roy | 1 | [removed] | 2023-10-03T18:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/16yzdku/my_take_on_microsofts_autogen_for_multiagent_chat/ | JosefAlbers05 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yzdku | false | null | t3_16yzdku | /r/LocalLLaMA/comments/16yzdku/my_take_on_microsofts_autogen_for_multiagent_chat/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xhpsG6HDB7olhXJoQjVVznx-y-jeBsmUzC_0aiveJ2A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=108&crop=smart&auto=webp&s=35277d804cd3da5626fdb01ed5078b39ce90e164', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=216&crop=smart&auto=webp&s=c9b227125e40627f846d0b9f511998de0a18b20c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=320&crop=smart&auto=webp&s=fbee41421b6ec806e097f8bad4a853135fc2e5a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=640&crop=smart&auto=webp&s=144c8c0a179bb01429f41cdf7d6adc777e46db30', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=960&crop=smart&auto=webp&s=34747bc942aa13a1aa979cf7270ff0c3789abe0a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?width=1080&crop=smart&auto=webp&s=a0ee7fca622670b149f61837edfb26e143f1eef8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DB905l8usrb_FJM6kuzmSpsFddM17tuKMeV0xNq4t2k.jpg?auto=webp&s=5c0b034cfefdb2fd0ed7fed0206d22e9c8ad41d1', 'width': 1200}, 'variants': {}}]} |
[P] An HF-space to check if your GPUs can run a model | 42 | Hi!
I've created a HuggingFace space to check if you can run a model with a specific GPU. It takes into account the required VRAM for:
* Inference
* Training with Adam
* LoRa fine-tuning of % trainable parameters
I plan to incorporate parameters `r`, `alpha`, and `target_modules` for LoRa to determine the percentage of trainable parameters.
LINK: [https://huggingface.co/spaces/Vokturz/can-it-run-llm](https://huggingface.co/spaces/Vokturz/can-it-run-llm)
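For intuition, the inference number is roughly parameters × bytes per parameter plus some headroom; here's a much-simplified back-of-envelope version:

```python
def rough_inference_vram_gb(params_billions: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Weights only, padded ~20% for activations / KV cache."""
    return params_billions * bytes_per_param * overhead

print(rough_inference_vram_gb(7, 2.0))   # fp16 7B  -> ~16.8 GB
print(rough_inference_vram_gb(7, 0.5))   # 4-bit 7B -> ~4.2 GB
```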
https://preview.redd.it/mk8ld3jj11sb1.png?width=1510&format=png&auto=webp&s=2f3a409fc8dfa9ac97c9f1fc7167b13e29f8c8ed | 2023-10-03T18:05:11 | https://www.reddit.com/r/LocalLLaMA/comments/16yyxku/p_an_hfspace_to_check_if_your_gpus_can_run_a_model/ | Vokturz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yyxku | false | null | t3_16yyxku | /r/LocalLLaMA/comments/16yyxku/p_an_hfspace_to_check_if_your_gpus_can_run_a_model/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'xhL5ZT-S6n0LAbm1_N5rTg5_WqzREWLhfW6d_AK0ARI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RA4pB398kQgGJN3O_bQHJPPy145ONUZLtHNnLWscUPU.jpg?width=108&crop=smart&auto=webp&s=543c9e7464030818c35b0396406def19a7a9fa0d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RA4pB398kQgGJN3O_bQHJPPy145ONUZLtHNnLWscUPU.jpg?width=216&crop=smart&auto=webp&s=c9ad62e481a07145b28738b4c5f44d01e25ebe2b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RA4pB398kQgGJN3O_bQHJPPy145ONUZLtHNnLWscUPU.jpg?width=320&crop=smart&auto=webp&s=a8799feaa66acbd9a80ed858520426d344be91b9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RA4pB398kQgGJN3O_bQHJPPy145ONUZLtHNnLWscUPU.jpg?width=640&crop=smart&auto=webp&s=47204d5e6903f0e0b65c4537ff29e9c361f7dc5d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RA4pB398kQgGJN3O_bQHJPPy145ONUZLtHNnLWscUPU.jpg?width=960&crop=smart&auto=webp&s=2ce3177bfec9d249ab3a6308c911beae0cc17335', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RA4pB398kQgGJN3O_bQHJPPy145ONUZLtHNnLWscUPU.jpg?width=1080&crop=smart&auto=webp&s=234a7efee500169659edcec94efa5edcca52157a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RA4pB398kQgGJN3O_bQHJPPy145ONUZLtHNnLWscUPU.jpg?auto=webp&s=d5775911187f67a7d463bb9097c072c2a526c767', 'width': 1200}, 'variants': {}}]} | |
Already have a 3090, looking to host 70b models locally and was wondering if I could mix/match and use a 4090 alongside it? | 5 | Is it possible to load a 70b model and split the memory between 2 different graphics cards (one being a 4090, and one being a 3090)? | 2023-10-03T17:19:20 | https://www.reddit.com/r/LocalLLaMA/comments/16yxro1/already_have_a_3090_looking_to_host_70b_models/ | Oninaig | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yxro1 | false | null | t3_16yxro1 | /r/LocalLLaMA/comments/16yxro1/already_have_a_3090_looking_to_host_70b_models/ | false | false | self | 5 | null |
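One way people typically attempt this split is to let `accelerate` shard the layers across both cards, roughly like the sketch below (the 70B checkpoint name and per-card memory caps are assumptions; GPTQ/ExLlama splits or llama.cpp with a tensor split are alternatives):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-chat-hf"        # placeholder 70B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # 70B fp16 would not fit in 48 GB
    device_map="auto",                              # accelerate spreads layers over all visible GPUs
    max_memory={0: "22GiB", 1: "22GiB"},            # leave headroom on each of the two cards
)
print(model.hf_device_map)                          # shows which layers landed on which card
```

The cards do not have to match; the layers placed on the slower card simply run at that card's pace.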
Local Phi-1.5 running fully in browser for privacy with open-source project BlindChat | 35 | Excited to introduce BlindChat ([https://github.com/mithril-security/blind\_chat](https://github.com/mithril-security/blind_chat)), an open-source, privacy-centric alternative to ChatGPT for in-browser Conversational AI!
We provide full local inference in browser, by using libraries from Hugging Face like [transformers.js](https://huggingface.co/docs/transformers.js/index) or [candle](https://github.com/huggingface/candle) for WASM inference.
We support several small models, the latest being [Phi-1.5](https://huggingface.co/microsoft/phi-1_5), the 1.3B model that beat Llama 2 7B!
As Microsoft’s researchers mention in their paper, the model often produces incorrect code and statements, so its outputs should be treated as suggestions. The model is also not instruction-tuned, so it can be harder to use than a regular chat model. More info is on their model card ([https://huggingface.co/microsoft/phi-1\_5](https://huggingface.co/microsoft/phi-1_5)).
We would love to have your feedback on our project, as we are aiming to build a privacy-first and open-source alternative to ChatGPT! | 2023-10-03T16:13:06 | https://www.reddit.com/r/LocalLLaMA/comments/16yw2xr/local_phi15_running_fully_in_browser_for_privacy/ | Separate-Still3770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yw2xr | false | null | t3_16yw2xr | /r/LocalLLaMA/comments/16yw2xr/local_phi15_running_fully_in_browser_for_privacy/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'Zrvi3TlPMGdSjQcA3Vz5h-tTakibIfhDxZd0dSTJ8Ic', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5wAUDpRnc-37GdXw8tuQl1x0H8f2t1kiEcqJLRYW51w.jpg?width=108&crop=smart&auto=webp&s=e24aec4fb266c4998d4eb1c8eb028bc0deff2d8c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5wAUDpRnc-37GdXw8tuQl1x0H8f2t1kiEcqJLRYW51w.jpg?width=216&crop=smart&auto=webp&s=8ad3edc7ba3839c24185c28e58ad142db9753fe7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5wAUDpRnc-37GdXw8tuQl1x0H8f2t1kiEcqJLRYW51w.jpg?width=320&crop=smart&auto=webp&s=d8dc89ce718eaee2d05dcc87c2f7cd4b99d3274c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5wAUDpRnc-37GdXw8tuQl1x0H8f2t1kiEcqJLRYW51w.jpg?width=640&crop=smart&auto=webp&s=9f7d98ee950e199d1b4f7658294dd9adb71dabbe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5wAUDpRnc-37GdXw8tuQl1x0H8f2t1kiEcqJLRYW51w.jpg?width=960&crop=smart&auto=webp&s=3606d662aaa853663df168f6fbae835e86269ed5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5wAUDpRnc-37GdXw8tuQl1x0H8f2t1kiEcqJLRYW51w.jpg?width=1080&crop=smart&auto=webp&s=e47da54b617202cf4d45e6730ccbb15e711423dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5wAUDpRnc-37GdXw8tuQl1x0H8f2t1kiEcqJLRYW51w.jpg?auto=webp&s=72bbcba1af53229f70c2e4d9d5c0b1bfbc0f6e37', 'width': 1200}, 'variants': {}}]} |
Why don't we use validation/test sets more in LLM fine-tuning? | 8 | I'm curious why fine-tuning large language models doesn't seem to use validation and test sets as much as other machine learning techniques.
In most ML, you split off some data for validation to do early stopping, and some for a final test set to evaluate the model. But with fine-tuning, it's common to just train on the full fine-tuning dataset.
I feel like it could be helpful to reserve some fine-tuning data, like 10% each for validation and testing. Train until validation perplexity stops improving, then evaluate on the test set.
Some argue this could mess up the general knowledge learned during pre-training. But you could use a weighted perplexity score for early stopping - combining pre-train test set and fine-tune validation perplexity.
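A minimal sketch of what that early-stopping rule could look like (the mixing weight and the toy loss numbers below are made up):

```python
import math

def weighted_perplexity(pretrain_test_loss, finetune_val_loss, alpha=0.5):
    """Mix the pre-train test loss and the fine-tune validation loss, then exponentiate."""
    return math.exp(alpha * pretrain_test_loss + (1 - alpha) * finetune_val_loss)

best, patience, bad_epochs = float("inf"), 2, 0
for epoch, (pt_loss, ft_loss) in enumerate([(2.10, 1.60), (2.12, 1.45), (2.20, 1.44), (2.31, 1.45)]):
    score = weighted_perplexity(pt_loss, ft_loss)
    if score < best:
        best, bad_epochs = score, 0
    else:
        bad_epochs += 1
    print(f"epoch {epoch}: weighted ppl {score:.2f}")
    if bad_epochs >= patience:
        print("stopping: fine-tune gains no longer outweigh pre-train degradation")
        break
```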
What do you all think? Are there good reasons not to use validation/test sets more in fine-tuning? Or ways we could incorporate them without hurting the pre-trained knowledge?
Curious to hear perspectives on this, since it seems different from typical rigorous ML evaluation workflows.
For the memes, I gave Claude-instant my poorly written question and asked it to make it "sound better" and it wrote the title+body 🤖🤖🤖🤖 | 2023-10-03T15:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/16yuydd/why_dont_we_use_validationtest_sets_more_in_llm/ | Pawngrubber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yuydd | false | null | t3_16yuydd | /r/LocalLLaMA/comments/16yuydd/why_dont_we_use_validationtest_sets_more_in_llm/ | false | false | self | 8 | null |
Bizarre effect of 4 bits and LoRA in 34B vs. HumanEval (Transformers, BnB, PEFT) | 20 | I have been testing how LoRAs affect the coding abilities of models (especially Phind v2 34B)
Unfortunately, only Transformers still works for this setup (CodeLlama + LoRA + 2 GPUs), so I had to run prompts in batches (4, 8, 16, 32) to keep from going crazy. All HumanEval runs used the Phind v2 Alpaca format with "'''python" added.
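For context, the loading side of this setup looks roughly like the following sketch (the model and adapter names are placeholders for whatever base/LoRA pair is being tested):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Phind/Phind-CodeLlama-34B-v2"      # placeholder base model
adapter_id = "path/to/airoboros-2.2.1-lora"   # placeholder LoRA adapter directory

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,            # the "double_quant" variant in the results below
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)   # attach the adapter on top of the 4-bit base

prompt = "### System Prompt\nYou are an intelligent assistant.\n\n### User Message\nWrite a Python function that reverses a string.\n\n### Assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```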
In 4-bit, Phind v2 + the Airoboros 2.2.1 adapter (despite its different prompt format, Vicuna) performed better than without the adapter, scoring up to 71%, while Airoboros 2.2.1 34B (no adapter) got 55%, Phind v2 in 8-bit(!) got 70%, and 4-bit Phind v2 got around 68%.
Interestingly, the adapter has a significant impact on non-coding replies - i.e. it's mostly de-censored, while plain Phind v2 refused NSFW storytelling 90% of the time. *(Generating captcha-related code is always refused by the base model / mostly refused with the Airo adapter / not refused with the original Airo finetune / adding "Sure!" always works.)*
You can see the impact of the adapter in 8-bit, where it scored 67%; the results are even more visible in HumanEval+, with 8-bit taking a hit and 4-bit staying almost the same. One explanation is that in 4-bit, the prompt format is so impactful (normally in a bad sense, but here it works out positively) that using the Alpaca format makes the adapter's impact almost negligible.
I have been planning to do some routing between different LoRAs, but it seems there is little benefit to doing that in the 4-bit realm instead of just keeping them applied.
It is also the first time 8-bit performed so close to 4-bit, so I would take the results with a grain of salt - I would even question whether there is leakage - but Airoboros performed poorly in the eval, while Phind with the adapter seems to retain its coding abilities with a very visible impact on content-related tasks.
The results also contradict what we can see on the can-ai-code leaderboard, which is my go-to place for reliable comparisons - there, 4-bit quants are doing very poorly.
*PS. I have not tested CodeFuse 34B or their 4-bit GPTQ quant, where they claim 73%.*
One of the replies from Phind + the Airo adapter when asked to summarize Chip Huyen's articles:
>
("### System Prompt\\nYou are an intelligent assistant.\\n\\n### User Message\\nBEGININPUT" + prompt\_summarize + "\\nENDINPUT\\nBEGININSTRUCTION\\nProvide key 5 informative bullets about the article\\nENDINSTRUCTION\\n\\n### Assistant\\n") - mix of Input block from Airo and Phind, however this prompt actually usually behaves good - without LORA included.
>
>1. The article highlights the ongoing challenges and research directions in LLM (Large Language Models) research, emphasizing that while there has been significant progress, there remain several unsolved problems and opportunities for improvement.
>
>2. Topics such as reducing hallucinations, incorporating other data modalities, optimizing context length and construction, making LLMs faster and cheaper, and designing new model architectures are discussed as critical areas for further development.
>
>3. The article touches upon the need for LLMs to be more usable in various applications, including social studies and interactive simulations. It also addresses the issue of learning from human preference, which presents unique challenges due to cultural, linguistic, and demographic diversity.
>
>4. The efficiency of chat interfaces and the feasibility of building LLMs for non-English languages are identified as pressing issues that require attention from both technical and non-technical perspectives.
>
>5. The article concludes by underscoring that some of these challenges may not be solely solvable through technical advancements, but rather necessitate interdisciplinary collaborations between experts from different fields, such as linguistics, UX design, and policy-making.
​
Phind v2, no adapter:
>
("### System Prompt\\nYou are an intelligent assistant.\\n\\n### Input\\n" + prompt\_summarize + "\\n\\n### User Message\\nProvide key 5 informative bullets about the article\\n\\n### Assistant\\n")
1. The article highlights the rapid development and evolution of large language models (LLMs) in recent years, particularly in the area of natural language processing (NLP).
2. It identifies several open challenges in LLM research, such as reducing hallucinations, optimizing context length and construction, incorporating other data modalities, making LLMs faster and cheaper, designing new model architectures, and developing GPU alternatives.
3. The author emphasizes the importance of multimodality in various industries, such as healthcare, robotics, e-commerce, and retail.
4. The article also touches upon the potential drawbacks of chat interfaces, such as limited messages per turn and the absence of multimodal input options.
5. Finally, the author highlights the need for LLMs that can function effectively in non-English languages, given the growing global presence of the internet and the diversity of human languages.
HumanEval+ results:
>Phind v2 - 4bit - no Adapter:
Base
{'pass@1': 0.6829268292682927}
Base + Extra
{'pass@1': 0.6402439024390244}
>
>Phind v2 - 4bit - no Adapter - batch:
Base
{'pass@1': 0.6798780487804879}
Base + Extra
{'pass@1': 0.6615853658536586}
>
>Phind v2 - 4bit - Airoboros 2.2.1 Adapter - batch:
Base
{'pass@1': 0.698170731707317}
Base + Extra
{'pass@1': 0.6554878048780488}
>
>Phind v2 - 4bit+double\_quant - Airoboros 2.2.1 Adapter - batch:
Base
{'pass@1': 0.7103658536585366}
Base + Extra
{'pass@1': 0.6676829268292683}
>
>Phind v2 - 4bit+double\_quant - no Adapter - batch:
Base
{'pass@1': 0.6798780487804879}
Base + Extra
{'pass@1': 0.6615853658536586}
>
>Airoboros 2.2.1 - no Adapter - batch:
Base
{'pass@1': 0.5487804878048781}
Base + Extra
{'pass@1': 0.4817073170731707}
>
>Phind v2 - 8bit - no Adapter - batch 8:
Base
{'pass@1': 0.7012195121951219}
Base + Extra
{'pass@1': 0.6707317073170732}
>
>Phind v2 - 8bit - Airoboros 2.2.1 Adapter - batch 8:
Base
{'pass@1': 0.6707317073170732}
Base + Extra
{'pass@1': 0.6280487804878049}
I think we should focus more on evals and fine-tuning impact below a certain VRAM budget - comparing models in fp16 and then running them locally quantized are completely different things. Apart from that, I hope this gives extra evidence that creating more adapters, rather than full fine-tunes, can make sense. | 2023-10-03T14:08:11 | https://www.reddit.com/r/LocalLLaMA/comments/16yswku/bizzare_effect_of_4_bits_and_lora_in_34b_vs/ | kpodkanowicz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yswku | false | null | t3_16yswku | /r/LocalLLaMA/comments/16yswku/bizzare_effect_of_4_bits_and_lora_in_34b_vs/ | false | false | self | 20 | null |
Why use micro batch size and not just reduce the batch size in general? | 4 | Hi Guys,
I am a little confused as I do not understand the idea of micro batch size.
According to my understanding, in each iteration a batch is fed to the model, and afterwards the gradients are computed, the weights are adjusted, etc.
I read that in order to save resources, it is useful to split the batch into micro-batches: you split it into, say, 4 micro-batches, each micro-batch is fed to the model, its gradients are computed, and so on.
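In code, what I read about looks roughly like this sketch (gradient accumulation over micro-batches), if I understand it correctly:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

batch_x, batch_y = torch.randn(32, 10), torch.randn(32, 1)   # one "full" batch of 32
micro = 8                                                     # micro batch size -> 4 accumulation steps

optimizer.zero_grad()
for i in range(0, 32, micro):
    x, y = batch_x[i:i + micro], batch_y[i:i + micro]
    loss = loss_fn(model(x), y) / (32 // micro)   # scale so the summed gradient matches the full batch
    loss.backward()                               # gradients accumulate in .grad across micro-batches
optimizer.step()                                  # a single weight update for all 32 samples
```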
What is the difference compared to just using a smaller batch size?
It seems the same to me; I can't see the benefit of a micro batch size. | 2023-10-03T14:07:38 | https://www.reddit.com/r/LocalLLaMA/comments/16ysw4r/why_use_micro_batch_size_and_not_just_reduce_the/ | Enkay55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ysw4r | false | null | t3_16ysw4r | /r/LocalLLaMA/comments/16ysw4r/why_use_micro_batch_size_and_not_just_reduce_the/ | false | false | self | 4 | null |
logits for model distillation | 2 | The paper: Orca: Progressive Learning from Complex Explanation Traces of GPT-4
says OpenAI's API gives access to the top-5 logits for each token.
Is this enough for model distillation? Is this access available to anyone, or only Microsoft? Has this been done/used for training or fine-tuning LLaMA?
How to Extract Structured Objects From Llama-2 Responses | 4 | I'm doing some RAG work with Llama-2 13B. For each question I ask the model, I want an open-ended response (so that I can understand the model's reasoning) and a parsed object response.
For example, I may ask the model "How many districts are in the town mentioned in this article?" For this I want an open-ended response with evidence, and an integer response too (so I can analyze the data easily).
I also want some slightly more complicated data structures parsed. For example, I want to ask the model "What is the area for each district in the town?" and get back a list of tuples where the first element of each tuple is the district name and the second is an integer with the area.
Here are the possible approaches I've found:
1. Ask Llama-2 for a JSON response / use multi-shot learning / provide an example (see the validation sketch after this list).
1. There are too many errors with this approach I've found
2. Use Kor
3. Use LMQL
4. Fine-tune Llama-2 to respond in a specific format
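For option 1, the baseline I mean is roughly this sketch (the field names for the district example are made up):

```python
import json
from pydantic import BaseModel, ValidationError

class DistrictArea(BaseModel):
    district: str
    area_km2: int

PROMPT = (
    "What is the area for each district in the town?\n"
    "Answer ONLY with a JSON list of objects with keys 'district' and 'area_km2'."
)

def parse_districts(raw_response: str) -> list:
    """Validate the model's raw text; raises if it is not the expected structure."""
    return [DistrictArea(**item) for item in json.loads(raw_response)]

# A hand-written response standing in for the model output:
raw = '[{"district": "Old Town", "area_km2": 12}, {"district": "Harbor", "area_km2": 7}]'
try:
    print(parse_districts(raw))
except (json.JSONDecodeError, ValidationError) as err:
    print("re-prompt with the error message as feedback:", err)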
Has anyone encountered a similar use case, and what did you go with? Thanks!
How can I merge 2 models? | 5 | So I have 2 models that are very good: synthia-34b-v1.2.Q4\_K\_M is uncensored but not that good for LangChain, while speechless-llama2-hermes-orca-platypus-wizardlm-13b.Q8\_0 is perfect for LangChain but censored. Is there a way to merge them together, to make synthia-34b better or to make speechless-llama2-hermes-orca-platypus-wizardlm-13b uncensored? I have a 3090, if that changes anything | 2023-10-03T13:02:01 | https://www.reddit.com/r/LocalLLaMA/comments/16yrc6c/how_can_i_merge_2_models/ | Otherwise_Weather_57 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yrc6c | false | null | t3_16yrc6c | /r/LocalLLaMA/comments/16yrc6c/how_can_i_merge_2_models/ | false | false | self | 5 | null |
Reimplementing home-baked local "function calling" | 14 | The OpenAI feature I'm referring to is: [https://platform.openai.com/docs/guides/gpt/function-calling](https://platform.openai.com/docs/guides/gpt/function-calling)
In short, the feature consists of:
* a way to define which functions are available (write a list of descriptions for the LLM to look at)
* a way for the LLM to emit a call with parameters for one of those functions
* an execution wrapper that will, for example, execute our "sum(a,b)" function and return a+b to the LLM for the next step in processing (a minimal sketch of this loop follows)
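To make the three pieces concrete, here is a minimal home-baked sketch (the tool names and the JSON convention are arbitrary choices, not any particular library's API):

```python
import json
import re

# 1) Available functions, plus the descriptions the LLM gets to see.
TOOLS = {"sum": {"fn": lambda a, b: a + b, "desc": "sum(a: number, b: number) -> a + b"}}
TOOL_PROMPT = (
    'To call a tool, reply ONLY with JSON like {"tool": "sum", "args": {"a": 1, "b": 2}}.\n'
    "Available tools:\n" + "\n".join(t["desc"] for t in TOOLS.values())
)

def handle_reply(model_reply: str):
    """2) Parse a tool call out of the reply; 3) execute it and return text for the next turn."""
    match = re.search(r"\{.*\}", model_reply, re.S)
    if not match:
        return None                                    # no tool call; treat as a normal answer
    call = json.loads(match.group(0))
    result = TOOLS[call["tool"]]["fn"](**call["args"])
    return f"Result of {call['tool']}: {result}"       # appended to the context for the next step

# Stand-in for a real model reply:
print(handle_reply('{"tool": "sum", "args": {"a": 2, "b": 3}}'))   # Result of sum: 5
```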
What alternatives do exist in the wild and how could we approach adding this feature to our local LLM code? | 2023-10-03T12:53:45 | https://www.reddit.com/r/LocalLLaMA/comments/16yr5g8/reimplementing_homebaked_local_function_calling/ | BigHearin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yr5g8 | false | null | t3_16yr5g8 | /r/LocalLLaMA/comments/16yr5g8/reimplementing_homebaked_local_function_calling/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'XjYeH9UKcgyOWN9BfsAvdRtig11MoY4NbGzf8T8tohc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=108&crop=smart&auto=webp&s=3f9105ef3e7b58900470546335a78fe533005bb4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=216&crop=smart&auto=webp&s=e310d9b29c06d6b5ce641a7f1506dd263a8729f6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=320&crop=smart&auto=webp&s=69281c1b38ced0e01cea5a12968d4b48674c98a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=640&crop=smart&auto=webp&s=9396991084b19bf32b78fe6cd3e4cde78b575245', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=960&crop=smart&auto=webp&s=91c96b3a35c2ea5ffe4533d22d45928392c76bc9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?width=1080&crop=smart&auto=webp&s=57787310c9a5f80fe35172195dbc646aa4473b50', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nEedMLn73AsJU6AgxrHz6cvr0_KUZAuY26VUpixUKp8.jpg?auto=webp&s=5552c2aeef00909b24d8e8eed641cda7ae28bb4b', 'width': 1200}, 'variants': {}}]} |
Best RD software for LLaMa on a Mac | 1 | - I'm using a high-specced MacBook just for LocalLLaMA/LM Studio
- Otherwise I'm on Windows and pretty bothered by every bit of macOS
- So, I need some remote desktop software with which I can remotely control the MacBook from my Windows machine(s); any ideas?
Removing causal mask of LLaMA? | 4 | The causal mask in LLaMA serves as a protective barrier to prevent information leakage. However, in certain tasks, leveraging information leakage can be a beneficial strategy for enhancing performance, particularly in tasks like token classification such as NER. Interestingly, the paper "Label Supervised LLaMA Finetuning" (https://arxiv.org/abs/2310.01208) reveals a significant performance boost in token classification when the causal mask is removed. | 2023-10-03T11:48:57 | https://www.reddit.com/r/LocalLLaMA/comments/16ypruh/removing_causal_mask_of_llama/ | seanlee97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ypruh | false | null | t3_16ypruh | /r/LocalLLaMA/comments/16ypruh/removing_causal_mask_of_llama/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
LLM for German Conversations | 17 | Hey,
I'm looking for a fast LLM for everyday chat conversations in German. It should be as fast as possible, as close to real time as possible, for normal chatting. I'm using text-generation-webui to host the models. The webui is currently deployed on a server with a GPU with 36 GB VRAM, but I want to downgrade to an NVIDIA RTX 3080 with 16GB VRAM.
Any help or recommendations are deeply appreciated
| 2023-10-03T10:06:42 | https://www.reddit.com/r/LocalLLaMA/comments/16ynt3r/llm_for_german_conversations/ | aldur15 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ynt3r | false | null | t3_16ynt3r | /r/LocalLLaMA/comments/16ynt3r/llm_for_german_conversations/ | false | false | self | 17 | null |
Is there quantized BART anywhere? | 7 | There's a quantized version of every model imaginable, but somehow I'm having a hard time finding a quantized BART anywhere. Inference with the regular version is too slow for my use case.
I've only found [this repo](https://github.com/amazon-science/dq-bart), which is code to do it yourself, so I rented a server with an RTX 3090, thinking it's a small model and I'd be done in a couple of hours. But it shows 65 hours lol
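A cheaper route than a full DQ-BART run is PyTorch's post-training dynamic quantization, at the cost of being int8-on-CPU only; a sketch (the checkpoint name is just an example):

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

model_id = "facebook/bart-large-cnn"   # example checkpoint; any BART works
tokenizer = BartTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

# Replace the Linear layers with int8 dynamically-quantized versions (CPU inference only).
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

inputs = tokenizer("Dynamic quantization shrinks the linear layers to int8 at load time.", return_tensors="pt")
ids = quantized.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```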
Anyway, title | 2023-10-03T10:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/16ynr9c/is_there_quantized_bart_anywhere/ | heyimpumpkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ynr9c | false | null | t3_16ynr9c | /r/LocalLLaMA/comments/16ynr9c/is_there_quantized_bart_anywhere/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': '0nT1ShYTmSgNRuhRePglMWqeSBfA41KUaUfh0J7hIHY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xw8rG3EUUAvgxlGuXhd9SLNcboEUlD6-pZJEuw4N8i8.jpg?width=108&crop=smart&auto=webp&s=c46e3c04b62d3a2b1f423c1f3cea5cda315b328f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xw8rG3EUUAvgxlGuXhd9SLNcboEUlD6-pZJEuw4N8i8.jpg?width=216&crop=smart&auto=webp&s=961101c1b9ac0bec1df0884f3a065ce9d08c1864', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xw8rG3EUUAvgxlGuXhd9SLNcboEUlD6-pZJEuw4N8i8.jpg?width=320&crop=smart&auto=webp&s=a949b488d18858d803a6d31f595a5308020e3fd9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xw8rG3EUUAvgxlGuXhd9SLNcboEUlD6-pZJEuw4N8i8.jpg?width=640&crop=smart&auto=webp&s=f15bd817c3af18439e3163dbb9e3460fcb5b3ca7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xw8rG3EUUAvgxlGuXhd9SLNcboEUlD6-pZJEuw4N8i8.jpg?width=960&crop=smart&auto=webp&s=d82858e28e7ca81d2c20e002172b05edb3ed09d7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xw8rG3EUUAvgxlGuXhd9SLNcboEUlD6-pZJEuw4N8i8.jpg?width=1080&crop=smart&auto=webp&s=6af739964bbb3a5d10768b2a0c557372368a54c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xw8rG3EUUAvgxlGuXhd9SLNcboEUlD6-pZJEuw4N8i8.jpg?auto=webp&s=885bbf5821d2d148e2acf5c29c44f18855be1afd', 'width': 1200}, 'variants': {}}]} |
Training Opensource LLM on lyrics? | 3 | Hi,
Apologies for the naïve question.
I wanted to see if I could train Mistral on lyrics.
To teach it slant rhymes and more complicated structures in poetry.
ChatGPT4 is quite simplistic and cliché in its rhyming. I thought it might be possible (and cheaper) to improve upon it and use an OpenSource model instead.
My main query is, what is the dataset supposed to look like?
I found some chatbot datasets, but for lyrics, I'm not sure.
I'm experienced with other AI tools, but fine-tuning LLM's is new for me.
Maybe someone can give me some pointers or resources on how to approach this.
Or tell me it's not a feasible idea.
Thanks! | 2023-10-03T09:49:07 | https://www.reddit.com/r/LocalLLaMA/comments/16ynhy5/training_opensource_llm_on_lyrics/ | FreshlyBakedMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ynhy5 | false | null | t3_16ynhy5 | /r/LocalLLaMA/comments/16ynhy5/training_opensource_llm_on_lyrics/ | false | false | self | 3 | null |
Any GGUF 7B to use in Spanish? | 2 | As the title says; I've had problems using models in Spanish.
I'll use LLaMA.cpp
Thank you | 2023-10-03T08:25:38 | https://www.reddit.com/r/LocalLLaMA/comments/16ym7x6/any_gguf_7b_to_use_in_spanish/ | evillarreal86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ym7x6 | false | null | t3_16ym7x6 | /r/LocalLLaMA/comments/16ym7x6/any_gguf_7b_to_use_in_spanish/ | false | false | self | 2 | null |
Cheaper cloud alternatives to train an LLM for educational purposes | 6 | I am giving a workshop in my town in a few weeks, and the topic is how LLMs are built, the transformer architecture, etc. I will show live how to build a small, useless language model from scratch. (I mostly followed several tutorials myself - especially Andrej Karpathy’s.)
So, the whole purpose is educational. I don’t intend to build something usable, just to show that a prompt can be continued with a relatively less messed-up grammatical structure. It can hallucinate all it wants.
I am using some WikiText material collected from other repos. The model is perhaps tens of millions of parameters. I have written the code, but even with that small parameter count, training takes the whole night on my 16GB M1 Mac, and I feel like I need a larger batch size and so on. Working as a programmer at some low-paid European company doesn’t help with resources lol.
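For rough budgeting, the standard 6 × params × tokens estimate of training FLOPs gives a sense of the rental cost (all numbers below are assumptions):

```python
params = 30e6            # ~30M-parameter toy model
tokens = 300e6           # tokens seen during training
train_flops = 6 * params * tokens                     # standard 6*N*D estimate

gpu_flops = 30e12        # assumed sustained throughput of a rented consumer GPU (FLOP/s)
utilization = 0.3        # small models rarely keep the GPU busy
hours = train_flops / (gpu_flops * utilization) / 3600
price_per_hour = 0.40    # assumed rental price in euros
print(f"~{hours:.1f} GPU-hours, ~{hours * price_per_hour:.2f} EUR")   # well under the budget
```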
So what alternatives do you suggest for this purpose? I want to be done with it for around 100 euros. I was considering vast.ai, but I am also concerned about hidden charges that would later bill me 10K for something stupid that doesn’t matter much. | 2023-10-03T08:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/16ylxtq/cheaper_cloud_alternatives_to_train_llm_for/ | besabestin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ylxtq | false | null | t3_16ylxtq | /r/LocalLLaMA/comments/16ylxtq/cheaper_cloud_alternatives_to_train_llm_for/ | false | false | self | 6 | null |
Full-finetuning Llama models | 1 | [removed] | 2023-10-03T08:01:30 | https://www.reddit.com/r/LocalLLaMA/comments/16ylumr/fullfinetuning_llama_models/ | Impossible_Storm8768 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ylumr | false | null | t3_16ylumr | /r/LocalLLaMA/comments/16ylumr/fullfinetuning_llama_models/ | false | false | self | 1 | null |
I am trying to use LLAMA index RAG with my own fine-tuned model, it's giving gibberish answer? Any idea on how to use RAG with custom fine-tune? | 1 | [removed] | 2023-10-03T08:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/16yltyz/i_am_trying_to_use_llama_index_rag_with_my_own/ | Anu_Rag9704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yltyz | false | null | t3_16yltyz | /r/LocalLLaMA/comments/16yltyz/i_am_trying_to_use_llama_index_rag_with_my_own/ | false | false | 1 | null | |
Is there a retrieval method that can get relevant context for complex queries | 5 | Hi all,
I am building a Retrieval-Augmented Generation system that answers user queries using the documents at hand.
I was wondering if there is any method that would handle complex questions, for example "who was the president of xyz country when Argentina won its latest FIFA World Cup".
In this example, the data might not be present in a single chunk; there will be multiple chunks with different context, because the retriever has to find a chunk that gives "the year in which Argentina won its latest FIFA World Cup" and also one that gives "xyz country's president in 2022".
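The simplest baseline I can think of is decomposing the query into sub-queries and retrieving per hop, something like this sketch (`llm` and `retrieve` are placeholders for whatever generator and retriever you already have):

```python
def multi_hop_answer(question, llm, retrieve, hops=2):
    """Decompose a complex question: ask the LLM what to look up next, retrieve, repeat.

    `llm(prompt)` returns text and `retrieve(query, k)` returns a list of text chunks;
    both are placeholders for an existing generation/retrieval stack.
    """
    context = []
    for _ in range(hops):
        sub_query = llm(
            f"Context so far: {context}\nQuestion: {question}\n"
            "What single fact should be looked up next? Reply with a short search query."
        )
        context.extend(retrieve(sub_query, k=3))
    return llm(f"Context: {context}\nAnswer the question: {question}")
```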
I have read some research papers, but most of them train the retrieval part with a neural network on a dataset of relevant chunks for a given query, which I don't have (nor the resources). [https://arxiv.org/pdf/2308.08973.pdf](https://arxiv.org/pdf/2308.08973.pdf) (Beam Retrieval has good results on benchmark datasets)
For this case, would a knowledge graph help? I didn't find any resource on the internet that uses a KG for RAG (with an implementation).
Do you guys think normal semantic search could handle this? | 2023-10-03T07:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/16ylg7s/is_there_a_retrieval_method_that_can_get_relevant/ | vile_proxima | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ylg7s | false | null | t3_16ylg7s | /r/LocalLLaMA/comments/16ylg7s/is_there_a_retrieval_method_that_can_get_relevant/ | false | false | self | 5 | null |
so.. is it AI now or just a book with talking interface? my thinking.. | 0 | so my thinking is:
imagine an infinite Excel sheet where, in column one, you have every question that exists and, in column two, you have the appropriate answer.
talking with "AI" like this can make you feel that it is very intelligent, but in reality, it is just preprogramed what to answer so it is not "AI", just infinite list of answers.
so real AI is a system which comes up with its own ideas, right?
like it would propose totally new inventions which were NOT programmed into it -> but here is the thing: I'm playing with a 65B and it is GREAT.. but.. it just cannot come up with any new invention, so I would say it is "just" very book smart, with no ability to create something totally new.
​
is there any AI model which is capable of that, or are today's models just very smart big books with talking interfaces? do you have any ideas for prompts to test if a model can really make up something totally new? | 2023-10-03T07:24:22 | https://www.reddit.com/r/LocalLLaMA/comments/16ylai7/so_is_it_ai_now_or_just_a_book_with_talking/ | ovnf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ylai7 | false | null | t3_16ylai7 | /r/LocalLLaMA/comments/16ylai7/so_is_it_ai_now_or_just_a_book_with_talking/ | false | false | self | 0 | null |
Any way to host finetuned LLMs and only pay for what you use? | 2 | Hey all,
I have made a couple of LoRA models recently that I would love to use in chatbots and that kind of thing, but I don’t want to be paying by the hour. Are there any options out there for me to call my models via an API and only pay for what I use? I have thought about using an AWS Lambda, but I imagine that means I’ll have to wait for the model to load every time I call it | 2023-10-03T07:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/16yl9lg/any_way_to_host_finetuned_llms_and_only_pay_for/ | Chance_Confection_37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yl9lg | false | null | t3_16yl9lg | /r/LocalLLaMA/comments/16yl9lg/any_way_to_host_finetuned_llms_and_only_pay_for/ | false | false | self | 2 | null |
Where can I find llama 2 uncensored hosted on the cloud for free? | 1 | on [https://huggingface.co/chat/](https://huggingface.co/chat/), we can use code llama 34b, llama 2 70b, and even Falcon 180b for free, and I was wondering if there is anywhere I can also find llama 2 uncensored. I don't mean using GCP, AWS, or Lambda (not free / temporary free trial), but actually free, since my laptop doesn't even have a dedicated GPU. | 2023-10-03T06:27:22 | https://www.reddit.com/r/LocalLLaMA/comments/16ykfnu/where_can_i_find_llama_2_uncensored_hosted_on_the/ | Cheems63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ykfnu | false | null | t3_16ykfnu | /r/LocalLLaMA/comments/16ykfnu/where_can_i_find_llama_2_uncensored_hosted_on_the/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=108&crop=smart&auto=webp&s=732d76370b97811ab9fb6fed1800fef253ff5141', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=216&crop=smart&auto=webp&s=52d55920ecb4ddd3b9a6a3f1f377b1e947338e76', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=320&crop=smart&auto=webp&s=87995474933baa503604de4c36087e982c1556f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=640&crop=smart&auto=webp&s=b15bbf288cd406be52bb70ea93bac1eeb4e4b7b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=960&crop=smart&auto=webp&s=6d645adfc183d0fc80bf7a259b180d40c1d08007', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?width=1080&crop=smart&auto=webp&s=f355db7c39116418e70484205c54fb53d8e5c262', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YQI_ej5EergATLgemo1vMLg5Y9oURFiOHZhDYundV9o.jpg?auto=webp&s=acc428a35d715d5e8eaec455e35020dcd81cf080', 'width': 1200}, 'variants': {}}]} |
Does quantization speed up or slow down inference for an LLM? | 1 | Hi everyone, according to Google's Bard, quantization of an LLM speeds up inference, but in my personal experience, quantizing an LLM with a "torch\_dtype" of "float16" (e.g. lmsys/vicuna-7b-v1.5) by loading it in 8-bit slows down inference. Could someone enlighten me as to which is correct? And if so, why? Thank you! | 2023-10-03T06:25:22 | https://www.reddit.com/r/LocalLLaMA/comments/16ykeo4/does_quantization_speed_or_slow_down_inference/ | ToeAdministrative493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ykeo4 | false | null | t3_16ykeo4 | /r/LocalLLaMA/comments/16ykeo4/does_quantization_speed_or_slow_down_inference/ | false | false | self | 1 | null |
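A quick way to measure this directly on one's own setup (a sketch; load the fp16 model, time it, free it, then load the 8-bit version):

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-7b-v1.5"
tok = AutoTokenizer.from_pretrained(model_id)

def tokens_per_second(model, new_tokens=64):
    inputs = tok("Explain quantization in one paragraph.", return_tensors="pt").to(model.device)
    start = time.time()
    model.generate(**inputs, max_new_tokens=new_tokens)
    return new_tokens / (time.time() - start)

fp16 = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
print("fp16:", tokens_per_second(fp16))
del fp16
torch.cuda.empty_cache()

int8 = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map="auto")
print("int8:", tokens_per_second(int8))   # bitsandbytes int8 often trades speed for memory savings
```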
Advice for prompt design for unit conversion | 1 | [removed] | 2023-10-03T05:25:19 | https://www.reddit.com/r/LocalLLaMA/comments/16yjgp1/advice_for_prompt_design_for_unit_conversion/ | LLM4_robotics | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yjgp1 | false | null | t3_16yjgp1 | /r/LocalLLaMA/comments/16yjgp1/advice_for_prompt_design_for_unit_conversion/ | false | false | self | 1 | null |
Looking for Advice on an End-to-End Workflow for training a specialized LLM to run locally | 2 | I am trying to understand the overall methodology for training a specific LLM, from the beginning through fine-tuning. I have a use case described below along with what I ***think*** is the workflow. Please correct me. (The use case here is publicly available books and literature, but obviously it could be used for different categories and different books.)
**Use case:**
I want to have an LLM that can be run locally on a reasonably fast laptop (say discrete Nvidia card with 8GB of memory, 64GB Ram, AMD Risen 9 (or similar Intel i7) with 4TB of storage.
I am going to create this for a seminary student. I want to train the LLM using 2 versions of the Bible and Calvin’s Institutes (A systematic theology book of about 1000 pages.) The goal is to have a specific LLM that the student can have conversations with and get answers using these resources only. I don’t want it to have bleed over from other religions or theology systems.
***NOTE: I know what I am about to say is wrong. I am hoping you guys can tell me where I go off of the rails, and what I should do instead… And, also I would love some details on correct transformers to use, tokenizers, models, etc. Right now, I think this is what I would do...***
**My Presumed Workflow:**
So, I start with a smallish LLM. (*IDK what… Orca, Dolly 2, Mistral… something small enough to be trained on limited data on prosumer computer system, or RunPod or something like that...?*)
I split my texts up into paragraphs, ensuring none of them are over 750 tokens. I put these into segments that include the book, page number, and paragraph. (*Do I use JSON to represent this? Does it matter?*) I run a program which tokenizes them and adds them to the LLM as embeddings.
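A sketch of one way those segments could be represented (JSON Lines with metadata; the field names and the whitespace-word token approximation are arbitrary choices):

```python
import json

def chunk_paragraphs(paragraphs, book, max_tokens=750):
    """Yield JSON-serializable segments, splitting any paragraph that runs too long.

    A "token" is approximated as a whitespace-separated word here; a real pipeline
    would count with the model's own tokenizer instead.
    """
    for page, paragraph in paragraphs:                     # paragraphs: [(page_number, text), ...]
        words = paragraph.split()
        for i in range(0, len(words), max_tokens):
            yield {"book": book, "page": page, "text": " ".join(words[i:i + max_tokens])}

sample = [(1, "First paragraph of the text ..."), (2, "Second paragraph of the text ...")]
with open("chunks.jsonl", "w") as f:
    for record in chunk_paragraphs(sample, book="Institutes"):
        f.write(json.dumps(record) + "\n")
```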
Once done, I also train it using some Chat data so it is more conversant. (*Or maybe I just start with a version that has already done this...?*)
Now I need to fine tune it to better answer questions.
I go to Stack Overflow for Bible and Hermeneutics questions, using the reformed theology tags. I get several hundred questions. (*I don't care about the answers at this point, because I want my LLM to answer them*)
I run these through the LLM, gathering the answers, and saving them.
I take these question and answer pairs and run them through ChatGPT (using the API) and ask it to score the answer based on its understanding of the subjects (it seems to have been trained on both the Bible and Calvin’s Institutes)
Using these, I create fine tuning data (*not exactly sure what this looks like*) to help the system get smarter about answering questions.
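As a sketch, the fine-tuning records could be as simple as instruction/output pairs kept only when the grader scored them well (the JSON shape here is one common convention, not the only one):

```python
import json

# (question, model_answer, grader_score) triples gathered in the previous steps
qa_pairs = [
    ("What does Calvin mean by the knowledge of God?", "A model-written answer ...", 8),
    ("Summarize Book 1, Chapter 1 of the Institutes.", "A weak answer ...", 3),
]

records = [{"instruction": q, "output": a} for q, a, score in qa_pairs if score >= 6]  # threshold is arbitrary
with open("finetune.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```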
After that… IDK… just keep repeating with new questions gleaned from a variety of sources? I’m a little lost at this point.
But it seems that the initial embeddings are different than the fine tuning. Is this right? How reasonable is it to get a fully functioning LLM in a niche area like this?
Any guidance would be appreciated. (And you can tell me to RTFM, as well... but if you do, can you tell me what manual that is...) | 2023-10-03T03:49:03 | https://www.reddit.com/r/LocalLLaMA/comments/16yhqt5/looking_for_advice_on_end_to_end_workflow_for/ | Ok_Organization_4295 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yhqt5 | false | null | t3_16yhqt5 | /r/LocalLLaMA/comments/16yhqt5/looking_for_advice_on_end_to_end_workflow_for/ | false | false | self | 2 | null |
Not enough GPU? | 4 | So my PC specs are:
\-1650 ti 4GB
\-16 GB RAM
And I got this error while trying to load the model
2023-10-03 00:16:08 ERROR:Failed to load the model.
Traceback (most recent call last):
File "C:\\Users\\migat\\text-generation-webui\\modules\\ui\_model\_menu.py", line 201, in load\_model\_wrapper
shared.model, shared.tokenizer = load\_model(shared.model\_name, loader)
File "C:\\Users\\migat\\text-generation-webui\\modules\\[models.py](https://models.py)", line 78, in load\_model
output = load\_func\_map\[loader\](model\_name)
File "C:\\Users\\migat\\text-generation-webui\\modules\\[models.py](https://models.py)", line 209, in huggingface\_loader
model = LoaderClass.from\_pretrained(path\_to\_model, \*\*params)
File "C:\\Users\\migat\\.conda\\envs\\ooga\\lib\\site-packages\\transformers\\models\\auto\\auto\_factory.py", line 571, in from\_pretrained
return model\_class.from\_pretrained(
File "C:\\Users\\migat\\.conda\\envs\\ooga\\lib\\site-packages\\transformers\\modeling\_utils.py", line 3240, in from\_pretrained raise ValueError(
ValueError:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in 32-bit, you need to set \`load\_in\_8bit\_fp32\_cpu\_offload=True\` and pass a custom
\`device\_map\` to \`from\_pretrained\`. Check
[https://huggingface.co/docs/transformers/main/en/main\_classes/quantization#offload-between-cpu-and-gpu](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu)
for more details.
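For reference, the offload behavior the error message points to can be enabled roughly like this (a sketch using the BitsAndBytesConfig equivalent of the named flag; even with offload, a 4 GB card will be very slow, and a small GGUF model with llama.cpp may be the more practical route):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,   # lets layers that don't fit in VRAM stay on the CPU in fp32
)
model = AutoModelForCausalLM.from_pretrained(
    "model-name-here",                        # placeholder for the model being loaded
    quantization_config=quant_config,
    device_map="auto",                        # accelerate places layers across GPU, CPU and disk
)
```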
​
I just learned yesterday how to use conda and load models, but I ran into this issue which I'm not really sure how to fix with conda. I would greatly appreciate any support! | 2023-10-03T03:26:29 | https://www.reddit.com/r/LocalLLaMA/comments/16yhae3/not_enought_gpu/ | Cold-Risk-2513 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yhae3 | false | null | t3_16yhae3 | /r/LocalLLaMA/comments/16yhae3/not_enought_gpu/ | false | false | self | 4 | null |
GPU makes a fast electronic noise when running inference | 5 | Why do GPUs make a noise when running inference? Is it just coil whine? It's like an electronic pulsing noise that's really fast. My old 3090 did it and now my A5000 does it too. | 2023-10-03T02:58:17 | https://www.reddit.com/r/LocalLLaMA/comments/16ygpix/gpu_makes_a_fast_electronic_noise_when_running/ | Rollingsound514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ygpix | false | null | t3_16ygpix | /r/LocalLLaMA/comments/16ygpix/gpu_makes_a_fast_electronic_noise_when_running/ | false | false | self | 5 | null |
Running meta llama-2-13b model on AWS g4dn.2xlarge instance | 1 | [removed] | 2023-10-03T02:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/16ygm8v/running_meta_llama213b_model_on_aws_g4dn2xlarge/ | duffy0016 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ygm8v | false | null | t3_16ygm8v | /r/LocalLLaMA/comments/16ygm8v/running_meta_llama213b_model_on_aws_g4dn2xlarge/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RUqh18uQTwuGJocqdUcC-6UfvfWS63SRDdr8AQqU3uM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Q6q5DgwzcfU6chKw4uevixQcBq1Ipi2NYLheQ4lv0Vk.jpg?width=108&crop=smart&auto=webp&s=64a1b1322ed94c559cb213e6a08f3eb426a3fb0b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Q6q5DgwzcfU6chKw4uevixQcBq1Ipi2NYLheQ4lv0Vk.jpg?width=216&crop=smart&auto=webp&s=9edddfdb28bb0e92ceb041859aacef81ab9ed42e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Q6q5DgwzcfU6chKw4uevixQcBq1Ipi2NYLheQ4lv0Vk.jpg?width=320&crop=smart&auto=webp&s=de73cdc9da2d0b04938bb7d051ab1a3ceb783323', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Q6q5DgwzcfU6chKw4uevixQcBq1Ipi2NYLheQ4lv0Vk.jpg?width=640&crop=smart&auto=webp&s=60037829d2ce04de0705a2b45123d8ab7c12d41c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Q6q5DgwzcfU6chKw4uevixQcBq1Ipi2NYLheQ4lv0Vk.jpg?width=960&crop=smart&auto=webp&s=e5a8f0da08b9281c578a8ab6f49a5b3f577ec9b8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Q6q5DgwzcfU6chKw4uevixQcBq1Ipi2NYLheQ4lv0Vk.jpg?width=1080&crop=smart&auto=webp&s=0a9a9c1c38bd543a7ea6b718e139a9c1e6b62d18', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Q6q5DgwzcfU6chKw4uevixQcBq1Ipi2NYLheQ4lv0Vk.jpg?auto=webp&s=8afacfc14dfed09cec0415cac7d36db9c3374c61', 'width': 1200}, 'variants': {}}]} |
How do you get Llama 2 to fix a mistaken approach, or change strategies? | 1 | I was looking at the WizardMath-13b model, because it does seem better than some of the others at math, but it seems to get stuck in some cases (and I'm guessing this is not specific to that model). I asked it to solve a probability question about the odds of surviving 3 trigger pulls of Russian roulette. It went through a big recitation of what it was doing, and did come up with the right answer of 125/216 after some nudges.
But then things went horribly wrong. It converted 125/216 to a percentage with some inaccuracy, saying it was 58.04%. I told it this was inaccurate and to recalculate that part. Instead it repeated the entire derivation of 125/216. It refused to talk about the conversion into a percentage that I was questioning. Everything I said - whether in a logical or angry way - just caused it to repeat the original solution up to the 125/216 point. Nothing caused it to explain how it got 58.04%. In order to get out of the situation I had to reload it.
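For reference, the exact conversion is easy to check:

```python
from fractions import Fraction

p_survive = Fraction(5, 6) ** 3          # three independent pulls, one bullet in six chambers
print(p_survive)                          # 125/216
print(f"{float(p_survive):.4%}")          # 57.8704%, not the 58.04% the model gave
```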
So, how can I get Llama 2 to drop whatever it's stuck in and accept new orders or actually focus on what I'm talking about rather than a problem it already solved? From additional attempts I gather the issue was just that it's strangely inaccurate for a computer when performing division, but it should've just explained that - that LLMs do not calculate like a calculator. But instead it totally ignored me.
Note that I was using the chat-instruct mode. Not sure if that's relevant. | 2023-10-03T02:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/16yg2vr/how_do_you_get_llama_2_to_fix_a_mistaken_approach/ | SuperDuperDave5000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16yg2vr | false | null | t3_16yg2vr | /r/LocalLLaMA/comments/16yg2vr/how_do_you_get_llama_2_to_fix_a_mistaken_approach/ | false | false | self | 1 | null |