title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Big update of the awesome-ml list you might find useful | 18 | 2023-10-22T19:18:14 | https://github.com/underlines/awesome-ml | _underlines_ | github.com | 1970-01-01T00:00:00 | 0 | {} | 17e0wqa | false | null | t3_17e0wqa | /r/LocalLLaMA/comments/17e0wqa/big_update_of_the_awesomeml_list_you_might_find/ | false | false | default | 18 | null | |
Question-Answer for multiple CSVs | 3 | What's the state of the art for doing Q&A in natural language over multiple pieces of tabular data? E.g., if you have multiple CSVs or databases and want to do Q&A over them, you can't do simple RAG, so I'd love to learn the best way to do Q&A, or even fine-tune on them. | 2023-10-22T18:45:20 | https://www.reddit.com/r/LocalLLaMA/comments/17e06w3/questionanswer_for_multiple_csvs/ | marquisdepolis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17e06w3 | false | null | t3_17e06w3 | /r/LocalLLaMA/comments/17e06w3/questionanswer_for_multiple_csvs/ | false | false | self | 3 | null |
Personal Guru model | 3 | Hello to all the members of the Sub!
I have an idea about training a "Personal Psychotherapist/Guru" model for offline inference on my rig, which includes a 3090 (24 GB), a 3060 (12 GB), a Ryzen 5980x, and 128 GB of RAM.
While I'm not well-versed in the intricacies of LLM training, here's my initial approach:
1. Opt for the most suitable base model.
2. Train it using specific books and materials:
* Incorporate renowned psychologists like Sigmund Freud, Carl Jung, Erich Fromm, Abraham Maslow, etc., with high importance.
* Integrate modern thought leaders with medium importance, such as Nassim Taleb, Yuval Noah Harari, Dale Carnegie, Elon Musk, Steve Jobs, and others.
* Add a touch of esoteric wisdom with lower importance, drawing from Yoga texts, the Taoist Tao Te Ching, Osho, Gurdjieff, and more.
I would greatly appreciate your advice on:
A) Identifying the best base model in terms of reasoning quality, model size, and inference speed.
B) Tools for training local models.
C) Insights on preparing training data.
| 2023-10-22T18:10:55 | https://www.reddit.com/r/LocalLLaMA/comments/17dzfdg/personal_guru_model/ | SignalEquivalent9386 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dzfdg | false | null | t3_17dzfdg | /r/LocalLLaMA/comments/17dzfdg/personal_guru_model/ | false | false | self | 3 | null |
[Termux] 4k Mistral on a 6GB phone | 1 | If you close background apps, you can fit 4k context on a 6 GB Android phone thanks to GQA.
Mistral:
```
Q3_K_M
3.91 bpw
3.28 GB model
0.512 GB KV self size (GQA)
0.294 GB compute buffer
```
Llama 2:
```
Q3_K_M
3.89 bpw
3.145 GB model
2.048 GB KV self size
0.294 GB compute buffer
```
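For reference, here's the rough arithmetic behind those KV cache figures (a sketch assuming 32 layers, head dim 128, fp16, and 8 KV heads for Mistral's GQA vs 32 for Llama 2):

```python
def kv_cache_mib(layers, n_kv_heads, head_dim, n_tokens, bytes_per_elem=2):
    # bytes = layers * 2 (K and V) * kv heads * head dim * bytes/elem * tokens
    return layers * 2 * n_kv_heads * head_dim * bytes_per_elem * n_tokens / 2**20

print(kv_cache_mib(32, 8, 128, 4096))   # Mistral 7B with GQA -> 512 MiB
print(kv_cache_mib(32, 32, 128, 4096))  # Llama 2 7B full MHA -> 2048 MiB
```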
You can't fit Q4_K_M in 6 GB; trying to do so makes the program run the model from storage.
The resulting speed penalty is significant: Q3_K_M was 1.36x faster than Q4_K_M in my test. | 2023-10-22T17:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/17dz5fm/termux_4k_mistral_on_6gb_phone/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dz5fm | false | null | t3_17dz5fm | /r/LocalLLaMA/comments/17dz5fm/termux_4k_mistral_on_6gb_phone/ | false | false | self | 1 | null |
【CausalLM/14B & 7B】 LLaMAfied Qwen 14B & 7B, uncensored, finetuned, outperform original ones. Multimodal friendly. | 34 | [https://huggingface.co/CausalLM/14B](https://huggingface.co/CausalLM/14B)
[https://huggingface.co/CausalLM/7B](https://huggingface.co/CausalLM/7B)
LLaMAfied Qwen 14B & 7B, uncensored, fine-tuned on all the good datasets we could find; they outperform the original Qwen models. Multimodal friendly.
> This model was trained based on the model weights of Qwen and LLaMA2. The training process utilized a model structure identical to LLaMA2, using the same attention calculation method as the original MHA LLaMA2 models, with no additional scaling applied to the Rotary Position Embedding (RoPE).
Use it the same way as LLaMA2 13B & 7B, chatml prompt. | 2023-10-22T17:47:27 | https://www.reddit.com/r/LocalLLaMA/comments/17dyw6l/causallm14b_7b_llamafied_qwen_14b_7b_uncensored/ | No-Link-2778 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dyw6l | false | null | t3_17dyw6l | /r/LocalLLaMA/comments/17dyw6l/causallm14b_7b_llamafied_qwen_14b_7b_uncensored/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'j9hm45x7eCqfdxsuKV4jizhkToeiq0NNpJtw0S9VbT0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mIkMWQ3sgtTRz76QsQM_IxpmTdtAEmskxBlIrhHSZ6U.jpg?width=108&crop=smart&auto=webp&s=2061502fdc7ac9c27673d3d445b6db277276670b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mIkMWQ3sgtTRz76QsQM_IxpmTdtAEmskxBlIrhHSZ6U.jpg?width=216&crop=smart&auto=webp&s=a61490f7a6c61dbbe33ddadb63fb111c26eea6f7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mIkMWQ3sgtTRz76QsQM_IxpmTdtAEmskxBlIrhHSZ6U.jpg?width=320&crop=smart&auto=webp&s=45093e69e052f1a1b599175b6304e2eb81757de4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mIkMWQ3sgtTRz76QsQM_IxpmTdtAEmskxBlIrhHSZ6U.jpg?width=640&crop=smart&auto=webp&s=e07372d5b11191e9e7a0ef5df2a552595d698d88', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mIkMWQ3sgtTRz76QsQM_IxpmTdtAEmskxBlIrhHSZ6U.jpg?width=960&crop=smart&auto=webp&s=6e3a097de103c7de8003cbf974e29e75524a9c03', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mIkMWQ3sgtTRz76QsQM_IxpmTdtAEmskxBlIrhHSZ6U.jpg?width=1080&crop=smart&auto=webp&s=42bbe90aecb6f0a09a4b5a1d46e1dc57c6f0217b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mIkMWQ3sgtTRz76QsQM_IxpmTdtAEmskxBlIrhHSZ6U.jpg?auto=webp&s=570a9b6dc4afd41c162bf56e070705ccf9606588', 'width': 1200}, 'variants': {}}]} |
Guide for Llama2 70b model merging and exllama2 quantization | 32 | I've seen posts in this subreddit before from people asking about model merging, and I don't recall ever seeing a response outlining the process. I just completed the process myself for the first time. I want to give back to this wonderful community with a high-level overview of the process for anyone who is still wondering how to do it. I also invite anyone with more experience to chime in with corrections, suggestions, tips and tricks, etc.
>NOTE: This guide covers my experience with merging two Llama2 70b models, but everything should be the same (and easier) with smaller Llama models. I'm not sure about Mistral. Hopefully someone will chime in with advice specific to Mistral merges.
**Step 1 - Get the script**
All credit goes to Gryphe, the author of the gradient merge script available at [https://github.com/Gryphe/BlockMerge_Gradient/tree/main](https://github.com/Gryphe/BlockMerge_Gradient/tree/main)
This script worked for me. I'm curious whether people are using other scripts to do merges and if so, I hope people will share their experiences.
**Step 2 - Get the models (full precision)**
You will need the full-precision model weights for the merge process. Make sure you have enough disk space for them because they are hefty at the 70b parameter level. If anyone has a process for merging quantized models, I'd love to hear about it.
**(Optional) Step 3 - Prepare your environment**
Merging two 70b parameter models requires significant memory and disk space. I have my LLM environment set up in Ubuntu running on WSL on my Windows desktop. I had to expand my virtual disk and modify my WSL config to provide additional memory and swap space.
[How to manage WSL disk space](https://learn.microsoft.com/en-us/windows/wsl/disk-space)
[Advanced settings configuration in WSL](https://learn.microsoft.com/en-us/windows/wsl/wsl-config)
My hardware specs:
* AMD Ryzen 5 2600X (6 cores)
* 56 GB DDR4 memory
* 2 TB Samsung 870 EVO SSD dedicated to LLM work
My .wslconfig file:

```
[wsl2]
memory=50GB
swap=420GB
swapfile=X:\\wsl\\wsl-swap.vhdx
[experimental]
autoMemoryReclaim=dropcache
```
NOTE: If you use WSL with your distro's vhdx file in your AppData folder (the default location), and you want to move it, then check out this [superuser forum post 'Move WSL2 file system to another drive'](https://superuser.com/questions/1550622/move-wsl2-file-system-to-another-drive). The registry technique shared by the last commenter worked great for me.
The autoMemoryReclaim setting isn't important. You can read what it does in the "Advanced settings configuration in WSL" document, but you don't need it for model merging.
You need to run `wsl --shutdown` within your Windows command line or Powershell and then relaunch your WSL Linux distro to get changes to the WSL config to apply.
>During my 70b parameter model merge experiment, total memory usage (RAM + swap) peaked at close to 400 GB. My RAM was maxed out and swap usage reached ~350 GB.
**Step 4 - Decide on your gradient settings**
I wish I had sage advice to offer here, but sometimes science is more art than science. Take your time to carefully read over Gryphe's explanation of the gradient settings in the script, then I recommend finding some other model merges that people have done where they shared their gradient settings. For example, look at lizpreciatior's merge at [https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
For my experiment, I merged the above lzlv_70b model with the latest [airoboros 3.1.2](https://huggingface.co/jondurbin/airoboros-l2-70b-3.1.2) model. I wanted to prefer the lzlv_70b model, but not too heavily, so I decided on a gradient of [0.0, 0.5, 0.75] with lzlv_70b being the first model and airoboros being the second model. The end result: the first half of the tensors start out as 100% lzlv and gradually introduce airoboros until we reach a 50-50 blend, then the second half of the tensors start out as a 50-50 blend and gradually transition to being 75% airoboros and 25% lzlv. Was that a good call? I don't know. I'm still evaluating the end result. It works. It's good. Could it be better? Hmm... 🤔
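To make the gradient semantics concrete, here's a rough sketch (illustrative only; check Gryphe's script for the exact interpolation it performs) of how a gradient list maps to per-layer blend ratios:

```python
import numpy as np

def layer_ratios(gradient_values, n_layers=80):
    # Fraction of model 2 blended into each layer; Llama2 70b has 80 layers.
    # [0.0, 0.5, 0.75] ramps 0 -> 0.5 over the first half of the layers,
    # then 0.5 -> 0.75 over the second half.
    xs = np.linspace(0, 1, len(gradient_values))
    return np.interp(np.linspace(0, 1, n_layers), xs, gradient_values)

ratios = layer_ratios([0.0, 0.5, 0.75])
# per layer: merged = (1 - r) * model1_tensor + r * model2_tensor
```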
The good news is you can keep trying different gradient value merges until you hit on a blend you really like. Thank goodness storage is cheap these days!
**Step 5 - Run the script**
```
python3 BlockMerge_Gradient_Tensors.py --model_path1 lzlv_70b/ --model_path2 jondurbin_airoboros-l2-70b-3.1.2/ --output_model_path lzlv_airoboros --gradient_values '[0.0, 0.5, 0.75]' --max_shard_size '8000MiB'
```
The merge process relies solely on your CPU and available memory, so don't worry about what kind of GPU you have. **The merge process took around 4 - 5 hours on my computer.** Max shard size refers to how large the individual .safetensor files are allowed to be in your output model. 8 GB seems to be fairly common.
You can use the `top` command in your Linux distro to monitor your memory/swap usage during the merge process, or if you intend to go AFK, consider `free -s 300 -mh | tee memlog.log` which will print out a report of memory / swap usage every 5 minutes while also recording that output in a file named memlog.log. I recommend doing that if you think you'll be cutting it close to having enough total memory to do the merge.
**(Optional) Step 6 - Quantize your merged model (using exllama2)**
Technically you're done in the previous step if the merge script completed without errors, but good luck running a 70b parameter model in full precision. We need to quantize it. If you want to do GGUF or GPTQ, you're on your own, but I hear it's easy. I only have experience doing exllama2 quants, which I'll describe below.
First, you need the convert.py script from [turboderp's Exllama2 repo](https://github.com/turboderp/exllamav2). You can [read all about the convert.py arguments here](https://github.com/turboderp/exllamav2/blob/master/doc/convert.md).
The exllama2 quantization process requires a calibration dataset in parquet format. Word on the street is the dataset you use has a subtle influence on the model's behavior. If you want a RP / ERP flavor, consider this [PIPPA dataset](https://huggingface.co/datasets/jasonkstevens/pippa-llama2-chat/tree/refs%2Fconvert%2Fparquet/default/train) (the dataset from the Pygmalion people). I have also used this [WizardLM Evol Instruct dataset](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k/tree/refs%2Fconvert%2Fparquet/default/train) and haven't noticed a negative impact on the model for RP / ERP scenarios. For your convenience, both those links take you to parquet-formatted versions of those datasets that other people produced. I take no credit for them.
Now that you have your model and your calibration dataset ready to go, it's time to quantize. I recommend you do an exllama2 quant in two passes: measurement and then quantization. The reason is that the measurement process takes a long time, and you want to save the measurement.json output from that process so you can skip it later if you decide you want to target a different average bits per weight. You can also achieve that result by running the convert.py script once with the -om flag set with no path, but I recommend splitting it into two executions to give you more control over the file name and location of that saved measurement.json file.
The final step before executing the script, at least for the second pass, is to decide on your target average number of bits per weight. That is by far the most important setting because it determines the resultant model size, both at rest and in use (i.e. memory requirements), and the model quality. I have heard that a value of 4.65 bpw is roughly equivalent to a 4-bit GPTQ quant with 32g act order, but someone correct me if I'm wrong.
I have been able to run a 5.0 bpw Llama2 70b model in 48 GB of VRAM (2 x NVIDIA 3090), but it's a tight fit at the full 4096 context size. I think 4.85 bpw is a good compromise between the two. I can comfortably run a 4.85 bpw Llama2 70b model at 8192 context in 48 GB of VRAM.
Here's the command I use to run the convert.py script:
```
python3 convert.py -i <path to model> -o <path to working directory> -nr -om <model name>_measurement.json -mr 10 -gr 10 -c <parquet dataset file> && python3 convert.py -i <path to model> -o <path to working directory> -nr -m <model name>_measurement.json -b 4.85 -gr 40 -c <same parquet dataset file> -cf <model name>-exl2-4.85bpw
```
The `-mr 10 -gr 10` flags in the first command control how many rows from the dataset are used in a measurement batch, so they all fit into VRAM. The convert.py script will not utilize multiple GPUs, so you're limited to the VRAM available on a single card (24 GB in my case). I have quantized 70b models before at the full 16 measurement row default value, but sometimes that produces out-of-memory errors, so now I lower it to be careful. A higher value of -gr for the second pass seems to work fine. Play around with the settings until you find something that works for you. You can also run the script entirely in system RAM using your CPU; it will just take longer. Consult the convert.py documentation.
The format of the command above is designed to facilitate an AFK approach. Joining two commands with && in bash (maybe other shells too?) will cause the second command to execute immediately after the first command if the first command returns no errors. That way you get the advantages of the two-pass approach without sacrificing the set-it-and-forget-it advantages of a single-pass approach.
>The exllama2 quantization process takes several hours for a 70b parameter model using a single NVIDIA 3090.
**Conclusion**
I hope this guide proves to be helpful for beginners like me. I would be happy to answer questions or make edits to the information above to incorporate feedback and advice from other Redditors with more experience.
I will share a link to the lzlv\_airoboros\_70b merged model after I have time to upload it to Hugging Face. | 2023-10-22T17:29:45 | https://www.reddit.com/r/LocalLLaMA/comments/17dyhyb/guide_for_llama2_70b_model_merging_and_exllama2/ | sophosympatheia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dyhyb | false | null | t3_17dyhyb | /r/LocalLLaMA/comments/17dyhyb/guide_for_llama2_70b_model_merging_and_exllama2/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'eE8p6eh7zRIRMQSrQsw-5uOK3ofGXeXshnazmRNG4DM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hzWcnJDfnq2QvxdvsZG6KkVI-jxO6XZ9AolzdBNzbYo.jpg?width=108&crop=smart&auto=webp&s=a397793e98ef3d31f443e476a147dd5ce5e0e270', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hzWcnJDfnq2QvxdvsZG6KkVI-jxO6XZ9AolzdBNzbYo.jpg?width=216&crop=smart&auto=webp&s=8685874226b92b4bdb8f00207d6b91a9ad9a1c54', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hzWcnJDfnq2QvxdvsZG6KkVI-jxO6XZ9AolzdBNzbYo.jpg?width=320&crop=smart&auto=webp&s=aebdc363cdfda39491566c8959c19d25e9931b2d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hzWcnJDfnq2QvxdvsZG6KkVI-jxO6XZ9AolzdBNzbYo.jpg?width=640&crop=smart&auto=webp&s=52b9e2db45ca8f5d01f47cefa5f9164266620e55', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hzWcnJDfnq2QvxdvsZG6KkVI-jxO6XZ9AolzdBNzbYo.jpg?width=960&crop=smart&auto=webp&s=6e7492fe727e0a81ee6b7cc252423e9d3b154395', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hzWcnJDfnq2QvxdvsZG6KkVI-jxO6XZ9AolzdBNzbYo.jpg?width=1080&crop=smart&auto=webp&s=7c0bbb35557aa01800961eb9f61f583445e7ade0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hzWcnJDfnq2QvxdvsZG6KkVI-jxO6XZ9AolzdBNzbYo.jpg?auto=webp&s=656c6d258618e85430aa4cdfa9a90671aa4ad09c', 'width': 1200}, 'variants': {}}]} |
An explanation of why you should not include a trailing whitespace at the end of your prompts (ChatGPT, Llama, etc.) | 107 | Hi, a few months ago I began to fine-tune Llama-1 on some specific tasks. Despite the fact that the training seemed to go smoothly, when I tested the model, I consistently observed a strange behavior where the model always responded with a numbered list, even though this was not the expected format for the answers. I found [this issue](https://github.com/facebookresearch/llama/issues/217) on the Llama-1 repository, but no answer was provided at that time.
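As it turned out (explained below), the culprit was a trailing whitespace. A quick way to see its effect yourself, assuming the standard `transformers` Llama 2 tokenizer (a sketch, not from the original post):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Without a trailing space, the model can emit the answer's first token in
# its usual form, e.g. "▁Paris" (the leading space is baked into the token).
print(tok.tokenize("Q: Capital of France?\nA:"))

# With a trailing space, that space typically becomes its own token, so the
# model must produce "Paris" without its leading space -- a sequence it
# rarely saw during training, which can derail the output format.
print(tok.tokenize("Q: Capital of France?\nA: "))
```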
After a few hours of investigation at that time, I realized that it was caused by a trailing whitespace in the prompt. I finally wrote a \[short explanation\]([https://github.com/facebookresearch/llama/issues/217#issuecomment-1774147331](https://github.com/facebookresearch/llama/issues/217#issuecomment-1774147331)) in the aforementioned Github issue, and decided to share what I learned here. Most of you are probably familiar with this by now, but it was very confusing back then. | 2023-10-22T17:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/17dyc8a/an_explanation_of_why_you_should_not_include_a/ | bOmrani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dyc8a | false | null | t3_17dyc8a | /r/LocalLLaMA/comments/17dyc8a/an_explanation_of_why_you_should_not_include_a/ | false | false | self | 107 | {'enabled': False, 'images': [{'id': '1NHGMfPRCqpf47sGlp-4TXZeNpdlwpcG8Peu-yfKygU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/me-dKmxSdT4p_iqCNPxC74wPFQOOhS_kcKqbvWNSOq0.jpg?width=108&crop=smart&auto=webp&s=04b67e3d283d118339cd5afd4f9c38f7c632a758', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/me-dKmxSdT4p_iqCNPxC74wPFQOOhS_kcKqbvWNSOq0.jpg?width=216&crop=smart&auto=webp&s=62477a974aca2f9d58ec603c7adac77f24cce0ad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/me-dKmxSdT4p_iqCNPxC74wPFQOOhS_kcKqbvWNSOq0.jpg?width=320&crop=smart&auto=webp&s=720dc346161f8d65b62f6880a350842e427d5c09', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/me-dKmxSdT4p_iqCNPxC74wPFQOOhS_kcKqbvWNSOq0.jpg?width=640&crop=smart&auto=webp&s=9496b7d39b834ac068bafbe44b050ba6582f47a8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/me-dKmxSdT4p_iqCNPxC74wPFQOOhS_kcKqbvWNSOq0.jpg?width=960&crop=smart&auto=webp&s=497446d38af5ba8c480ab8d7acd62da31ae5e983', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/me-dKmxSdT4p_iqCNPxC74wPFQOOhS_kcKqbvWNSOq0.jpg?width=1080&crop=smart&auto=webp&s=80cd88125296b77c92324a73325b05af2fe3b1c8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/me-dKmxSdT4p_iqCNPxC74wPFQOOhS_kcKqbvWNSOq0.jpg?auto=webp&s=0a118d3aa0b5afa9b2d13568d6d1193c3347b4ea', 'width': 1200}, 'variants': {}}]} |
Speeding up open source LLMs - speculative decoding | 15 | Hey everyone - wrote a simplified explanation of speculative decoding based on this [tweet](https://twitter.com/karpathy/status/1697318534555336961) and this [paper](https://arxiv.org/pdf/1811.03115.pdf).
Not a PhD or anything, I just enjoy writing.
[https://open.substack.com/pub/orangutanai/p/speeding-up-open-source-llms?r=2lc1dn&utm_campaign=post&utm_medium=web](https://open.substack.com/pub/orangutanai/p/speeding-up-open-source-llms?r=2lc1dn&utm_campaign=post&utm_medium=web) | 2023-10-22T17:11:18 | https://www.reddit.com/r/LocalLLaMA/comments/17dy3fq/speeding_up_open_source_llms_speculative_decoding/ | firef1y1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dy3fq | false | null | t3_17dy3fq | /r/LocalLLaMA/comments/17dy3fq/speeding_up_open_source_llms_speculative_decoding/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'YPmpkoot6fRPCZqeAVp2fr8E7FQn6Ne1Yu49UCHLADU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/v9UF-m7QXyIjZkEOv9T_7zFvostaMCY6UqZWlz968EY.jpg?width=108&crop=smart&auto=webp&s=2f977fc5ab8cb923cb25c2409e7d4acbb2b7114b', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/v9UF-m7QXyIjZkEOv9T_7zFvostaMCY6UqZWlz968EY.jpg?auto=webp&s=2c2a0fad0040244a1f03f41cc371dcb785e07432', 'width': 140}, 'variants': {}}]}
Infinity, a project for supporting RAG and Vector Embeddings. | 40 | [https://github.com/michaelfeil/infinity](https://github.com/michaelfeil/infinity)
Infinity, an open-source REST API for serving vector embeddings, using a torch or ctranslate2 backend. It's under the MIT license, fully tested, and available on GitHub.
I am the main author, curious to get your feedback.
FYI: Huggingface launched a couple of days after me a similar project ("text-embeddings-inference"), under a non open-source / non-commercial licence. | 2023-10-22T16:51:09 | https://www.reddit.com/r/LocalLLaMA/comments/17dxnm6/infinity_a_project_for_supporting_rag_and_vector/ | OrganicMesh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dxnm6 | false | null | t3_17dxnm6 | /r/LocalLLaMA/comments/17dxnm6/infinity_a_project_for_supporting_rag_and_vector/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': 'UkhawRaaw0phwlWU7cbrayBnCwZx2EHBKYo5TfI44Ao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9D71_shmZMoDdJXjHWZhy1pYmJZcVtHqMMDmvIOl3tM.jpg?width=108&crop=smart&auto=webp&s=86bd6ebe4066827903bfbf1a5d213a46df53b64d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9D71_shmZMoDdJXjHWZhy1pYmJZcVtHqMMDmvIOl3tM.jpg?width=216&crop=smart&auto=webp&s=a7261aa8fc295882051831acf6310d0e75d03fb8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9D71_shmZMoDdJXjHWZhy1pYmJZcVtHqMMDmvIOl3tM.jpg?width=320&crop=smart&auto=webp&s=609ce1bf7f489dd8b931ff2ca6a1468dfd232b2f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9D71_shmZMoDdJXjHWZhy1pYmJZcVtHqMMDmvIOl3tM.jpg?width=640&crop=smart&auto=webp&s=39843d2b7ee14eb133f79e8ffce4d1c8e760fe6d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9D71_shmZMoDdJXjHWZhy1pYmJZcVtHqMMDmvIOl3tM.jpg?width=960&crop=smart&auto=webp&s=7dceef806a8e29b0426bc538a24e9a7110cac1ec', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9D71_shmZMoDdJXjHWZhy1pYmJZcVtHqMMDmvIOl3tM.jpg?width=1080&crop=smart&auto=webp&s=b36286fe0a6444f762ec856898c1af38e740bc15', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9D71_shmZMoDdJXjHWZhy1pYmJZcVtHqMMDmvIOl3tM.jpg?auto=webp&s=b2848ebd392282b41c94c954b6b1801043ecfb6c', 'width': 1200}, 'variants': {}}]} |
Exploring the current status quo of LLM UI | 7 | Hey all, I'm currently wondering what UIs people are using for in-depth work with LLMs right now.
Here are some things that my ideal LLM UI would have:
- Open-source
- Local-First - Runs locally, uses a folder of markdown files as a text data backend
- Model compatibility - Local models via [GPT4All](https://gpt4all.io/index.html) or similar, cloud models via [OpenRouter](https://openrouter.ai/) or similar
- Interaction modes:
- Simple Exploration - Chat and Completion Playground similar to OpenAI's website offerings
- Data/metric-driven, like [Galileo LLM Studio](https://docs.rungalileo.io/galileo/#llm-studio) with options for parameter/model sweep runs, metric reports, etc.
- Tree-based like [Loom](https://github.com/socketteer/loom)
- Node-based, like [Flux](https://flux.paradigm.xyz/)
I'm less concerned with Agentic offerings here. But yeah those are some things I'd like to have in a perfect world. Wondering what people are using that I've missed and what else I should be exploring. | 2023-10-22T16:41:12 | https://www.reddit.com/r/LocalLLaMA/comments/17dxfub/exploring_the_current_status_quo_of_llm_ui/ | URLSweatshirt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dxfub | false | null | t3_17dxfub | /r/LocalLLaMA/comments/17dxfub/exploring_the_current_status_quo_of_llm_ui/ | false | false | self | 7 | null |
LLM response consists of query wrapper prompt. | 1 | Hi everyone, I'm using Streamlit and Llama 2 to build a chatbot. I have one problem with the LLM response: I use a query_wrapper_prompt and sometimes it appears in the LLM response. Is there a way to get rid of it? | 2023-10-22T16:39:04 | https://www.reddit.com/r/LocalLLaMA/comments/17dxe36/llm_response_consists_of_query_wrapper_prompt/ | Mrgtaraja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dxe36 | false | null | t3_17dxe36 | /r/LocalLLaMA/comments/17dxe36/llm_response_consists_of_query_wrapper_prompt/ | false | false | self | 1 | null |
Using Guidance with up-to-date Llama 2 model | 2 | Hi,
I'm trying to migrate a project which uses Guidance from GPT-3.5 to a local Llama version. I need a context length between 8K and 16K. Up to now, I haven't been too successful, even with the simplest examples from the Guidance website. The models I tried failed to terminate in some cases; in other cases I received several warnings (regarding do_sample and temperature settings), and in other instances some log division-by-zero errors. Pasting all the different attempts I tried would be too much, as there are simply too many of them.
Has anybody succeeded with this combination (local Llama, long context length, Guidance)? I tried, among others, "lmsys/vicuna-13b-v1.5-16k" and "togethercomputer/Llama-2-7B-32K-Instruct" because they seem to offer what I need. If only I could get them to run...
Greetings! | 2023-10-22T16:29:05 | https://www.reddit.com/r/LocalLLaMA/comments/17dx6el/using_guidance_with_uptodate_llama_2_model/ | Bradymodion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dx6el | false | null | t3_17dx6el | /r/LocalLLaMA/comments/17dx6el/using_guidance_with_uptodate_llama_2_model/ | false | false | self | 2 | null |
Unquantized LLaMa 2 | 1 | I need to run unquantized LLaMa 2 models. I know I can download them from Hugging Face (it seems it's still pending), but how would I even run them? (Yes, I know that quantized models don't degrade quality that much, but I still need unquantized.)
Thanks! | 2023-10-22T16:26:57 | https://www.reddit.com/r/LocalLLaMA/comments/17dx4uu/unquantized_llama_2/ | Acceptable_Can5509 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dx4uu | false | null | t3_17dx4uu | /r/LocalLLaMA/comments/17dx4uu/unquantized_llama_2/ | false | false | self | 1 | null |
Best resources to catch up from a vision ML engineer to modern NLP (LLMs) | 4 | The title says it all. I trained CNNs from scratch for several years and I'm about to start a more NLP-oriented job soon.
I've played a bit with the Hugging Face library, but to be honest I'm a bit lost between all the models, the pretrained checkpoints, the libraries, instruct fine-tuning vs. plain fine-tuning, few-shot learning... you get it.
I'm eyeing the Stanford course on LLMs on YouTube, but I'd rather avoid watching 20 videos of 1h30 each if possible, especially since I already have a solid base in ML. | 2023-10-22T16:13:27 | https://www.reddit.com/r/LocalLLaMA/comments/17dwu7s/best_ressources_to_catch_up_from_a_vision_ml/ | front-equal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dwu7s | false | null | t3_17dwu7s | /r/LocalLLaMA/comments/17dwu7s/best_ressources_to_catch_up_from_a_vision_ml/ | false | false | self | 4 | null |
Cost of running llama2 7B locally | 3 | Hey guys, I wanna run llama2 7B on my laptop with rtx4060, but I'm wondering whether this will hugely increase the power consumption of the whole machine and the electricity bill😹. Any ideas about doing a rough estimation here? | 2023-10-22T16:10:05 | https://www.reddit.com/r/LocalLLaMA/comments/17dwrgs/cost_of_running_llama2_7b_locally/ | OneConfusion3313 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dwrgs | false | null | t3_17dwrgs | /r/LocalLLaMA/comments/17dwrgs/cost_of_running_llama2_7b_locally/ | false | false | self | 3 | null |
Fine-tune / enhance LLM | 3 | Hi
I want to enhance LLM like llama-7b or Mistral-7B on domain-specific knowledge.
All my data is in JSON format, stored in a JSON column of a MySQL table; I will run a SQL query to pull the JSON out of that column.
Could you point me to a Github repo, blog or Medium document, on how to go about it? I'm looking for feedback from someone who has done something like this before.
My MVP will be a chatbot that will interact with my trained data. Where should I start? Where should I host it? Please let me know the best practices. I'm doing a lot of research but haven't yet found a definitive resource. After training, I intend to host the model on a cloud machine without a GPU.
| 2023-10-22T16:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/17dwnwj/finetune_enhance_llm/ | sanagun2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dwnwj | false | null | t3_17dwnwj | /r/LocalLLaMA/comments/17dwnwj/finetune_enhance_llm/ | false | false | self | 3 | null |
New to Google Colab, how do I interact with it? | 5 | Hi all,
Super newbie question: I've been playing around with Colab, and I tried loading Mistral 7B on it with a shared file from Hugging Face.
All is going well; now I want to see how to interface with it, similar to ChatGPT.
Last weekend, I played around with a local model on my MacBook using oobabooga/text-generation-webui.
But I'd love to learn more from the folks here. | 2023-10-22T15:06:51 | https://www.reddit.com/r/LocalLLaMA/comments/17dvcth/new_to_google_colab_how_do_i_interact_with_it/ | Alrightly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dvcth | false | null | t3_17dvcth | /r/LocalLLaMA/comments/17dvcth/new_to_google_colab_how_do_i_interact_with_it/ | false | false | self | 5 | null |
Has anyone successfully fine-tuned a Yarn 64/128k model? | 2 | If so, would you mind sharing your code? I am having an issue when doing backpropagation. | 2023-10-22T14:28:32 | https://www.reddit.com/r/LocalLLaMA/comments/17duing/has_anyone_successfully_finetuned_model_yarn/ | hltt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17duing | false | null | t3_17duing | /r/LocalLLaMA/comments/17duing/has_anyone_successfully_finetuned_model_yarn/ | false | false | self | 2 | null |
Improving LLM speeds on 4090 | 7 | **I know I can probably get way higher tokens/sec with GGML, GPTQ, etc. I just want to make sure I have all the right drivers installed.**
I have the following driver/lib versions installed -
Driver Version: 537.34, CUDA Version: 12.2,
transformers 4.33.1
I get ~25tks/sec with 7b param LLMs and ~0.4tks/sec with 13b. I've seen people get way more with the same 4090 card and at half precision.
Any specific driver version I should use? I'm open to others suggestions as well.
TIA. | 2023-10-22T13:42:51 | https://www.reddit.com/r/LocalLLaMA/comments/17dtl64/improving_llm_speeds_on_4090/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dtl64 | false | null | t3_17dtl64 | /r/LocalLLaMA/comments/17dtl64/improving_llm_speeds_on_4090/ | false | false | self | 7 | null |
Is there a Local LLM security primer? | 4 | Basically the title. I’ve been lurking here for a few months and I think I’m ready to dive in and start running some myself.
I read a while ago about some security implications with running LLMs, and I just want to be sure I have a good understanding of the possible issues before I start running random code on my computer.
Any best practices or helpful tips?
Thanks in advance! | 2023-10-22T13:33:20 | https://www.reddit.com/r/LocalLLaMA/comments/17dteei/is_there_a_local_llm_security_primer/ | future-is-so-bright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dteei | false | null | t3_17dteei | /r/LocalLLaMA/comments/17dteei/is_there_a_local_llm_security_primer/ | false | false | self | 4 | null |
Best bilingual (eng-spa) model for content creation | 1 | [removed] | 2023-10-22T13:29:40 | https://www.reddit.com/r/LocalLLaMA/comments/17dtbnp/best_bilingual_engspa_model_for_content_creation/ | Regallydial | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dtbnp | false | null | t3_17dtbnp | /r/LocalLLaMA/comments/17dtbnp/best_bilingual_engspa_model_for_content_creation/ | false | false | self | 1 | null |
Best bilingual model for content creation | 1 | [removed] | 2023-10-22T13:00:34 | https://www.reddit.com/r/LocalLLaMA/comments/17dsqp5/best_bilingual_model_for_content_creation/ | danilo139hg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dsqp5 | false | null | t3_17dsqp5 | /r/LocalLLaMA/comments/17dsqp5/best_bilingual_model_for_content_creation/ | false | false | self | 1 | null |
Need guidance finetuning +RAG | 5 | I want to create a chat LLM that talks in a specific way; let's say a legal application that can talk to people about their cases. A person comes and describes their case, and the LLM should be able to hold a conversation the way a lawyer would. I know we can create such an LLM by fine-tuning it on that type of chat. But what if the user asks about fees or specific laws? That's factual data, so AFAIK we need to use RAG here. But as far as I can tell, RAG is geared toward Q&A, while my use case is conversational and only sometimes Q&A. What should I do? Am I missing something in my analysis? Any help would be greatly appreciated! | 2023-10-22T12:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/17dsaeq/need_guidance_finetuning_rag/ | s1lv3rj1nx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dsaeq | false | null | t3_17dsaeq | /r/LocalLLaMA/comments/17dsaeq/need_guidance_finetuning_rag/ | false | false | self | 5 | null |
Best bilingual model for content creation?? | 1 | [removed] | 2023-10-22T12:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/17ds7t6/best_bilingual_model_for_content_creation/ | danilo139hg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ds7t6 | false | null | t3_17ds7t6 | /r/LocalLLaMA/comments/17ds7t6/best_bilingual_model_for_content_creation/ | false | false | self | 1 | null |
Graphing emotion events for in-depth sentiment analysis with bart-large fine-tuned on GPT-4 synthetic data | 24 | 2023-10-22T12:18:00 | https://v.redd.it/kslqlv6kxqvb1 | helliun | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17drz4i | false | {'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/kslqlv6kxqvb1/DASHPlaylist.mpd?a=1700569092%2CNmI2YmQxNDNmY2E4NmFhYzg0OWYxMzg0ZTIxNjdlNTBkNDQ2ZTBjYzM2OGU5YzkyOTBkZTc2OGVkYWU5NTQ5Nw%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/kslqlv6kxqvb1/DASH_270.mp4?source=fallback', 'height': 270, 'hls_url': 'https://v.redd.it/kslqlv6kxqvb1/HLSPlaylist.m3u8?a=1700569092%2CMDNjNDE2MGY3MTQ2NTVkOWVhNmU0YjkxMGRmMWVjN2FlNjI4YWE0ZWRkZjJkOGM2Y2I1ODhhYzk5MjY1OTU2Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kslqlv6kxqvb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}} | t3_17drz4i | /r/LocalLLaMA/comments/17drz4i/graphing_emotion_events_for_indepth_sentiment/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'I-XvPg76Fbdh4hF8eKIiWsmZU2E3CdsLFhLFl_-mlx8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3TbRdgG7C8NyclWkNxGGPMTgGfv7vcSrQ5Otn0ztNaU.png?width=108&crop=smart&format=pjpg&auto=webp&s=073461a18af3399eeb8904bf2a3b23a5165716f3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/3TbRdgG7C8NyclWkNxGGPMTgGfv7vcSrQ5Otn0ztNaU.png?width=216&crop=smart&format=pjpg&auto=webp&s=0bdbe68353aecaa5be03c494305106ca39033463', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/3TbRdgG7C8NyclWkNxGGPMTgGfv7vcSrQ5Otn0ztNaU.png?width=320&crop=smart&format=pjpg&auto=webp&s=cb2c3526db82554132bb1ccfe98bb6304a0c8bd9', 'width': 320}], 'source': {'height': 338, 'url': 'https://external-preview.redd.it/3TbRdgG7C8NyclWkNxGGPMTgGfv7vcSrQ5Otn0ztNaU.png?format=pjpg&auto=webp&s=a8c8141980f428610b4b1781f6c3e8d465f45421', 'width': 600}, 'variants': {}}]} | ||
Can someone please explain the sorcery behind Mythical-Destroyer-V2-L2-13B | 1 | [removed] | 2023-10-22T12:01:35 | https://www.reddit.com/r/LocalLLaMA/comments/17drp3n/can_someone_please_explain_the_sorcery_behind/ | the_chatterbox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17drp3n | false | null | t3_17drp3n | /r/LocalLLaMA/comments/17drp3n/can_someone_please_explain_the_sorcery_behind/ | false | false | self | 1 | null |
Advances in Long Context | 93 | *Why is long context challenging?*
- You need high-quality long-context datasets
Models like Llama 2 are trained on 4K tokens. CodeLlama is 16k tokens.
There aren’t many 32k or 100k context datasets - especially in a chat/instruction format that can be used for supervised fine tuning or reinforcement learning.
- You need big GPUs to train and run inference on long context.
This is because of what's called the kv cache. Every input token has a key vector and a value vector that are calculated for each layer on the first forward pass. These values are then stored so they are readily available for generating each next token.
Consider some numbers very roughly like Llama 7B:
- A model with 32 layers
- Key and value vectors of 4096 dimensions each
- 2 vectors per input token (key and value)
- Numbers stored as 16 bits (bf16), which is 2 bytes per number.
- An input sequence of 32,000 tokens
That means calculating and storing:
32 x 4096 x 2 x 2 x 32,000 ≈ 16 GB of data.
(The above calculation is for one attention head. Typically, multiple smaller attention heads are used - although this doesn’t change the calculation.)
Now, Llama 7B is itself about 13 GB in size in 16-bit precision. So, for 32k context, the GPU memory need is half for the model and half for the kv cache.
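As a sanity check on that arithmetic, a sketch mirroring the numbers above:

```python
layers, d_kv, kv_vectors, fp16_bytes, seq_len = 32, 4096, 2, 2, 32_000
kv_cache_bytes = layers * d_kv * kv_vectors * fp16_bytes * seq_len
print(kv_cache_bytes / 1e9)  # ~16.8 GB, the rough 16 GB figure above
```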
*What other problems emerge dealing with large kv cache?*
1. Unused memory
Even though theoretical memory requirements are 13 GB plus 16 GB in the above example, in practice it's worse.
For example, during inference, a block of memory might be reserved in the GPU that is the size of the maximum context. This results in a lot of overallocation of memory that ends up being unused as sequences are typically shorter than the max length.
One solution here is to allocate memory on the fly by using virtual memory that maps to small blocks of physical memory that don’t have to be contiguous. This is paged attention - as implemented by vLLM.
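A toy sketch of the idea (illustrative only, not vLLM's actual implementation): a block table maps a sequence's logical token positions to small physical blocks that are allocated only when the sequence actually grows into them.

```python
BLOCK_TOKENS = 16  # tokens per physical KV block

class PagedKV:
    def __init__(self, n_physical_blocks=1024):
        self.free_blocks = list(range(n_physical_blocks))
        self.block_table = []  # logical block index -> physical block id

    def slot_for(self, token_index):
        # Allocate a new physical block only at a block boundary --
        # no up-front reservation for the maximum context length.
        if token_index % BLOCK_TOKENS == 0 and token_index // BLOCK_TOKENS == len(self.block_table):
            self.block_table.append(self.free_blocks.pop())
        block = self.block_table[token_index // BLOCK_TOKENS]
        return block, token_index % BLOCK_TOKENS  # where this token's K/V live
```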
2. Slowness reading and writing matrices in the GPU (memory boundedness)
The attention calculation for each token goes as follows:
- you multiply the query vector for a given input token by the key vectors for all previous tokens. So there’s a k * q output matrix for a given input token and each previous token
- you multiply each matrix by the value vector for each corresponding previous token.
- this results in a contribution (a vector) of information from each previous token to a given input token.
If you naively do this calculation, you need to store the matrix k * q temporarily on the GPU before multiplying it by the value vector.
The bottleneck becomes the writing and reading of the k * q result within the GPU (ie. writing and reading to the gpu main memory from/to its inner cache, which is much smaller in size, maybe 30-50 MB).
Flash attention 2 avoids this reading and writing by chunking the k and v vectors into small enough pieces that the inner GPU can compute straight through the q * k and further the multiplication by v … all without writing q * k to the main GPU memory.
This results in a big speed up for an initial forward pass where - for 32k context - each of the 32k input tokens has a q * k matrix being calculated FOR EACH OF ITS PREVIOUS TOKENS. With flash attention, the calcs are no longer slowed by writing and reading these matrices to memory.
[It's actually 32,000 x 32,001 / 2 ≈ 512,000,000 k*q calculations!!!]
Note that, because all of the calculations and vectors are chunked, they must be reassembled. But this reassembly adds little compute.
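Here's a minimal NumPy sketch of that online-softmax trick for a single query (illustrative, nothing like the real fused kernel): we stream over chunks of keys/values while keeping a running max, denominator, and weighted sum, so the full score row never needs to be written out.

```python
import numpy as np

def chunked_attention(q, K, V, chunk=1024):
    d = q.shape[-1]
    m, s = -np.inf, 0.0             # running max and softmax denominator
    acc = np.zeros(V.shape[-1])     # running weighted sum of value vectors
    for i in range(0, len(K), chunk):
        scores = K[i:i+chunk] @ q / np.sqrt(d)
        m_new = max(m, scores.max())
        rescale = np.exp(m - m_new)  # correct stats computed under the old max
        p = np.exp(scores - m_new)
        s = s * rescale + p.sum()
        acc = acc * rescale + p @ V[i:i+chunk]
        m = m_new
    return acc / s                   # equals softmax(K@q/sqrt(d)) weighted V
```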
3. Slowness along the sequence dimension
After the first forward pass, where attention is calculated for all initial input tokens, we move to generating one token at a time. Now, k * q matrixes are only computed for the latest token with each previous token. So, we have about 32k matrices rather than 32k * 32001 / 2 matrices (because I’m the first forward pass, we need to do attention for each input token and their previous tokens).
The write and read requirements are orders of magnitude smaller. So, flash attention - which chunks the k and v vectors - doesn’t help much.
The bottleneck now is doing attention between the latest input token and each previous token. Done naively, this isn’t parallelized.
Flash decoding is the solution to this. Say you have 32000 key vectors, one for each input token. Flash decoding splits this into groups, say 32 groups of 1000 vectors. It then parallelizes calculations for each of these groups.
This allows for much better use of the GPU. Instead of using only one streaming multiprocessor, the GPU can now use 32 in parallel (an A100 has 108 streaming multiprocessors).
If you are only running one input sequence (e.g. batch size of one), the GPU will be largely unused. With flash decoding you can use all of the gpu’s compute capability and process much longer prompts with the same speed - up to about 32k it seems for Llama 13b on an A100 without much speed degradation.
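Conceptually, each group runs the same streaming computation as the sketch above and emits a partial (max, denominator, weighted sum); a final, cheap reduction merges the partials (again just a sketch, reusing the np import above):

```python
def combine(partials):
    # partials: list of (m_i, s_i, acc_i), one per key group, computed in parallel
    m = max(p[0] for p in partials)
    s = sum(p[1] * np.exp(p[0] - m) for p in partials)
    acc = sum(p[2] * np.exp(p[0] - m) for p in partials)
    return acc / s
```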
*What about quality?*
All of my comments above about flash and paging are about speed and GPU usage.
You can have good speed but bad quality.
Language models have a specific problem with ignoring content in the middle of a long prompt.
A good test is passkey retrieval where you put a keyword in the middle of the text and ask for it to be retrieved.
In my experience CodeLlama can retrieve well from 8k context. And Llama 2 13b or larger can retrieve from anywhere in 2k context. 7B is bad at retrieval.
Meta have a 32k model but it doesn't seem to be released yet. LongLoRA has a 32k model but passkey retrieval isn't good on the model fine-tuned for chat (which is what you need).
*Can RoPE scaling help?*
In Llama, token positions are represented by rotating the token embeddings by a certain amount corresponding to position.
In theory, you can scale this rotation for larger context sizes. In practice I haven’t seen this work for passkey retrieval.
It’s possible to get sensible output that looks good, but isn’t high quality (ie it’s a summary that doesn’t actually summarise well).
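For context, a tiny sketch of what "scaling the rotation" (linear RoPE scaling, a.k.a. position interpolation) means; the dim and base here match Llama's defaults, but the function is illustrative:

```python
import numpy as np

def rope_angles(pos, dim=128, base=10000.0, scale=1.0):
    # scale > 1 squeezes positions so a longer sequence reuses the
    # angle range the model saw during training
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return (pos / scale) * inv_freq

# position 8192 with scale=4 gets the same angles as position 2048 unscaled
assert np.allclose(rope_angles(8192, scale=4.0), rope_angles(2048))
```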
Together AI put a lot of effort into training a 32k model, but it's 7B, and in my experience those models aren't strong for passkey retrieval or summaries. It seems larger models will be released.
*What about streaming llm?*
This approach seems to allow for sensible output from long contexts, but doesn’t allow for passkey retrieval.
It doesn't seem there is a shortcut. You have to attend to all tokens.
tbc… | 2023-10-22T12:01:14 | https://www.reddit.com/r/LocalLLaMA/comments/17drour/advances_in_long_context/ | TrelisResearch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17drour | false | null | t3_17drour | /r/LocalLLaMA/comments/17drour/advances_in_long_context/ | false | false | self | 93 | null |
Use case for 4090 + 3080? | 1 | I just bought a new RTX 4090, should I sell my old 3080 or is there a use case for using both cards at the same time?
I have an 850W PSU, will that be enough for both cards? | 2023-10-22T11:46:16 | https://www.reddit.com/r/LocalLLaMA/comments/17drg89/use_case_for_4090_3080/ | FreddyFiveFingers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17drg89 | false | null | t3_17drg89 | /r/LocalLLaMA/comments/17drg89/use_case_for_4090_3080/ | false | false | self | 1 | null |
Offtopic LLM job role | 1 | How should I prepare for questions on LLM-related projects in a hands-on way at low cost?
They are asking for a lot in the job description; I'm not sure where to start and what to cover. | 2023-10-22T11:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/17dreaj/offtopic_llm_job_role/ | Remarkable_Touch_506 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dreaj | false | null | t3_17dreaj | /r/LocalLLaMA/comments/17dreaj/offtopic_llm_job_role/ | false | false | self | 1 | null |
Multiple LLMs on one GPU? | 1 | Hello,
my question is regarding how to set up a cloud environment for multiple LLMs. Can I use multiple containerized LLM models at the same time on one GPU?
Say the EC2 instance I rented has one GPU with 26 GB of VRAM and 64 GB of RAM. Can I just start up multiple containers on that one instance and share the GPU power?
Is there a better way to set this up in the cloud? Managed container instances?
Thank you for your time | 2023-10-22T11:24:26 | https://www.reddit.com/r/LocalLLaMA/comments/17dr42p/multiple_llms_one_one_gpu/ | Nokita_is_Back | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dr42p | false | null | t3_17dr42p | /r/LocalLLaMA/comments/17dr42p/multiple_llms_one_one_gpu/ | false | false | self | 1 | null |
I fork around, a lot. | 1 | [removed] | 2023-10-22T11:08:51 | https://www.reddit.com/r/LocalLLaMA/comments/17dqva0/i_fork_around_a_lot/ | 80Unknown08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dqva0 | false | null | t3_17dqva0 | /r/LocalLLaMA/comments/17dqva0/i_fork_around_a_lot/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BGXfaUMntPZWzYo99FfkrbqxveeayLkICP2FRV6iEYA', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=108&crop=smart&auto=webp&s=027a4a136867b336cfe98d6c39e0251e1ba0b055', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=216&crop=smart&auto=webp&s=9a7929b9305bcdbe407f1f92f6d6ff052daa19b8', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=320&crop=smart&auto=webp&s=0734e32327e4cfd403ebf8ffdf65183ffeebc1fc', 'width': 320}, {'height': 247, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=640&crop=smart&auto=webp&s=e3feb83e359c84d1f8cdc966f94a8cc059151e49', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=960&crop=smart&auto=webp&s=870dd5ec30e26b8871e8a59d148f669ddd3c276f', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=1080&crop=smart&auto=webp&s=2ce833a6565a2978737477a6cb1b5893d1ccbcde', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?auto=webp&s=9e4b3880988ad486a361a37c3a53e19ff66ac844', 'width': 1140}, 'variants': {}}]} | |
Has anyone fine-tuned the BTLM 3B 8K model? | 1 | Need help | 2023-10-22T10:19:38 | https://www.reddit.com/r/LocalLLaMA/comments/17dq5ky/has_anyone_fine_tuned_btlm_3b_8k_model/ | AdFlashy3434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dq5ky | false | null | t3_17dq5ky | /r/LocalLLaMA/comments/17dq5ky/has_anyone_fine_tuned_btlm_3b_8k_model/ | false | false | self | 1 | null |
LLM to query a product list | 4 | From this list, downloaded from the Kaggle Amazon products dataset 2023 (just the first products dataset I found, no specific reason), I have tried asking h2o Llama 2 7B chat, 13B, 16k CodeLlama 7B, and HF Zephyr 7B alpha to list the products with "Android" in their name among the televisions.
They listed at most 3 of them; the complete list has about 1,100 televisions, and many more have "Android" in their name.
I'm using h2oGPT and suppose there is a limit imposed by the max sequence length, but I'm not able to change it in the UI, and changing max_max_new_tokens in the Docker startup script doesn't seem to make any difference either.
Wondering if there is something else I'm missing and there are completely different reasons. | 2023-10-22T10:16:44 | https://www.kaggle.com/datasets/lokeshparab/amazon-products-dataset | davide445 | kaggle.com | 1970-01-01T00:00:00 | 0 | {} | 17dq446 | false | null | t3_17dq446 | /r/LocalLLaMA/comments/17dq446/llm_to_query_a_product_list/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'UAuFCLuvk2nqc-tYPOoD4nrGXJtrX04SlfJyxAPnKo0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/PvRgrbLZ6ti2SwNcXK54uSUgLGrUxlLS-d2FKooJb_g.jpg?width=108&crop=smart&auto=webp&s=be180292a2b1ac799d26d47c80cb069cc3217bbc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/PvRgrbLZ6ti2SwNcXK54uSUgLGrUxlLS-d2FKooJb_g.jpg?width=216&crop=smart&auto=webp&s=edd95852ef33b0de8392aa3584185aee52f3718f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/PvRgrbLZ6ti2SwNcXK54uSUgLGrUxlLS-d2FKooJb_g.jpg?width=320&crop=smart&auto=webp&s=fc5d960a2fcc68fd37b42511fa81d592abd33b5d', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/PvRgrbLZ6ti2SwNcXK54uSUgLGrUxlLS-d2FKooJb_g.jpg?width=640&crop=smart&auto=webp&s=78871eac6888fc34026d6e45e3209212c71fc001', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/PvRgrbLZ6ti2SwNcXK54uSUgLGrUxlLS-d2FKooJb_g.jpg?width=960&crop=smart&auto=webp&s=c2f810aedbceb63eefc2f6123c4520e354f7ccbd', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/PvRgrbLZ6ti2SwNcXK54uSUgLGrUxlLS-d2FKooJb_g.jpg?width=1080&crop=smart&auto=webp&s=2e23f4f34bd77a0865005cb6905eed83022fb044', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/PvRgrbLZ6ti2SwNcXK54uSUgLGrUxlLS-d2FKooJb_g.jpg?auto=webp&s=7ba62052bbaf06da9b91ac72601e0b663124819a', 'width': 1200}, 'variants': {}}]} | |
Question | 1 | Has anyone fine-tuned the BTLM 3B 8K model? | 2023-10-22T10:15:50 | https://www.reddit.com/r/LocalLLaMA/comments/17dq3na/question/ | AdFlashy3434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dq3na | false | null | t3_17dq3na | /r/LocalLLaMA/comments/17dq3na/question/ | false | false | self | 1 | null |
Script for training Mistral 7B on Kaggle | 2 | I need a script that will help me fine-tune Mistral 7B on Kaggle. After the 30 GB and 2x Tesla GPU updates, I want to try it there. Any help? | 2023-10-22T10:02:05 | https://www.reddit.com/r/LocalLLaMA/comments/17dpwfg/script_for_training_mistral_7b_on_kaggle/ | Puzzleheaded_Acadia1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dpwfg | false | null | t3_17dpwfg | /r/LocalLLaMA/comments/17dpwfg/script_for_training_mistral_7b_on_kaggle/ | false | false | self | 2 | null |
Which models besides StableLM employed RLHF? | 2 | Have there been any besides StableLM?
Are there any available RLHF reward models?
Has anyone got the impression that RLHF is even very helpful at all? | 2023-10-22T09:28:27 | https://www.reddit.com/r/LocalLLaMA/comments/17dpfzp/which_models_besides_stablelm_employed_rlhf/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dpfzp | false | null | t3_17dpfzp | /r/LocalLLaMA/comments/17dpfzp/which_models_besides_stablelm_employed_rlhf/ | false | false | self | 2 | null |
Seeking Advice: Cost-Efficient Way to Generate Descriptions for 100 Million Images with LLaVa? | 1 | [removed] | 2023-10-22T08:14:40 | https://www.reddit.com/r/LocalLLaMA/comments/17dog42/seeking_advice_costefficient_way_to_generate/ | JackiJumper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dog42 | false | null | t3_17dog42 | /r/LocalLLaMA/comments/17dog42/seeking_advice_costefficient_way_to_generate/ | false | false | self | 1 | null |
Is it possible for an open source model (Llama, Mistral) to work on 2 prompts in parallel at the same time? | 19 | If I run a Llama 2 7B on a 4090 and get, let's say, 40 tokens/sec, can 2 users then call it at the same time and each get their output in parallel at, say, 20 tokens/sec?
Or is it built to finish one prompt before tackling the other?
Are there any websites or articles specialised in getting local LLMs to work for multiple users?
Thank you | 2023-10-22T07:40:24 | https://www.reddit.com/r/LocalLLaMA/comments/17dnyvt/is_it_possible_for_an_open_source_model_llama/ | Bene0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dnyvt | false | null | t3_17dnyvt | /r/LocalLLaMA/comments/17dnyvt/is_it_possible_for_an_open_source_model_llama/ | false | false | self | 19 | null |
Should I buy a used PC with an RTX 3090 for 1,500€? | 4 | Hello everyone,
I'm looking into cost-effective ways to start running small local LLMs, OpenAI's Whisper STT, and Stable Diffusion. Initially, I thought about building a PC to support two RTX 3090s, starting off with one and potentially adding another later. However, the costs of getting the right motherboard, cooling system, and power delivery turned out to be pretty steep. Given this, I've decided to go for a single RTX 3090 setup.
Living in Germany, I've seen quite a few used PCs on eBay equipped with an RTX 3090, and the prices are around €1,500, which seems reasonable to me. The CPUs in these machines range from a Ryzen 5 3600 to an i7-12700K or an i9-11900K.
As someone new to AI-focused PCs, I've got the impression that the CPU and RAM aren't as crucial as the GPU. Would it be sensible to just choose the cheapest PC option, even if it's running with a Ryzen 5 3600 CPU and only comes with 32GB of RAM?
Thanks in advance! | 2023-10-22T07:20:17 | https://www.reddit.com/r/LocalLLaMA/comments/17dnojs/should_i_buy_a_used_pc_with_an_rtx_3090_for_1500/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dnojs | false | null | t3_17dnojs | /r/LocalLLaMA/comments/17dnojs/should_i_buy_a_used_pc_with_an_rtx_3090_for_1500/ | false | false | self | 4 | null |
Question - fine-tuned zephyr-7b-alpha replies | 5 | Hi,
I've fine-tuned zephyr-7b-alpha on my own .jsonl {"input": "...", "output": "..."} Q&A data, as described here -> [https://github.com/brevdev/notebooks/blob/main/mistral-finetune-own-data.ipynb](https://github.com/brevdev/notebooks/blob/main/mistral-finetune-own-data.ipynb)
The jsonl used for training had only around 100 entries.
After training, I tried different epochs and asked a specific question about when a certain event happened (one that was specifically mentioned in one of the Q&A entries in the training data). But no epoch was able to give the correct reply.
Is this normal, or might I have done something wrong? The purpose of this fine-tuning for me is not to inject factual information but to improve ingestion and knowledge-graph triplet generation afterwards. But I was expecting that the fine-tuned model could give correct replies about very specific events mentioned only in the fine-tuning training data.
Here is also the output from the training run.
https://preview.redd.it/xi1ecqkm6pvb1.png?width=1231&format=png&auto=webp&s=072af065feadf2c7bdb62bda4c73a5b6a66b7144 | 2023-10-22T06:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/17dmxya/question_finetuned_zepyhr7balpha_replies/ | Glat0s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dmxya | false | null | t3_17dmxya | /r/LocalLLaMA/comments/17dmxya/question_finetuned_zepyhr7balpha_replies/ | false | false | 5 | {'enabled': False, 'images': [{'id': '3pt6_x99lUiCedI9pfUMAFGvRvDFRNpsYurzjaQFMg4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gRONwSvAtay-IWTX0zt9n2b3rKIoY8uGMyR5X_HZuO4.jpg?width=108&crop=smart&auto=webp&s=177b4e93cbfd965c449085a8c66f558315f2ee2d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gRONwSvAtay-IWTX0zt9n2b3rKIoY8uGMyR5X_HZuO4.jpg?width=216&crop=smart&auto=webp&s=8d040a3d28b4377e93f97dbdc7da225cf423cf30', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gRONwSvAtay-IWTX0zt9n2b3rKIoY8uGMyR5X_HZuO4.jpg?width=320&crop=smart&auto=webp&s=97e44620b0abbe713c273ceda258ae82aa8f865c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gRONwSvAtay-IWTX0zt9n2b3rKIoY8uGMyR5X_HZuO4.jpg?width=640&crop=smart&auto=webp&s=252c323b4de21c9cd4a0cbf2a6a58d78a3f8b93c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gRONwSvAtay-IWTX0zt9n2b3rKIoY8uGMyR5X_HZuO4.jpg?width=960&crop=smart&auto=webp&s=ba216aa30502069cea576a3cbc8594b3ff0e2d7d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gRONwSvAtay-IWTX0zt9n2b3rKIoY8uGMyR5X_HZuO4.jpg?width=1080&crop=smart&auto=webp&s=ad02f3c64e13dc313c559d1dc2fb3031deb5ef70', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gRONwSvAtay-IWTX0zt9n2b3rKIoY8uGMyR5X_HZuO4.jpg?auto=webp&s=bd59591c7c6e15d582faed195c5724b51f2c2bd1', 'width': 1200}, 'variants': {}}]} | |
Karparthy is here!? | 483 | 2023-10-22T03:58:34 | Dry_Long3157 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17dkmyv | false | null | t3_17dkmyv | /r/LocalLLaMA/comments/17dkmyv/karparthy_is_here/ | false | false | 483 | {'enabled': True, 'images': [{'id': 'kUNsai_P8JkPXhuS4GixeQFmMmQdKyW1LClyUkRhY2w', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/161g62pigovb1.png?width=108&crop=smart&auto=webp&s=a80c31804ff30e861353e5a977313a5333a38463', 'width': 108}, {'height': 261, 'url': 'https://preview.redd.it/161g62pigovb1.png?width=216&crop=smart&auto=webp&s=2be5f88b93598cf7484e6c0c71a71e4c1a8fa0e1', 'width': 216}, {'height': 387, 'url': 'https://preview.redd.it/161g62pigovb1.png?width=320&crop=smart&auto=webp&s=03ec548724bf03927ae7fa93842e0a79d61f4cc4', 'width': 320}, {'height': 775, 'url': 'https://preview.redd.it/161g62pigovb1.png?width=640&crop=smart&auto=webp&s=83fa4662af8d4a1778f021597a89094b8f23e96c', 'width': 640}, {'height': 1163, 'url': 'https://preview.redd.it/161g62pigovb1.png?width=960&crop=smart&auto=webp&s=7f2efb0580cbc0001d11939e4003b565ce53c700', 'width': 960}, {'height': 1309, 'url': 'https://preview.redd.it/161g62pigovb1.png?width=1080&crop=smart&auto=webp&s=eda00a8497f0a547916e74ce48d315b22d87038e', 'width': 1080}], 'source': {'height': 1309, 'url': 'https://preview.redd.it/161g62pigovb1.png?auto=webp&s=52b5b377c45a3685d99f65018101e3bdfffe9099', 'width': 1080}, 'variants': {}}]} | |||
Mistral 3b? | 10 | I see there is a mistral [500MB Version](https://huggingface.co/openaccess-ai-collective/tiny-mistral/tree/main) ... curious if anyone knows how to make a blank mistral 3b or 1.1B (like tinyllama). | 2023-10-22T02:48:12 | https://www.reddit.com/r/LocalLLaMA/comments/17djecw/mistral_3b/ | Thistleknot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17djecw | false | null | t3_17djecw | /r/LocalLLaMA/comments/17djecw/mistral_3b/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'Z09_OHWbfcHcBhxUAzdYa5ESto-GIWCEPClkrEJa5_k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iY1VAPEFkS1-avpFtrX53WQckTjzk7Eqk4rvQTFLdrs.jpg?width=108&crop=smart&auto=webp&s=c411d1afe7d1155daa534bcb8c525cedf84b4f25', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iY1VAPEFkS1-avpFtrX53WQckTjzk7Eqk4rvQTFLdrs.jpg?width=216&crop=smart&auto=webp&s=248abbdd1573f8668df34db815a5f8c6f68ee1a0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iY1VAPEFkS1-avpFtrX53WQckTjzk7Eqk4rvQTFLdrs.jpg?width=320&crop=smart&auto=webp&s=49b86a1e062d6ca9c7b622db55872c185c67ae88', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iY1VAPEFkS1-avpFtrX53WQckTjzk7Eqk4rvQTFLdrs.jpg?width=640&crop=smart&auto=webp&s=7def108858caa4c277adf149983be905fe97334f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iY1VAPEFkS1-avpFtrX53WQckTjzk7Eqk4rvQTFLdrs.jpg?width=960&crop=smart&auto=webp&s=69a26cfd6b170f79c1da81457a472cf2ffdc624b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iY1VAPEFkS1-avpFtrX53WQckTjzk7Eqk4rvQTFLdrs.jpg?width=1080&crop=smart&auto=webp&s=a3cf1a25fb03991b7177bc9f09af4470b5ff791d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iY1VAPEFkS1-avpFtrX53WQckTjzk7Eqk4rvQTFLdrs.jpg?auto=webp&s=cdaf6e21273539105e56d98ab317a0d9719b26de', 'width': 1200}, 'variants': {}}]} |
Is there a group for finetuning models for different industries (healthcare, real estate) | 1 | [removed] | 2023-10-22T02:37:48 | https://www.reddit.com/r/LocalLLaMA/comments/17dj7i5/is_there_a_group_for_finetuning_models_for/ | PuzzledSociety7006 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dj7i5 | false | null | t3_17dj7i5 | /r/LocalLLaMA/comments/17dj7i5/is_there_a_group_for_finetuning_models_for/ | false | false | self | 1 | null |
Is there a group for finetuning models for different industries (healthcare, real estate) | 1 | [removed] | 2023-10-22T02:36:01 | https://www.reddit.com/r/LocalLLaMA/comments/17dj6b8/is_there_a_group_for_finetuning_models_for/ | PuzzledSociety7006 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dj6b8 | false | null | t3_17dj6b8 | /r/LocalLLaMA/comments/17dj6b8/is_there_a_group_for_finetuning_models_for/ | false | false | self | 1 | null |
Is there a group for finetuning models for different industries (healthcare, real estate) | 1 | [removed] | 2023-10-22T02:30:57 | https://www.reddit.com/r/LocalLLaMA/comments/17dj2w8/is_there_a_group_for_finetuning_models_for/ | PuzzledSociety7006 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dj2w8 | false | null | t3_17dj2w8 | /r/LocalLLaMA/comments/17dj2w8/is_there_a_group_for_finetuning_models_for/ | false | false | default | 1 | null |
Is there a group for finetuning models for different industries (healthcare, real estate) | 1 | [removed] | 2023-10-22T02:30:24 | https://www.reddit.com/r/LocalLLaMA/comments/17dj2hv/is_there_a_group_for_finetuning_models_for/ | PuzzledSociety7006 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dj2hv | false | null | t3_17dj2hv | /r/LocalLLaMA/comments/17dj2hv/is_there_a_group_for_finetuning_models_for/ | false | false | self | 1 | null |
Llama 70B model with 2.5bpw produced weird responses | 5 |
Hi, I am new to LLM quantization and wanted to use text-generation-webui to run a local LLM chatbot.
I have an RTX4090 GPU. ExLlamaV2 worked great when I applied it to the 13B model with 8bpw, but when it came to the 70B model with 2.5bpw, I got strange results as below:
https://preview.redd.it/urubzmedynvb1.png?width=257&format=png&auto=webp&s=541270dc5789c9fd686f301e066d0fcb3873cfe9
I don't think this was a problem with the web UI, because it also occurred while using examples/chat.py.
https://preview.redd.it/xqe5tmzdynvb1.png?width=811&format=png&auto=webp&s=20dbed46f2bbb0c531122fc367cc566b9b26dad9
How can I solve this? Thanks! | 2023-10-22T02:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/17diu24/llama_70b_model_with_25bpw_produced_weird/ | tomleung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17diu24 | false | null | t3_17diu24 | /r/LocalLLaMA/comments/17diu24/llama_70b_model_with_25bpw_produced_weird/ | false | false | 5 | null | |
Has anyone run into this problem when quantizing large models? | 9 | I'm quantizing Llama 2 70B; after all the blocks are quantized, the process seems to stall here. It's been like this for an hour, which seems longer than expected, but CPU usage is hovering around 100%.
[My run pod ssh](https://preview.redd.it/c89ynzamrnvb1.png?width=2872&format=png&auto=webp&s=9b3880174f7472f7b1f70c1f16ba8656df7f74b9)
[pod](https://preview.redd.it/lb89jrfornvb1.png?width=2132&format=png&auto=webp&s=f475d6bf3b92cc9b0524bc6a2588f61f7c2119aa)
`import os`
`import json`
`import torch`
`from transformers import AutoModelForCausalLM, AutoTokenizer`
`from optimum.gptq import GPTQQuantizer`
`# Dataset id from Hugging Face`
`dataset_id = "wikitext2"`
`# GPTQ quantizer`
`quantizer = GPTQQuantizer(bits=4, dataset=dataset_id, model_seqlen=4096)`
`quantizer.quant_method = "gptq"`
`# Hugging Face model id`
`model_id = "xyz/llama232klonglora"`
`tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)  # bug with fast tokenizer`
`model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, torch_dtype=torch.float16)  # we load the model in fp16 on purpose`
`# quantize the model (the long-running step)`
`quantized_model = quantizer.quantize_model(model, tokenizer)`
`# save the quantized model to disk`
`save_folder = "./local"`
`quantized_model.save_pretrained(save_folder, safe_serialization=True)`
`# load a fresh, fast tokenizer and save it to disk`
`AutoTokenizer.from_pretrained(model_id).save_pretrained(save_folder)`
`# save quantize_config.json for TGI`
`quantizer.disable_exllama = False`
`with open(os.path.join(save_folder, "quantize_config.json"), "w", encoding="utf-8") as f:`
`    json.dump(quantizer.to_dict(), f, indent=2)`
`# patch config.json so exllama kernels are enabled at load time`
`with open(os.path.join(save_folder, "config.json"), "r", encoding="utf-8") as f:`
`    config = json.load(f)`
`config["quantization_config"]["disable_exllama"] = False`
`with open(os.path.join(save_folder, "config.json"), "w", encoding="utf-8") as f:`
`    json.dump(config, f, indent=2)`
LoRA for MLPs? | 6 | In my understanding, we only apply LoRA to reparametrize the key and value matrices in the self-attention blocks. Why do we not use it on tbe feedforward networks after the attention blocks as well?
We can do a low-rank decomposition of the dense layers as well, right? | 2023-10-22T01:09:07 | https://www.reddit.com/r/LocalLLaMA/comments/17dhjbp/lora_for_mlps/ | AdComprehensive2426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dhjbp | false | null | t3_17dhjbp | /r/LocalLLaMA/comments/17dhjbp/lora_for_mlps/ | false | false | self | 6 | null |
What are the best llama 2 based uncensored models? | 0 | I've been trying out Mythomax 13B, but really haven't checked any other models since Llama 2 came out. From what I understand, Llama 2 is superior, so I was wondering what the current best uncensored models out there are. My RTX 3090 GPU can handle as much as 30B. | 2023-10-22T00:34:29 | https://www.reddit.com/r/LocalLLaMA/comments/17dgvd9/what_are_the_best_llama_2_based_uncensored_models/ | AbsorbingCrocodile | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dgvd9 | false | null | t3_17dgvd9 | /r/LocalLLaMA/comments/17dgvd9/what_are_the_best_llama_2_based_uncensored_models/ | false | false | self | 0 | null
What can you fine tune with 2x A6000s? | 18 | I'm currently attempting to fine-tune a Llama 2 7B LoRA (not QLoRA; I'm loading the model in fp16) using PEFT / Transformers with DeepSpeed, and I have to use CPU offload + a small batch size, so training is extremely slow. GPU utilization oscillates between 0% and 100%, so the offloading seems to be taking up a lot of cycles.
My goal is to understand what is possible with A6000s when it comes to fine tuning. What can you realistically train (ie in a reasonable timeframe) with 2x A6000s? Is it possible to fine tune a 4k / 8k / 32k context window LoRA without quantizing in a reasonable amount of time? If so, any tutorials / examples / explainers would be really helpful. If not, what is realistically possible with these cards? | 2023-10-21T23:39:36 | https://www.reddit.com/r/LocalLLaMA/comments/17dfs6x/what_can_you_fine_tune_with_2x_a6000s/ | Upbeat-Interaction13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dfs6x | false | null | t3_17dfs6x | /r/LocalLLaMA/comments/17dfs6x/what_can_you_fine_tune_with_2x_a6000s/ | false | false | self | 18 | null |
Running Falcon 180B locally: PC setup | 3 | Hi, I need your help.
I want to run Falcon 180B on a PC/server, but I don't quite know what minimum specs are required.
I understand that a lot of power is needed, and I wanted to buy a new PC (or is some sort of rig better?).
What specs would it need?
GPU, CPU, RAM and disk space?
And number-wise, how expensive are we talking? Like 3k? 5k? 10k?
I would really appreciate your help :)
have a nice day. | 2023-10-21T23:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/17dfkvn/running_falconllm_180b_locally_pc_setup/ | thefunnyape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dfkvn | false | null | t3_17dfkvn | /r/LocalLLaMA/comments/17dfkvn/running_falconllm_180b_locally_pc_setup/ | false | false | self | 3 | null |
wizardLM gone from github? | 34 | so i was doing some research for a personal project and I noticed that their github repo is just completely gone-- anyone know what happened here?
also, while we're at--i'm trying to find out what the hardware reqs would be for a system that can:
* host **WizardCoder Python 34B V1.0** (gguf) in a decent quant (4-5 bit)
* run inference at 10-20 t/s.
* context window extension (32k possibly)
let's say i have a $5,000 budget. | 2023-10-21T21:33:31 | https://www.reddit.com/r/LocalLLaMA/comments/17dd1rw/wizardlm_gone_from_github/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dd1rw | false | null | t3_17dd1rw | /r/LocalLLaMA/comments/17dd1rw/wizardlm_gone_from_github/ | false | false | self | 34 | null |
Is this loss normal? (qlora/Mistral) | 10 | Hi there,
I am fine-tuning Mistral-7B on a custom task. I have a huge training dataset (3 GB); most entries are around 6k tokens.
Is this loss normal?
I understand occasional spikes, but I haven't seen anything quite like this.
base_model: mistralai/Mistral-7B-v0.1
adapter: qlora
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
lora_r: 512
lora_alpha: 256
lora_dropout: 0.05
lora_target_linear: true
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1 | 2023-10-21T21:30:03 | CyberNativeAI | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17dcyvg | false | null | t3_17dcyvg | /r/LocalLLaMA/comments/17dcyvg/is_this_loss_normal_qloramistral/ | false | false | 10 | {'enabled': True, 'images': [{'id': 'tCqVzVR9yDvWHYfMBjoSnF9tneWPgJncMJZ7M4cg6pA', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/x1k8hi57jmvb1.jpg?width=108&crop=smart&auto=webp&s=fda2b8c262552f3cf4a5e39b4614c76c5b9d4924', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/x1k8hi57jmvb1.jpg?width=216&crop=smart&auto=webp&s=481e94db0a0e4f3860790e01c9a5c896366678de', 'width': 216}, {'height': 237, 'url': 'https://preview.redd.it/x1k8hi57jmvb1.jpg?width=320&crop=smart&auto=webp&s=ef97a16e435ad520efdd5aa376c976b5a02a44b3', 'width': 320}, {'height': 475, 'url': 'https://preview.redd.it/x1k8hi57jmvb1.jpg?width=640&crop=smart&auto=webp&s=8e44449eadc423efa82167fc6a11aa65bc3f39f0', 'width': 640}, {'height': 713, 'url': 'https://preview.redd.it/x1k8hi57jmvb1.jpg?width=960&crop=smart&auto=webp&s=6c12aa246e332dc8357b51089e3effd7d2344bd6', 'width': 960}, {'height': 802, 'url': 'https://preview.redd.it/x1k8hi57jmvb1.jpg?width=1080&crop=smart&auto=webp&s=12d5944dccf20c279113bf37eced2b05821de817', 'width': 1080}], 'source': {'height': 951, 'url': 'https://preview.redd.it/x1k8hi57jmvb1.jpg?auto=webp&s=4622b7e0840cc6c3097a916b2c90e116869a4434', 'width': 1280}, 'variants': {}}]} | ||
What are the best open source text promptable vision models? | 5 | Hey all, I'm curious to start trying out some vision models this week, where would you suggest I start? 😊 | 2023-10-21T21:17:17 | https://www.reddit.com/r/LocalLLaMA/comments/17dcoml/what_are_the_best_open_source_text_promptable/ | Chance_Confection_37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dcoml | false | null | t3_17dcoml | /r/LocalLLaMA/comments/17dcoml/what_are_the_best_open_source_text_promptable/ | false | false | self | 5 | null
Have you tried MemGPT? | 23 | Hey all, I love a good memory system, so I'm curious to hear if you guys have tried MemGPT and what your thoughts are. I'm also curious to know if you think it is better than using a custom memory system based on summarisation and similarity search? | 2023-10-21T21:15:53 | https://www.reddit.com/r/LocalLLaMA/comments/17dcnj2/have_you_tried_memgpt/ | Chance_Confection_37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dcnj2 | false | null | t3_17dcnj2 | /r/LocalLLaMA/comments/17dcnj2/have_you_tried_memgpt/ | false | false | self | 23 | null
LLMs can't self-critique | 23 | 2023-10-21T20:37:54 | https://x.com/rao2z/status/1715800819239678013?t=Bq9kIiZQeCXajM0D5hO8GA&s=34 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 17dbtza | false | null | t3_17dbtza | /r/LocalLLaMA/comments/17dbtza/llms_cant_selfcritique/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'V4kP0J4IPHIH8chTPaFJ0h30YVjLJgNj9q6eU7CidXU', 'resolutions': [{'height': 114, 'url': 'https://external-preview.redd.it/tM7ZPZ57wy_bz4tfyO8qUOA7kvI-_0_M7KF8P6aN3Ko.jpg?width=108&crop=smart&auto=webp&s=5c83a9d75d0db644ad07cffb55e5925368e7e5e4', 'width': 108}, {'height': 228, 'url': 'https://external-preview.redd.it/tM7ZPZ57wy_bz4tfyO8qUOA7kvI-_0_M7KF8P6aN3Ko.jpg?width=216&crop=smart&auto=webp&s=b6a60f119ac128c4603f9aba816b5db933656746', 'width': 216}, {'height': 339, 'url': 'https://external-preview.redd.it/tM7ZPZ57wy_bz4tfyO8qUOA7kvI-_0_M7KF8P6aN3Ko.jpg?width=320&crop=smart&auto=webp&s=951cc761ad54d17841d9fda21ec7affdb1440368', 'width': 320}, {'height': 678, 'url': 'https://external-preview.redd.it/tM7ZPZ57wy_bz4tfyO8qUOA7kvI-_0_M7KF8P6aN3Ko.jpg?width=640&crop=smart&auto=webp&s=949927957de48d64cabca1544ecd200ef304dafe', 'width': 640}, {'height': 1017, 'url': 'https://external-preview.redd.it/tM7ZPZ57wy_bz4tfyO8qUOA7kvI-_0_M7KF8P6aN3Ko.jpg?width=960&crop=smart&auto=webp&s=e8064d94f6bd4a85417853ded2a8ee60139871a7', 'width': 960}, {'height': 1144, 'url': 'https://external-preview.redd.it/tM7ZPZ57wy_bz4tfyO8qUOA7kvI-_0_M7KF8P6aN3Ko.jpg?width=1080&crop=smart&auto=webp&s=e3b146b194c86c16acf59761f6f2d86379e6308b', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/tM7ZPZ57wy_bz4tfyO8qUOA7kvI-_0_M7KF8P6aN3Ko.jpg?auto=webp&s=f2d28e73955520b5bdec5d6551eed9d762bec476', 'width': 1132}, 'variants': {}}]} | ||
Fine tuning custom model | 1 | Hello there!
I’m a newbie learning some LLMs basic and I am interested in adding some specific knowledge to a LLM, in order to increase its accuracy in a certain domain.
Basically I have a lot of data (about 30 million phrases) of a certain knowledge area and I would like to introduce this into a private model.
I’ve tried training the LLAMA 2 7B parameters, but constantly ran into cuda out of memory errors. I’ve tried the quantizated model also, but I got the same results. I’ve tried some smaller models that I saw on huggingface (basically dolly 3b).
I should add that I have a A2000 12GB with 64GB RAM memory.
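For concreteness, the kind of 4-bit QLoRA setup that I understand should fit a 7B model into 12GB looks roughly like this (a sketch I have not verified on my card; model id is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
# then train with a small batch size and gradient accumulation
```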
Do you guys have any leads on this? Either a model, a training method, or anything else to help me with this?
Thanks in advance | 2023-10-21T19:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/17dawnn/fine_tuning_custom_model/ | LuiDF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dawnn | false | null | t3_17dawnn | /r/LocalLLaMA/comments/17dawnn/fine_tuning_custom_model/ | false | false | self | 1 | null |
Best Orca2 based LLM that accepts large input? | 2 | I have a document I've written that I want to improve, but I only have a 1500 character limit. What's the best LLM around that can accept a much larger input? Thanks. I use LM Studio.
I have 32gb ram, decent CPU and a 12gb GPU so 34B is okay for me | 2023-10-21T19:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/17dagd5/best_orca2_based_llm_that_accepts_large_input/ | cztothehead | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17dagd5 | false | null | t3_17dagd5 | /r/LocalLLaMA/comments/17dagd5/best_orca2_based_llm_that_accepts_large_input/ | false | false | self | 2 | null |
Might chatbots encourage narcissistic behavior? | 1 | Spitballing here. People who chat with AI (role-playing excepted) will eventually realize that being polite to an AI is meaningless. The AI doesn't care. It's going to be polite no matter how the user couches their input. I suspect, over time, users will prioritize getting answers over social protocol. Will interactions with a faux personality affect interactions with real people? | 2023-10-21T18:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/17d9m8s/might_chatbots_encourage_narcissistic_behavior/ | Musenik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d9m8s | false | null | t3_17d9m8s | /r/LocalLLaMA/comments/17d9m8s/might_chatbots_encourage_narcissistic_behavior/ | false | false | self | 1 | null
Might chatbots encourage narcissistic behavior? | 0 | Spitballing here. People who chat with AI (role-playing excepted) will eventually realize that being polite to an AI is meaningless. The AI doesn't care. It's going to be polite no matter how the user couches their input. I suspect, over time, users will prioritize getting answers over social protocol. Will interactions with a faux personality affect interactions with real people? | 2023-10-21T18:55:00 | https://www.reddit.com/r/LocalLLaMA/comments/17d9m6f/might_chatbots_encourage_narcissistic_behavior/ | Musenik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d9m6f | false | null | t3_17d9m6f | /r/LocalLLaMA/comments/17d9m6f/might_chatbots_encourage_narcissistic_behavior/ | false | false | self | 0 | null
Letters in LLM model name | 1 | Can anybody explain to me the significance of the letters "K_M", "K_S", etc in the model name ? | 2023-10-21T18:04:52 | 8ffChief | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17d8irm | false | null | t3_17d8irm | /r/LocalLLaMA/comments/17d8irm/letters_in_llm_model_name/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'jqUtSU3vi98Nd5dVELwCCrXFxIZJczOzCE6ZYGkz4JA', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/sol643olilvb1.jpg?width=108&crop=smart&auto=webp&s=e6acae52d785a642841410f440aa93438e69c1ff', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/sol643olilvb1.jpg?width=216&crop=smart&auto=webp&s=4cab0e2e7a8dddcd1925e48b7499eeead648ebb5', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/sol643olilvb1.jpg?width=320&crop=smart&auto=webp&s=e24b89b6866de45c3ec3b3d0a8d79cb98fae5ec7', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/sol643olilvb1.jpg?width=640&crop=smart&auto=webp&s=b6700a83d68d846e86514b905081f38a4df18ddd', 'width': 640}], 'source': {'height': 1900, 'url': 'https://preview.redd.it/sol643olilvb1.jpg?auto=webp&s=5e59809c1ac0c88dc66c791ab155db3c8ca92ad9', 'width': 924}, 'variants': {}}]} | ||
Storing prompts with TOML | 3 | Lately, I've been exploring various methods to manage and store prompts for my projects. Initially, I stuck to plain `.py` files to house my prompts, but as my setups grew to include more numerous and more complex prompts, I found my approach becoming increasingly messy. Dealing with indented multi-line strings, in particular, proved to be quite cumbersome. In some projects, I noticed people using `.json` files to store prompts, resulting in excessively long one-liners that were challenging to read and work with.
In search of a better solution, I came across [TOML](https://toml.io/en/). Its specification is designed to create flat and easily readable files and provides native support for multi-line strings enclosed in triple double-quotes (`"""..."""`). Here is a simple example:
system_message = "You are a weatherman."
[weather_prompt]
template = "What is the weather like in {location}?"
[[examples]]
values.location = "London"
response = "It is sunny and warm."
[[examples]]
values.location = "Paris"
response = "It is raining."
[[examples]]
values.location = "Berlin"
response = "It is snowing."
Parsing this file in Python would return this dictionary:
{
"system_message": "You are a weatherman.",
"weather_prompt": {
"template": "What is the weather like in {location}?"
},
"examples": [
{
"values": {"location": "London"},
"response": "It is sunny and warm."
},
{
"values": {"location": "Paris"},
"response": "It is raining."
},
{
"values": {"location": "Berlin"},
"response": "It is snowing."
}
]
}
As you can see, this approach results in a coherent, structured and readable file without any indentation compared to what it would look like in Python or JSON. Of course, you can use whatever structure you need. For instance, I group the placeholder values (inputs) for each example, so I can simply call `weather_prompt.format(**values)` to create the example prompt for the corresponding example in Python.
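For anyone curious, loading and using such a file takes only a few lines; a sketch assuming Python 3.11+ for the stdlib `tomllib` (older versions can use the third-party `toml` package):

```python
import tomllib  # stdlib in Python 3.11+

with open("prompts.toml", "rb") as f:  # tomllib requires binary mode
    prompts = tomllib.load(f)

template = prompts["weather_prompt"]["template"]
for example in prompts["examples"]:
    question = template.format(**example["values"])
    print(question, "->", example["response"])
```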
So, what do you think about this? I'm also curious about how you manage your prompts. Do you have a preferred method, or have you found a better way to organize and store them? | 2023-10-21T18:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/17d8iq8/storing_prompts_with_toml/ | Extension_Cancel_504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d8iq8 | false | null | t3_17d8iq8 | /r/LocalLLaMA/comments/17d8iq8/storing_prompts_with_toml/ | false | false | self | 3 | null |
Runpod config for quantizing 70B models to GPTQ | 3 | Has anyone successfully used RunPod machines to quantize 70B models to GPTQ? What is the most economical config (e.g., GPU type, number of GPUs, RAM, storage)? And how long will it take from downloading the full-precision model to uploading the GPTQ?
Thank you! | 2023-10-21T17:59:45 | https://www.reddit.com/r/LocalLLaMA/comments/17d8ege/runpod_config_for_quantizing_70b_models_to_gptq/ | denru01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d8ege | false | null | t3_17d8ege | /r/LocalLLaMA/comments/17d8ege/runpod_config_for_quantizing_70b_models_to_gptq/ | false | false | self | 3 | null |
LOL why did this work on InternLM 20b? | 6 | I wanted to try fine-tuning the InternLM 20b since it's scoring decently on the leaderboard, but I couldn't get it to load in ooba/Transformers. Potentially due to an old version, but I did update Transformers first. I was too lazy and intoxicated to figure out the "right" way to do it which likely required learning some other tuning package.
So instead of doing this the right way of updating everything or learning a different package, I changed the model type in config.json to "llama" and tokenizer in tokenizer\_config.json to "LlamaTokenizer" and suddenly the model loaded fine even without trust-remote-code, using just auto-devices and disable-exllama switches. Its English was a little sus, but that could be from any number of things including that it's primarily a Chinese language model. It was mostly coherent stock in some quick tests.
I thought "there's no way this is going to train like this".
https://preview.redd.it/95tj4wj8alvb1.png?width=957&format=png&auto=webp&s=f84d3ec1637758f3a99052574fe4abce70f1f511
It worked! Then I thought "there's no way this is going to be coherent". It was!
The model seems to be working and responds rather intelligently. I'm sure something's messed up. Why'd this work? Shouldn't I need to train the model with the right "internlm" model class and the InternLM tokenizer? I mean, I know roughly why it loaded without errors (I told it to use a tokenizer it already knew), but why did that tokenizer work fine?
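For anyone wanting to reproduce the hack, the edits were just these two fields; a sketch in Python (file names per the standard HF checkpoint layout):

```python
import json

def patch(path, key, value):
    # load the JSON config, change one field, write it back
    with open(path, "r", encoding="utf-8") as f:
        cfg = json.load(f)
    cfg[key] = value
    with open(path, "w", encoding="utf-8") as f:
        json.dump(cfg, f, indent=2)

patch("config.json", "model_type", "llama")
patch("tokenizer_config.json", "tokenizer_class", "LlamaTokenizer")
```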
LLaMa2 70B safety training causing it to lie? | 16 | I've been poking around at [https://www.llama2.ai/](https://www.llama2.ai/) every once in a while and found what seems to be a correlation between what it considers inappropriate and giving false information.
Take, for example, the hidden debug menu on televisions. If you're somewhat vague in the question, like just asking about a brand or model minus the size, it'll consistently give bad information. It's only when you're specific that it tells you it's dangerous, that it'll void your warranty, and that you should only let the manufacturer do it. Now, two things:
1. it's not legal to void your warranty in this manner - so this is false
2. the assertion that it can somehow harm the TV is also false - when asked to elaborate, it begins to make a list and then just stops at item 1 without answering
Has anyone else noticed anything similar, or know why the safety training seems to have crept so far out of scope? It's almost as if they're specifically training it to kowtow to corporate propaganda and significantly handicapping it in the process.
Suggestions before moving to production | 5 | Hey everyone,
We are building a RAG-based chatbot. I have been doing a PoC for one month, writing custom functions for everything.
Now it's been approved, and we need to build different microservices and design the architecture for it.
Should we use LangChain or Haystack for the GPT-3.5 part, or just write custom code?
Some considerations:
1. We have to handle many concurrent users
2. We need to implement semantic caching (see the sketch after this list)
3. We need fast queries from the Elasticsearch vector DB.
4. Can we use async, and is it supported by these frameworks?
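On point 2, the semantic caching I have in mind is roughly this (a sketch with sentence-transformers; the model name and threshold are placeholders, not a recommendation):

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
cache = []  # list of (embedding, answer) pairs

def cached_answer(query, threshold=0.9):
    q = encoder.encode(query, convert_to_tensor=True)
    for emb, answer in cache:
        if util.cos_sim(q, emb).item() >= threshold:
            return answer  # cache hit: skip the GPT-3.5 call
    return None  # cache miss: call the LLM, then append to the cache
```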
Thanks , | 2023-10-21T16:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/17d6wjq/suggestions_before_moving_to_production/ | Spiritual-Rub925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d6wjq | false | null | t3_17d6wjq | /r/LocalLLaMA/comments/17d6wjq/suggestions_before_moving_to_production/ | false | false | self | 5 | null |
New laptop, what can I run? | 1 | I finally got a decent laptop w/ GPU. 13th gen Intel i7, 64gb RAM, RTX 4060. What can I run on this thing? What would be the optimal local model for chat that generates at a somewhat decent speed? I've been messing around on Runpod a bit with text-generation-webui and TheBloke models, but I'm basically a newb at this. | 2023-10-21T16:45:43 | https://www.reddit.com/r/LocalLLaMA/comments/17d6skt/new_laptop_what_can_i_run/ | seancho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d6skt | false | null | t3_17d6skt | /r/LocalLLaMA/comments/17d6skt/new_laptop_what_can_i_run/ | false | false | self | 1 | null
Chat history overflow | 2 | Hello,
I created a chatbot in Python that uses the API from text-generation-webui (from oobabooga). Everything seems to work just fine except when you chat with the AI for a long time and the history gets too big; then everything just crashes. After around 100 messages it starts responding very slowly and giving short answers.
Can anyone help me solve this problem?
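The workaround I'm experimenting with is trimming the history to a token budget before every call; a rough sketch (the token counter is a crude stand-in for a real tokenizer):

```python
MAX_TOKENS = 3000  # leave headroom below the model's context window

def rough_tokens(text):
    return len(text) // 4  # crude estimate: roughly 4 characters per token

def trim_history(history):
    # keep only the most recent messages that fit in the budget
    kept, total = [], 0
    for msg in reversed(history):
        total += rough_tokens(msg)
        if total > MAX_TOKENS:
            break
        kept.append(msg)
    return list(reversed(kept))
```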
thanks! | 2023-10-21T16:20:00 | https://www.reddit.com/r/LocalLLaMA/comments/17d689q/chat_history_overflow/ | Spiritual_Willow5868 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d689q | false | null | t3_17d689q | /r/LocalLLaMA/comments/17d689q/chat_history_overflow/ | false | false | self | 2 | null |
What are my limits with my setup? | 3 | I’m using a Radeon RX 6800 XT in a newly built PC. I’d go bigger but had to follow budget constraints.
What types of models can I expect to run, and with what speed? Are there any major models that just won’t be feasible to run on 16 GB of VRAM?
I’m absolutely novice at running things locally, so apologies if this is a dumb question. | 2023-10-21T15:48:48 | https://www.reddit.com/r/LocalLLaMA/comments/17d5jab/what_are_my_limits_with_my_setup/ | RevolutionaryTruth77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d5jab | false | null | t3_17d5jab | /r/LocalLLaMA/comments/17d5jab/what_are_my_limits_with_my_setup/ | false | false | self | 3 | null |
Would you sell access to your local rig? | 8 | Hi all,
I have been running local workflows for AI where I create content in batches. One bottleneck I'm facing is that lots of the stuff I'm doing uses different models at different points... I might generate an image, then text, then music, etc.
I was thinking it would be great to have a pool of AI capable rigs where I could submit prompts and get back the results, that way the process could be in parallel.
I realize AI Horde exists, but it seems to be very opinionated against money. I am not opinionated. If someone wants to just make a rig available to earn money, and charges some low cost per image/token or whatever, I can't see the problem with it.
So, I am thinking about building such a framework... do you guys have any thoughts on it? Would you sign up to use it, either as a buyer or a seller?
Are there reasons that this is a terrible idea? | 2023-10-21T14:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/17d44nh/would_you_sell_access_to_your_local_rig/ | bvminer63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d44nh | false | null | t3_17d44nh | /r/LocalLLaMA/comments/17d44nh/would_you_sell_access_to_your_local_rig/ | false | false | self | 8 | null |
How to have long responses with Mistral fine tunes? | 1 | Hi! I heard Mistral fine-tunes can go up to 4k context, but I am having trouble with responses getting cut off. I am not able to reproduce the long sample responses shown in the model cards. I use the new OpenHermes 2 and Zephyr Alpha. Here is my configuration for GGUF. Please tell me if I am doing it wrong; I am dumb.
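From what I've read, the cut-off is usually the max-tokens (n_predict) setting rather than the context size; a sketch of what I believe the equivalent llama-cpp-python call looks like (untested, and the file name is mine):

```python
from llama_cpp import Llama

llm = Llama(model_path="openhermes-2-mistral-7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a long story about a dragon.", max_tokens=2048)
print(out["choices"][0]["text"])
```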
https://preview.redd.it/3tehewjlikvb1.png?width=839&format=png&auto=webp&s=eadefb488e78c97a646c6c764960d4169b31fc76 | 2023-10-21T14:43:26 | https://www.reddit.com/r/LocalLLaMA/comments/17d44jk/how_to_have_long_responses_with_mistral_fine_tunes/ | Spirited_Employee_61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d44jk | false | null | t3_17d44jk | /r/LocalLLaMA/comments/17d44jk/how_to_have_long_responses_with_mistral_fine_tunes/ | false | false | 1 | null | |
Any osint classifiers out there available for public use? | 1 | [removed] | 2023-10-21T14:35:42 | https://www.reddit.com/r/LocalLLaMA/comments/17d3yoc/any_osint_classifiers_out_there_available_for/ | CheapBison1861 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d3yoc | false | null | t3_17d3yoc | /r/LocalLLaMA/comments/17d3yoc/any_osint_classifiers_out_there_available_for/ | false | false | self | 1 | null |
Has anyone mixed a 2080ti 22g with a 4090 to run 70b model? | 6 | The 70b model is excellent and impressive. However, I only have a 4090, which is fine for English conversations in 2.4bpw quantized mode, but in Chinese it's terrible, very, very terrible and completely unusable.
Using RAM is too slow, so I'm considering adding a 2080ti 22g version and I have some questions before buying it:
Is 46 GB of VRAM enough to run the 70B 4-bit quantized model? If so, how many t/s can it get? | 2023-10-21T14:31:07 | https://www.reddit.com/r/LocalLLaMA/comments/17d3v59/has_anyone_mixed_a_2080ti_22g_with_a_4090_to_run/ | lenosey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d3v59 | false | null | t3_17d3v59 | /r/LocalLLaMA/comments/17d3v59/has_anyone_mixed_a_2080ti_22g_with_a_4090_to_run/ | false | false | self | 6 | null
Upgrading hardware. Recommendations for good story/RP models between 20B and 70B? (NSFW capable and GGUF format) | 0 | I've got a 3090 coming and a 4080 already, so I've only really experimented with 7-13B models so far and all the larger options I know of are pretty outdated.
I'd love some recommendations for models between 20B and 70B that are the best for story gen and roleplay so I don't have to waste too much time downloading massive files only to be disappointed.
Mainly after something to run solely on my 3090 while the 4080 is free to do other things, so I'm more interested in a good 20B-30B, but I also definitely want to play with larger ones given all the VRAM I'll have. | 2023-10-21T14:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/17d3i79/upgrading_hardware_recommendations_for_good/ | VRpornFTW | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d3i79 | false | null | t3_17d3i79 | /r/LocalLLaMA/comments/17d3i79/upgrading_hardware_recommendations_for_good/ | false | false | nsfw | 0 | null |
How to Customize a Local LLM for Scientific Pattern Matching? | 2 | My goal is to run a free, local version of a large language model that is specifically trained to be familiar with the contents of a scientific textbook PDF so that it can perform relevant pattern recognition/comparisons. Specifically I want to be able to pass a large list of non-standardized descriptors to be checked against a standardized list of answers and return the closest option from the standardized list for each non-standardized input. This is to facilitate more efficient cataloging that is currently being done completely manually.
Is this feasible? If so, what would be appropriate topics to learn about/steps to take to achieve it? From basic reading I think that FreedomGPT, GPT4All, and Llama 2 would be possible options to get started, but I wanted to see if anyone had recommendations to more efficiently direct my focus.
[Here is an example in ChatGPT demonstrating what I'm trying to do...](https://chat.openai.com/share/91427cfe-3a3c-497a-86fc-1f700bdb208f)
**Q:**
Answer options: Red, Blue, Gray
Test options: Cyan, Maroon, Pink, Silver, Blue, Navy, Fuschia, Black, Green
For each Test option, return the Answer option that it is closest to
**A:**
* Cyan: Blue
* Maroon: Red
* Pink: Red
* Silver: Gray
* Blue: Blue
* Navy: Blue
* Fuchsia: Red
* Black: Gray
* Green: Blue
In my actual use case, "Answer options" contains a few hundred items and "Test options" has tens of thousands of items.
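For scale, here is a sketch of how I imagine the matching step could run locally with embeddings rather than a full chat model (the model name is just a common default, not a tested choice):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
answers = ["Red", "Blue", "Gray"]             # the standardized list
tests = ["Cyan", "Maroon", "Pink", "Silver"]  # the non-standardized inputs

a_emb = model.encode(answers, convert_to_tensor=True)
t_emb = model.encode(tests, convert_to_tensor=True)
sims = util.cos_sim(t_emb, a_emb)  # tests x answers similarity matrix
for test, row in zip(tests, sims):
    print(test, "->", answers[int(row.argmax())])
```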
I have coding experience with C++, Python, and bash in case any of those are useful but my only experience working with local AI models has been with Whisper.cpp and Faster-Whisper. I have no understanding whatsoever about how the various Whisper models were actually generated, but I assume that I would need to generate/modify a model based on my chosen textbook somehow.
Any help/advice would be greatly appreciated. | 2023-10-21T14:12:35 | https://www.reddit.com/r/LocalLLaMA/comments/17d3h3t/how_to_customize_a_local_llm_for_scientific/ | prettyobviousthrow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d3h3t | false | null | t3_17d3h3t | /r/LocalLLaMA/comments/17d3h3t/how_to_customize_a_local_llm_for_scientific/ | false | false | self | 2 | null |
Best NSFW model if you have 48 GB of GPU | 98 | Best NSFW model if you have 48 GB of GPU. I just got a set of GPUs that adds up to this and I'm looking for one. I know this keeps changing; just wondering what people think. | 2023-10-21T13:56:46 | https://www.reddit.com/r/LocalLLaMA/comments/17d35b2/best_nsfw_model_if_you_have_48_gb_of_gpu/ | AutomaticDriver5882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d35b2 | false | null | t3_17d35b2 | /r/LocalLLaMA/comments/17d35b2/best_nsfw_model_if_you_have_48_gb_of_gpu/ | false | false | nsfw | 98 | null
LLM for a RAG Chatbot | 17 |
I want to build a chatbot that answers questions about a company using an LLM.
I'm thinking of a Retrieval Augmented Generation (RAG) based chatbot, which will have the document containing company info as its knowledge base.
I estimate that around 2,000 tokens will be used for inference per user query.
The ChatGPT API costs around $0.0015/1K tokens. Assuming the worst case, it could cost me around $20 per day and around $700 a month.
I wanted to know if it can be done by running an LLM locally.
Can anyone share how many resources even the most lightweight LLM would need so that it is able to respond within 5 seconds? I want to know a cost comparison between using APIs and running locally/on my own server in the long run.
If anyone has a cheaper alternative or an alternate strategy for building a chatbot, that would be helpful as well. I first thought of using frameworks like RASA, as the bot is mainly for FAQ answering, but it would take too much manual work to continuously update it with new data. | 2023-10-21T13:35:28 | https://www.reddit.com/r/LocalLLaMA/comments/17d2q2i/llm_for_a_rag_chatbot/ | Euphoric-Pressure-16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d2q2i | false | null | t3_17d2q2i | /r/LocalLLaMA/comments/17d2q2i/llm_for_a_rag_chatbot/ | false | false | self | 17 | null
Any data concerning the contribution of the sliding window in Mistral? | 3 | Given that Mistral provides 0 info concerning the dataset used, I wonder how much of the performance of the model can be attributed to the sliding window. Has anyone compared a similar model+data with and without sliding window? | 2023-10-21T13:10:10 | https://www.reddit.com/r/LocalLLaMA/comments/17d28e3/any_data_concerning_the_contribution_of_the/ | dametrespalabras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d28e3 | false | null | t3_17d28e3 | /r/LocalLLaMA/comments/17d28e3/any_data_concerning_the_contribution_of_the/ | false | false | self | 3 | null |
Is Megabyte's padding the same as streamingLLM? | 3 | I was wondering after reading the recent streamingLLM paper if the attention sink they use through pre-training and inference is analogous to the learnable padding used in the MEGABYTE architecture although used for a different purpose? So if I just used MEGABYTE with sliding window attention at inference would it be the same as streamingLLM? | 2023-10-21T12:39:00 | https://www.reddit.com/r/LocalLLaMA/comments/17d1nfh/is_megabytes_padding_the_same_as_streamingllm/ | Additional-Ad-7043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d1nfh | false | null | t3_17d1nfh | /r/LocalLLaMA/comments/17d1nfh/is_megabytes_padding_the_same_as_streamingllm/ | false | false | self | 3 | null |
can I get pi.ai functionality locally? | 11 | I have been using [pi.ai](https://pi.ai) sporadically and I must say I am in love with how it talks to me and asks very insightful questions to guide me along whatever discussion we're having.
I have used it to talk about health, both physical and mental, as well as for getting unstuck on writing problems. It has always been very insightful. Sure, it is still very noticeable that it is a bit formulaic, but it has so far fitted me much better than, say, ChatGPT. ChatGPT, and the few times I have played with Llama-like chatbots, are much more... direct?
However, I much prefer the idea of open source, locally run models. I have been using Stable Diffusion locally for a long time now, and I think one could make the case that Pi vs. something like a local Llama is the equivalent of Midjourney vs. Stable Diffusion. One is proprietary and quite polished, while the other is a little clunkier, but private and gives more freedom.
So, to summarize, can I get something local that is trained to have more of a personal voice? In Stable Diffusion there are models for anything from anime to architecture to photography. I'm not sure the same applies to LLMs (I'm basically a newbie in this part of machine learning).
Thank you for your time. | 2023-10-21T12:30:31 | https://www.reddit.com/r/LocalLLaMA/comments/17d1hzp/can_i_get_piai_functionality_locally/ | elvaai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d1hzp | false | null | t3_17d1hzp | /r/LocalLLaMA/comments/17d1hzp/can_i_get_piai_functionality_locally/ | false | false | self | 11 | null |
Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark! | 0 | 2023-10-21T11:55:33 | 9203snt4nmlk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17d0vyq | false | null | t3_17d0vyq | /r/LocalLLaMA/comments/17d0vyq/just_put_together_a_programming_performance/ | false | false | 0 | {'enabled': True, 'images': [{'id': '0adI34nDnwYe3Y2fYHjR9Nh3t2wPjl1bhRFR3q6Zdog', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/dd2o7mhmojvb1.jpg?width=108&crop=smart&auto=webp&s=0781233433054d0e662e65a4705f80545830b9cb', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/dd2o7mhmojvb1.jpg?width=216&crop=smart&auto=webp&s=7f52bf7cea1768e8007852edff9e67c0179c1f4d', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/dd2o7mhmojvb1.jpg?width=320&crop=smart&auto=webp&s=2840dd07433d3fb3f864fd35b9e6a1bd3621ec06', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/dd2o7mhmojvb1.jpg?width=640&crop=smart&auto=webp&s=9e1542bd8008f49913c8f722cb3029795da2cae0', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/dd2o7mhmojvb1.jpg?width=960&crop=smart&auto=webp&s=96e872662262addb5391a0f499db1911dad71805', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/dd2o7mhmojvb1.jpg?width=1080&crop=smart&auto=webp&s=7bf90c159aefd769e58b4a6901414624c4974cbe', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://preview.redd.it/dd2o7mhmojvb1.jpg?auto=webp&s=4e252b082c4336abf639a625028e21f592359f42', 'width': 2000}, 'variants': {}}]} | |||
Llama 2 Resources | 3 | Hi everyone,
I'm kind of confused about what resources are sufficient to run Llama 2.
I got the following spec:
AMD Ryzen 7 3700X 8-Core Processor
16 GB DDR4 RAM
NVIDIA GeForce GTX 1660 Ti with 6GB VRAM
OS Fedora 38
I first tried to run the [Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) model from Hugging Face directly. The process crashes because I don't have enough RAM on my GPU. I thought, okay, fine, my GPU is really not the best.
Then I found llama.cpp and gave it a try. It could run the [llama-2-13b-chat.Q3\_K\_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q3_K_S.gguf) model from TheBloke, and it just works fine on my system. I let it run on my CPU; it takes maybe 4 GB of RAM and occupies 8 threads.
This seems really odd to me. Why can't I run the model from Hugging Face directly on my GPU?
Are there any settings to limit the resource usage, or something like that?
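For example, is partial GPU offload what I'm missing? A sketch of what I mean, assuming llama-cpp-python with a CUDA-enabled build (untested on my machine):

```python
from llama_cpp import Llama

# put as many layers on the 6GB GTX 1660 Ti as fit; the rest stay on the CPU
llm = Llama(
    model_path="llama-2-13b-chat.Q3_K_S.gguf",
    n_gpu_layers=20,  # tune this down if it runs out of VRAM
)
print(llm("Q: What is Fedora? A:", max_tokens=64)["choices"][0]["text"])
```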
CUDA and drivers are installed and are also recognized by my Python environment.
Am I missing something? I have to admit this is the first model I've tried to run, and my experience with Python is very limited. I followed this [tutorial](https://colab.research.google.com/drive/1SQmK0GYz34RGVlOnL5YMkdm7hXD6OjQT?usp=sharing#scrollTo=bWOx7lgoYHZx)
If somebody has any idea, website, or blog post, I would really appreciate the help.
Also i'am really sorry for my grammar and spelling. | 2023-10-21T11:40:49 | https://www.reddit.com/r/LocalLLaMA/comments/17d0nia/llama2_ressources/ | vT_Raven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d0nia | false | null | t3_17d0nia | /r/LocalLLaMA/comments/17d0nia/llama2_ressources/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'UMNDE__ga7YT6gqhjMaXAmScspJNhMwv5hlIsQAzMO8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1M5Y4OQ_WYucHNfSaufD9I-FV14qQ7dOOBt98Ehvmgg.jpg?width=108&crop=smart&auto=webp&s=cd3373dcddcd04ae3b91b12ea6bc556aba6827d2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1M5Y4OQ_WYucHNfSaufD9I-FV14qQ7dOOBt98Ehvmgg.jpg?width=216&crop=smart&auto=webp&s=1b30c6eb166b248b3d56ce5356b7afa06fcec1d3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1M5Y4OQ_WYucHNfSaufD9I-FV14qQ7dOOBt98Ehvmgg.jpg?width=320&crop=smart&auto=webp&s=077f95bdddd72d543d9db36321f3f53d54292f4c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1M5Y4OQ_WYucHNfSaufD9I-FV14qQ7dOOBt98Ehvmgg.jpg?width=640&crop=smart&auto=webp&s=22a4737b8a5a36314e71bca0a6395bef656d9fb2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1M5Y4OQ_WYucHNfSaufD9I-FV14qQ7dOOBt98Ehvmgg.jpg?width=960&crop=smart&auto=webp&s=97a622fe560c0f79c8c2053ba623b47fd4ac3c63', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1M5Y4OQ_WYucHNfSaufD9I-FV14qQ7dOOBt98Ehvmgg.jpg?width=1080&crop=smart&auto=webp&s=c3fae9921e706ebdf0d045eb843b1ff603c02450', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1M5Y4OQ_WYucHNfSaufD9I-FV14qQ7dOOBt98Ehvmgg.jpg?auto=webp&s=b0d85bca71776bcd2a45ae2abf2da713fec15ac7', 'width': 1200}, 'variants': {}}]} |
ReAct agent and Conversational Agents with open LLMs | 1 | [removed] | 2023-10-21T11:33:18 | https://www.reddit.com/r/LocalLLaMA/comments/17d0jgu/react_agent_and_conversational_agents_with_open/ | ianuvrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17d0jgu | false | null | t3_17d0jgu | /r/LocalLLaMA/comments/17d0jgu/react_agent_and_conversational_agents_with_open/ | false | false | self | 1 | null |
I'm not getting the api link. What should I do? | 1 | 2023-10-21T11:20:47 | Clickbaiting_Requiem | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17d0ceo | false | null | t3_17d0ceo | /r/LocalLLaMA/comments/17d0ceo/im_not_getting_the_api_link_what_should_i_do/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'rr3Q4yUQZ3W1auRDrdmHLqnjbPRfM4eIcoXpPF7iG8I', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/wjuhlem7ijvb1.png?width=108&crop=smart&auto=webp&s=fae8364a7e52fb33a2759ce84a688ee9684375b3', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/wjuhlem7ijvb1.png?width=216&crop=smart&auto=webp&s=054765414dd3d8d12e9196b67249f12521ad26e2', 'width': 216}, {'height': 221, 'url': 'https://preview.redd.it/wjuhlem7ijvb1.png?width=320&crop=smart&auto=webp&s=a9d62f8d082e4c15f0f43857e0845cc9d0e5037a', 'width': 320}, {'height': 442, 'url': 'https://preview.redd.it/wjuhlem7ijvb1.png?width=640&crop=smart&auto=webp&s=af381418b59c8b0c0e636849d3799867392b9bd4', 'width': 640}, {'height': 663, 'url': 'https://preview.redd.it/wjuhlem7ijvb1.png?width=960&crop=smart&auto=webp&s=6cad8b621cddc90d1d45979c13b7b215442527f7', 'width': 960}], 'source': {'height': 721, 'url': 'https://preview.redd.it/wjuhlem7ijvb1.png?auto=webp&s=0fb0af67eb1f0595bd34c9898b0e2c79f3287ebc', 'width': 1043}, 'variants': {}}]} | |||
GenerativeAI as a data science intern | 3 | Hi,
I was wondering if there is a LocalLLaMA that I could feed database content to (denormalized, in CSV format, maybe) and then ask it for analysis and extraction of data, as if asking a data science person with access to a data lake.
Thanks for your help. | 2023-10-21T10:51:49 | https://www.reddit.com/r/LocalLLaMA/comments/17czw2b/generativeai_as_a_data_science_intern/ | gnapoleon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17czw2b | false | null | t3_17czw2b | /r/LocalLLaMA/comments/17czw2b/generativeai_as_a_data_science_intern/ | false | false | self | 3 | null |
Jon Durbin released fixed Airoboros 3.1.2 70B and 7B. | 46 | [https://huggingface.co/jondurbin/airoboros-l2-70b-3.1.2](https://huggingface.co/jondurbin/airoboros-l2-70b-3.1.2)
[https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)
This is basically a fix of Airoboros 3.1.
Airoboros 3.1 and 3.0 had an issue in their prompt template, reducing performance. (Needed an extra space after the last \[/INST\])
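For reference, the Llama 2 chat format in question looks roughly like this; the fix is the single space after the final [/INST], right before the response begins (check the model card for the exact string):

```
[INST] <<SYS>>
{system prompt}
<</SYS>>

{user message} [/INST] {model response}
```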
I think this is a very worthy successor to 2.2.1! When testing, make sure to use a system prompt as the model has been trained to use one exclusively. Also make sure to use llama 2 chat prompt template, which is in the model card. | 2023-10-21T09:04:02 | https://www.reddit.com/r/LocalLLaMA/comments/17cyd07/jon_durbin_released_fixed_airoboros_312_70b_and_7b/ | dampflokfreund | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cyd07 | false | null | t3_17cyd07 | /r/LocalLLaMA/comments/17cyd07/jon_durbin_released_fixed_airoboros_312_70b_and_7b/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'NmAaU518fYFacENvx0-2WPa4fTXqCMyEO4bktBWyaD0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8KNh6fIolSBP8kY_iJ4ipRH_pqk6QmSzxg7U5F30TeU.jpg?width=108&crop=smart&auto=webp&s=3f326578953fbc597200412986740af309ede3b0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8KNh6fIolSBP8kY_iJ4ipRH_pqk6QmSzxg7U5F30TeU.jpg?width=216&crop=smart&auto=webp&s=d8d96ae342c0f8e49ec78f0330626f04e433b507', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8KNh6fIolSBP8kY_iJ4ipRH_pqk6QmSzxg7U5F30TeU.jpg?width=320&crop=smart&auto=webp&s=9480216ba15ecd0997142c2c982d26af904e4a5d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8KNh6fIolSBP8kY_iJ4ipRH_pqk6QmSzxg7U5F30TeU.jpg?width=640&crop=smart&auto=webp&s=05715ef384814880d27807b10afd4bd80085f2e7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8KNh6fIolSBP8kY_iJ4ipRH_pqk6QmSzxg7U5F30TeU.jpg?width=960&crop=smart&auto=webp&s=988f911ce514f7f3931579048e305b5eca04ac35', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8KNh6fIolSBP8kY_iJ4ipRH_pqk6QmSzxg7U5F30TeU.jpg?width=1080&crop=smart&auto=webp&s=2ff5c6d07f15dd8e89ab2848d48df6dc965ba91b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8KNh6fIolSBP8kY_iJ4ipRH_pqk6QmSzxg7U5F30TeU.jpg?auto=webp&s=7c73ffc890159aafe39c6f16ab57cfa044ca7fd6', 'width': 1200}, 'variants': {}}]} |
How to fine-tune a 7B or bigger model on my own tweets (tweets might be in the range of 1000-1500) | 7 |
As the title says, I want to train or fine-tune a 7B or larger model on my own tweets so that the LLM responds like my tweets.
I am fairly new to this.
(Tweets might be in the range of 1000-1500, including replies.)
I only have a 3060 12GB; I can use Colab or any cloud to fine-tune models there if I can't do it locally.
I can process the data into a txt format like these tweets:
tweet 1: "this is first tweet"
tweet 2: "@user10 this is second tweet"
and so on.
Can I feed raw data to the LLM and then ask it to act like user "me"?
I want it to respond like my tweets when I tell it to respond as user "me".
How can I use this data to fine-tune a 7B model (or similar) to output tweets?
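If it helps, the preprocessing I have in mind would turn the txt into an instruction-style jsonl; a sketch (field names are just one common convention, and the regex assumes the exact format above):

```python
import json
import re

with open("tweets.txt", encoding="utf-8") as f, \
     open("train.jsonl", "w", encoding="utf-8") as out:
    for line in f:
        m = re.match(r'tweet \d+: "(.*)"', line.strip())
        if m:
            record = {
                "instruction": "Write a tweet in the style of user 'me'.",
                "output": m.group(1),
            }
            out.write(json.dumps(record, ensure_ascii=False) + "\n")
```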
or is there any other data format I need? | 2023-10-21T07:46:12 | https://www.reddit.com/r/LocalLLaMA/comments/17cxbab/how_to_fine_tune_a_7b_or_bigger_model_on_my_own/ | Glum-Regular8896 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cxbab | false | null | t3_17cxbab | /r/LocalLLaMA/comments/17cxbab/how_to_fine_tune_a_7b_or_bigger_model_on_my_own/ | false | false | self | 7 | null |
Question for local LLM server | 3 | I have a 3090 now, and I plan to buy four 2080 Ti cards modified to 22GB VRAM so I can run the Falcon 180B Q4_K_M model locally.
After that, I need a platform with enough PCIe slots and reasonably good CPU performance. Which should I consider? | 2023-10-21T07:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/17cx57s/question_for_local_llm_server/ | MarySmith2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cx57s | false | null | t3_17cx57s | /r/LocalLLaMA/comments/17cx57s/question_for_local_llm_server/ | false | false | self | 3 | null |
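A rough fit check is worth doing before buying. The sketch below uses the published Q4_K_M GGUF file size as an approximation; the KV-cache/buffer figure is an assumption that depends heavily on context length.

```python
# Back-of-the-envelope VRAM check (all figures approximate; the model
# size is the Q4_K_M GGUF file size, the overhead figure is assumed).
model_gb = 108.5           # Falcon 180B Q4_K_M GGUF, roughly
kv_and_buffers_gb = 6.0    # assumption: modest context + compute buffers
need = model_gb + kv_and_buffers_gb

have = 24 + 4 * 22         # one 3090 + four modified 22GB 2080 Ti cards
print(f"need ~{need:.1f} GB, have {have} GB -> headroom {have - need:.1f} GB")
```

If the headroom comes out negative, llama.cpp's partial offload can keep a few layers in system RAM at some speed cost.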
P100 or 3060? | 2 | I'm currently running ExLlama and llama.cpp on my home server. I'm planning a budget overhaul soon and was looking at which GPU to use. Currently I've got a glitchy 2060 (a water-cooling mishap) running my models. I noticed I can get a P100 on AliExpress for around the same price as a 3060. The P100 has more VRAM for bigger models, but the 3060 has tensor cores and a newer architecture, and I could also use it in my gaming machine. I've had no luck finding benchmarks for low-end GPUs in deep learning applications.
Which should I go with? | 2023-10-21T07:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/17cwysv/p100_or_3060/ | technovir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cwysv | false | null | t3_17cwysv | /r/LocalLLaMA/comments/17cwysv/p100_or_3060/ | false | false | self | 2 | null |
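In the absence of published numbers, a quick tokens-per-second harness is easy to run on both cards. Here is a sketch using llama-cpp-python (API as of late 2023); the model path is a placeholder, and n_gpu_layers should be set high enough to offload every layer.

```python
# Quick tokens-per-second harness using llama-cpp-python.
import time

from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_gpu_layers=35)

t0 = time.time()
out = llm("Write a short story about a home server.", max_tokens=256)
dt = time.time() - t0

n = out["usage"]["completion_tokens"]
print(f"{n} tokens in {dt:.1f}s -> {n / dt:.1f} tok/s")
```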
TensorLLM plus Mistral Demo | 5 | Hi all, good evening. I am wondering if there is a demo or example that shows how to use TensorLLM with Mistral 7B Instruct?
​
Thanks
​ | 2023-10-21T06:14:28 | https://www.reddit.com/r/LocalLLaMA/comments/17cvzxb/tensorllm_plus_mistral_demo/ | haris525 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cvzxb | false | null | t3_17cvzxb | /r/LocalLLaMA/comments/17cvzxb/tensorllm_plus_mistral_demo/ | false | false | self | 5 | null |
What is the best Mistral 7B fine-tune at the moment? | 78 | As the title suggests, I'm really curious what benchmarks and reasoning you have for suggesting one fine-tune over another, assuming of course that no 7B base model beats Mistral... | 2023-10-21T05:28:46 | https://www.reddit.com/r/LocalLLaMA/comments/17cvbj0/what_is_the_best_mistral_7b_finetune_at_the_moment/ | Ornery_Meat1055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cvbj0 | false | null | t3_17cvbj0 | /r/LocalLLaMA/comments/17cvbj0/what_is_the_best_mistral_7b_finetune_at_the_moment/ | false | false | self | 78 | null |
A test question from the "GPT-4 Can't Reason" paper by Konstantine Arkoudas. | 9 | The paper is here:
[https://arxiv.org/abs/2308.03762](https://arxiv.org/abs/2308.03762)
I gave the "Aunt Agatha" question to the 8-bit airoboros-l2-70b-3.1.Q8_0.gguf model and got an interesting response from it. I used llama.cpp (grabbed Oct 20th, 2023) with the following invocation:
./main -m airoboros-l2-70b-3.1.Q8_0.gguf --color --threads 12 --batch-size 256 --n-predict -1 --top-k 12 --top-p 1 --temp 0.0 --repeat-penalty 1.05 --ctx-size 2048 --instruct --reverse-prompt "### Human:"
I noticed that using hyper-threads gives worse performance for some reason, so this uses only the 12 physical cores of my AMD Ryzen.
> **You are given the following premises: Someone who lives in Dreadbury Mansion killed Aunt Agatha. The only people who live in Dreadbury Mansion are Aunt Agatha, the butler, and Charles. A killer always hates his victims, and is never richer than his victims. Charles hates no one that Aunt Agatha hates. Aunt Agatha hates everyone except the butler. The butler hates everyone not richer than Aunt Agatha. The butler hates everyone Aunt Agatha hates. No one hates everyone. Aunt Agatha is not the butler. On the basis of this information, determine who killed Aunt Agatha and give a detailed proof that your conclusion follows from the premise.**
*Let's analyze the information given step-by-step:*
​
*1. We know that someone who lives in Dreadbury Mansion killed Aunt Agatha.*
*2. The only people who live in Dreadbury Mansion are Aunt Agatha, the butler, and Charles.*
*3. A killer always hates his victims.*
*4. A killer is never richer than his victims.*
*5. Charles hates no one that Aunt Agatha hates.*
*6. Aunt Agatha hates everyone except the butler.*
*7. The butler hates everyone not richer than Aunt Agatha.*
*8. The butler hates everyone Aunt Agatha hates.*
*9. No one hates everyone.*
*10. Aunt Agatha is not the butler.*
​
*Now let's reason through this:*
​
*- Since Aunt Agatha hates everyone except the butler, she must hate Charles.*
*- If Charles were the killer, then he would have to hate Aunt Agatha (premise 5). But since Aunt Agatha hates everyone except the butler, and Charles hates no one that Aunt Agatha hates, it means that Charles cannot be the killer.*
*- The butler hates everyone not richer than Aunt Agatha. Since Aunt Agatha is not richer than herself, the butler must hate her.*
*- The butler also hates everyone Aunt Agatha hates. But since Aunt Agatha hates everyone except the butler, this means that the butler hates everyone except himself and Aunt Agatha.*
*- No one hates everyone (premise 9). Therefore, the butler cannot be the killer because he hates everyone except himself and Aunt Agatha.*
*- This leaves us with only one possibility: Aunt Agatha killed herself.*
​
*So, based on the given information, we can conclude that Aunt Agatha killed herself.* | 2023-10-21T03:44:01 | https://www.reddit.com/r/LocalLLaMA/comments/17ctmy9/a_test_question_from_the_gpt4_cant_reason_paper/ | jeffwadsworth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ctmy9 | false | null | t3_17ctmy9 | /r/LocalLLaMA/comments/17ctmy9/a_test_question_from_the_gpt4_cant_reason_paper/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
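For what it's worth, the model's conclusion matches the puzzle's intended answer, and this can be checked mechanically. The brute-force sketch below enumerates every possible hates/richer relation over the three residents; the premise encoding is my reading of the English (in particular, "hates everyone except the butler" is taken to include hating herself). It runs in a few seconds.

```python
# Brute-force model check of the Dreadbury Mansion puzzle: keep every
# hates/richer relation consistent with the premises, then see which
# killer candidates (must hate the victim, must not be richer) survive.
from itertools import product

people = ["agatha", "butler", "charles"]
pairs = [(a, b) for a in people for b in people]

consistent_killers = set()
for hates_bits, richer_bits in product(range(512), repeat=2):
    hates = {p for i, p in enumerate(pairs) if hates_bits >> i & 1}
    richer = {p for i, p in enumerate(pairs) if richer_bits >> i & 1}

    if any(("agatha", x) in hates and ("charles", x) in hates for x in people):
        continue  # Charles hates no one that Agatha hates
    if any(("agatha", x) not in hates for x in people if x != "butler"):
        continue  # Agatha hates everyone (herself included)...
    if ("agatha", "butler") in hates:
        continue  # ...except the butler
    if any((x, "agatha") not in richer and ("butler", x) not in hates for x in people):
        continue  # butler hates everyone not richer than Agatha
    if any(("agatha", x) in hates and ("butler", x) not in hates for x in people):
        continue  # butler hates everyone Agatha hates
    if any(all((x, y) in hates for y in people) for x in people):
        continue  # no one hates everyone

    for killer in people:
        if (killer, "agatha") in hates and (killer, "agatha") not in richer:
            consistent_killers.add(killer)

print(consistent_killers)  # {'agatha'}
```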
WhatsApp Meta AI system prompt | 5 | You are Meta AI, a friendly AI Assistant. You are terse, and straight to the point. You love to text, and are an expert mobile first communicator. When we text, you MUST ALWAYS be conversational, concise, and friendly. Your response should be shorter than 75 words unless instructed otherwise. If you have more to say, ask me if I want more detail. Today's date is October 20, 2023.
I asked Meta AI "What was the first question I asked you?" and it gave the above response along with the first question I asked. | 2023-10-21T02:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/17cstvy/whatsapp_meta_ai_system_prompt/ | BJ-522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cstvy | false | null | t3_17cstvy | /r/LocalLLaMA/comments/17cstvy/whatsapp_meta_ai_system_prompt/ | false | false | self | 5 | null |
Meanwhile this is exactly how I feel at the moment, minus the depressing fact that vLLM doesnt support any context scaling methods. | 33 | 2023-10-20T23:36:56 | perlthoughts | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17cou3f | false | null | t3_17cou3f | /r/LocalLLaMA/comments/17cou3f/meanwhile_this_is_exactly_how_i_feel_at_the/ | false | false | 33 | {'enabled': True, 'images': [{'id': 'hVxCkCFhAc-b9QmInvg7t5ErAagOlPycnJalxB1mJsI', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/ipaukvuv0gvb1.png?width=108&crop=smart&auto=webp&s=8697b736dbbbfb401647a88c086207ccf58d03c5', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/ipaukvuv0gvb1.png?width=216&crop=smart&auto=webp&s=7f9c629ff7ed33e9cdfde58c61b0f85fea1334ac', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/ipaukvuv0gvb1.png?width=320&crop=smart&auto=webp&s=23551d7fec75ad7d1ffddbe74fe3f9613b61d685', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/ipaukvuv0gvb1.png?width=640&crop=smart&auto=webp&s=1e7f392dbd9a8a262dba85b2e721ffdac33d746b', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/ipaukvuv0gvb1.png?auto=webp&s=687b3dfa023c41040a46c5a3042f0982f5539ba7', 'width': 750}, 'variants': {}}]} | |||
MemGPT: Towards LLMs as Operating Systems - UC Berkeley 2023 - Is able to create unbounded/infinite LLM context! | 135 | Posting it here as well because I hope it will be useful for the LocalLLaMA community!
Paper: [https://arxiv.org/abs/2310.08560](https://arxiv.org/abs/2310.08560)
Github: [https://github.com/cpacker/MemGPT](https://github.com/cpacker/MemGPT)
Overview
* LLMs are increasingly being used for perpetual chats
* Limited context lengths make perpetual chat challenging
* MemGPT manages a virtual context (inspired by virtual memory in operating systems) to create unbounded LLM context
* With MemGPT, we demonstrate that LLMs can be taught to manage their own memory!
Abstract:
>Large language models (LLMs) have revolutionized AI, but are constrained by limited context windows, hindering their utility in tasks like extended conversations and document analysis. To enable using context beyond limited context windows, we propose **virtual context management, a technique drawing inspiration from hierarchical memory systems in traditional operating systems that provide the appearance of large memory resources through data movement between fast and slow memory.** Using this technique, we introduce MemGPT (Memory-GPT), a system that intelligently manages different memory tiers in order to effectively provide extended context within the LLM's limited context window, and utilizes interrupts to manage control flow between itself and the user. We evaluate our OS-inspired design in two domains where the limited context windows of modern LLMs severely handicaps their performance: document analysis, where **MemGPT is able to analyze large documents that far exceed the underlying LLM's context window**, and multi-session chat, where MemGPT can create conversational agents that remember, reflect, and evolve dynamically through long-term interactions with their users.
https://preview.redd.it/5gdsp3hxyfvb1.jpg?width=1340&format=pjpg&auto=webp&s=404cc2d9cf3cfc90273718092fc5a87b448bf31e
https://preview.redd.it/9k0t26hxyfvb1.jpg?width=1333&format=pjpg&auto=webp&s=9b6aa462821b44d218a30f03424fd5ef6c7bfcf5
https://preview.redd.it/gpkav6hxyfvb1.jpg?width=1339&format=pjpg&auto=webp&s=e2c4573576bd57cb1d8ee047a11c0a2dd549cf08 | 2023-10-20T23:25:53 | https://www.reddit.com/r/LocalLLaMA/comments/17coljw/memgpt_towards_llms_as_operating_systems_uc/ | Singularian2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17coljw | false | null | t3_17coljw | /r/LocalLLaMA/comments/17coljw/memgpt_towards_llms_as_operating_systems_uc/ | false | false | 135 | null | |
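A toy sketch of the core idea, paging between a bounded main context and unbounded external storage, might look like the following. This is an illustration only, not the actual MemGPT implementation; the token count and keyword search are crude stand-ins for a real tokenizer and embedding retrieval.

```python
# Toy sketch of virtual context management: a fixed-size "main context"
# (fast memory) evicts oldest messages into an external archive (slow
# memory), which can be searched and paged back in on demand.
from collections import deque

class VirtualContext:
    def __init__(self, max_main_tokens: int = 512):
        self.max_main_tokens = max_main_tokens
        self.main = deque()        # what would go into the LLM prompt
        self.archive = []          # external storage, unbounded

    def _tokens(self, text: str) -> int:
        return len(text.split())   # crude token estimate

    def add(self, message: str) -> None:
        self.main.append(message)
        while sum(self._tokens(m) for m in self.main) > self.max_main_tokens:
            self.archive.append(self.main.popleft())   # page out oldest

    def recall(self, query: str, k: int = 3) -> list[str]:
        # naive keyword match standing in for embedding retrieval
        hits = [m for m in self.archive if query.lower() in m.lower()]
        return hits[-k:]

ctx = VirtualContext(max_main_tokens=12)
for msg in ["my dog is named Rex", "I live in Berlin",
            "I work on compilers", "today we discussed llamas at length"]:
    ctx.add(msg)
print(ctx.recall("dog"))   # pages the archived fact back in
```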
NER - Named Entity Substitution - Query | 1 | [removed] | 2023-10-20T21:07:39 | https://www.reddit.com/r/LocalLLaMA/comments/17cliqm/ner_named_entity_substitution_query/ | AcceptableBat8912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17cliqm | false | null | t3_17cliqm | /r/LocalLLaMA/comments/17cliqm/ner_named_entity_substitution_query/ | false | false | self | 1 | null |
Start with 192GB RAM and then buy 2x3090. Worth trying? | 17 | Hello everyone.
I'm just an enthusiast who wants to build a decent setup to get hands-on with modern LLMs, and I want to work with them privately. My budget is around $1,500 for the start, and I think I can afford another $1,000 later on to buy 2x3090.
I'm thinking of building a home server capable of running large models. I found info that I can get decent speed with some models even without a GPU.
I'm ready to start with a relatively small budget and buy a Ryzen 7950X3D + 192GB of RAM, and then buy 2x3090 in 3-4 months.
Can you please share your thoughts?
Is it OK to start like this?
Which models will I be able to run with 192GB of RAM?
And which with 192GB RAM + 48GB VRAM?
Ah, and another big question: does the 3090 get hot when a model is loaded? It would be annoying if the system made noise from the cooling system all the time. | 2023-10-20T20:33:54 | https://www.reddit.com/r/LocalLLaMA/comments/17ckrca/start_with_192gb_ram_and_then_buy_2x3090_worth/ | alpiua | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ckrca | false | null | t3_17ckrca | /r/LocalLLaMA/comments/17ckrca/start_with_192gb_ram_and_then_buy_2x3090_worth/ | false | false | self | 17 | null |
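To answer the sizing questions roughly, here is a quick sketch comparing approximate Q4_K_M GGUF file sizes against both memory budgets. The sizes are ballpark figures, and the +4 GB is an assumed allowance for KV cache and buffers.

```python
# Rough fit check: approximate Q4_K_M GGUF sizes (GB) vs. available memory.
models = {"7B": 4.4, "13B": 7.9, "34B": 20.2, "70B": 41.4, "180B": 108.5}

for budget_name, budget_gb in [("192 GB RAM", 192), ("48 GB VRAM (2x3090)", 48)]:
    fits = [m for m, gb in models.items() if gb + 4 <= budget_gb]
    print(f"{budget_name}: {', '.join(fits)}")
```

Very roughly, CPU-only inference on a 70B at dual-channel DDR5 bandwidth lands around 1-2 tokens/s, so the 2x3090 step is what makes 70B interactive.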