**Help Needed** (score: 1, posted by Dragneel_passingby, 2023-12-23)

[removed]
**My setup for using ROCm with RX 6700XT GPU on Linux** (score: 31)

Some people have asked me to share my setup for running LLMs using ROCm, so here I am with a guide (sorry I'm late). I chose the RX 6700XT because it's a relatively cheap GPU with 12GB VRAM and decent performance (related discussion here if anyone is interested: https://www.reddit.com/r/LocalLLaMA/comments/16efcr1/3060ti_vs_rx6700_xt_which_is_better_for_llama/)
Some things I should tell you before I dive into the guide:

- This guide takes a lot of material from this post: https://www.reddit.com/r/LocalLLaMA/comments/14btvqs/7900xtx_linux_exllama_gptq/. Hence, I suspect this guide will also work for consumer GPUs better and/or newer than the 6700XT.

- This guide is specific to Ubuntu. I do not know how to use ROCm on Windows.

- The versions of drivers, OS, and libraries I use in this guide are about four months old, so there's probably an update for each one. Sticking to my versions will hopefully work for you; however, I can't troubleshoot version combinations different from my own setup. Hopefully other users can share their knowledge about the different combinations they've tried.

- During the last four months, AMD may have developed easier ways to achieve this setup. If anyone has a more streamlined way, please share it with us; I would like to know.

- I use Exllama (the first one) for inference on ~13B-parameter 4-bit quantized LLMs. I also use ComfyUI for running Stable Diffusion XL.
Okay, here's my setup:
1. Download and install Radeon driver for Ubuntu 22.04: [https://www.amd.com/en/support/graphics/amd-radeon-6000-series/amd-radeon-6700-series/amd-radeon-rx-6700-xt](https://www.amd.com/en/support/graphics/amd-radeon-6000-series/amd-radeon-6700-series/amd-radeon-rx-6700-xt)
2. Download installer script for ROCm 5.6.1 using:
*$ sudo apt update*
*$ wget https://repo.radeon.com/amdgpu-install/5.6.1/ubuntu/jammy/amdgpu-install_5.6.50601-1_all.deb*

*$ sudo apt install ./amdgpu-install_5.6.50601-1_all.deb*
3. Install ROCm using:
*$ sudo amdgpu-install --usecase=rocm*
4. Add user to these user groups:
*$ sudo usermod -a -G video $USER*
*$ sudo usermod -a -G render $USER*
5. Restart the computer and check that the `rocminfo` terminal command works. When the command runs, you should see information like the following:
    ...
    *******
    Agent 2
    *******
      Name:                    gfx1030
      Uuid:                    GPU-XX
      Marketing Name:          AMD Radeon RX 6700 XT
      Vendor Name:             AMD
      Feature:                 KERNEL_DISPATCH
    ...
6. (Optional) Create a virtual environment to hold Python packages. I personally use conda.
*$ conda create --name py39 python=3.9*
*$ conda activate py39*
7. Run the following to install the ROCm-supported versions of PyTorch and related libraries:
*$ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6/*
8. **IMPORTANT!** Run this command in the terminal (it tells ROCm to treat the 6700XT's gfx1031 chip as the officially supported gfx1030; note that `export` only applies to the current shell session):

*$ export HSA_OVERRIDE_GFX_VERSION=10.3.0*
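Since an `export` set in one terminal doesn't survive a reboot or a new shell, here's a small sketch to make it stick (assuming bash; adjust the rc file name for other shells):

```shell
# Append the override to ~/.bashrc once, so every new shell sets it automatically.
grep -qs 'HSA_OVERRIDE_GFX_VERSION' ~/.bashrc || \
  echo 'export HSA_OVERRIDE_GFX_VERSION=10.3.0' >> ~/.bashrc
# The current shell still needs it set explicitly:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
echo "$HSA_OVERRIDE_GFX_VERSION"
```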
9. Git clone whichever repo you want (e.g. Exllama, ComfyUI, etc.) and try running inference. If you get an error that says `<cmath>` is missing, run:
*$ sudo apt install libstdc++-12-dev*
That's it. I hope this helps someone.
​ | 2023-12-23T01:50:25 | https://www.reddit.com/r/LocalLLaMA/comments/18ourt4/my_setup_for_using_rocm_with_rx_6700xt_gpu_on/ | AgeOfAlgorithms | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ourt4 | false | null | t3_18ourt4 | /r/LocalLLaMA/comments/18ourt4/my_setup_for_using_rocm_with_rx_6700xt_gpu_on/ | false | false | self | 31 | null |
**Dual K80 performance?** (score: 2)

When I was into cryptomining, there were mining calculators that would estimate what certain cards would earn you over time. Is there a database of tokens-per-second performance on different GPUs and models?

I'm interested in building a sub-500 machine for miscellaneous stuff and want to be able to run big things.

*Posted by thebadslime, 2023-12-23*
**Repetition Sampler Settings -- Thoughts?** (score: 5)

What does everyone prefer to use for their repetition sampler settings, especially through SillyTavern? We have repetition penalty, frequency penalty, presence penalty, and no-repeat ngram size to work with. Do you prefer to run just one, or do you favor a combination? Have you found a killer combination that produces good results across various models?
I find that most models (70b+ range) come alive with a little frequency penalty. 0.01 is like adding a dash of salt to a dish. Going beyond 0.01 gets dicey, but 0.01 seems to be nice for introducing some variety.
I haven't decided what I think about presence penalty. It seems to do *something*, but I can't tell whether it's doing anything that I couldn't achieve with the repetition penalty setting. They seem like they produce the same result, even if by differing mechanisms, but maybe I'm just doing it wrong?
Finally, I haven't yet found a good use case for no-repeat ngram. I heard somewhere that a value of 16 works well, but it doesn't seem to make that much of a difference, and lower values seem to make too much of a difference.
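For anyone wanting to reason about these knobs, here's a minimal sketch of how the three penalties typically modify the raw logits (illustrative pure Python; actual backends differ in details such as which window of previous tokens counts):

```python
def apply_penalties(logits, prev_tokens, rep_pen=1.0, freq_pen=0.0, pres_pen=0.0):
    """Penalize tokens that already appeared in prev_tokens.

    rep_pen  : divides a positive logit (or multiplies a negative one), so its
               effect depends on the logit's sign but not on the repeat count
    freq_pen : subtracts count * freq_pen, growing with each repetition
    pres_pen : subtracts a flat pres_pen once the token has appeared at all
    """
    counts = {}
    for t in prev_tokens:
        counts[t] = counts.get(t, 0) + 1
    out = list(logits)
    for t, c in counts.items():
        out[t] = out[t] / rep_pen if out[t] > 0 else out[t] * rep_pen
        out[t] -= c * freq_pen + pres_pen
    return out

# token 0 appeared twice, token 2 once; token 1 is untouched
print(apply_penalties([2.0, -1.0, 0.5], [0, 0, 2], rep_pen=1.1, freq_pen=0.01, pres_pen=0.1))
```

The practical difference visible here is that frequency penalty scales with the repetition count while presence penalty is a one-time nudge, which may be why the two feel interchangeable at low values.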
I'm curious to learn what others are running for these crucial settings.

*Posted by sophosympatheia, 2023-12-23*
**mistral-ft-optimized feels like another large step up.** (score: 126)

It's super good at everything and very obedient, my new fav.

I love how they start by saying that model merging seems like a really weird and counterintuitive idea that you would not think would work at all well (exactly how I feel), but then they go on to say it's actually crazy good: they tried 9000 combinations and one stood right out.

I'm in love; it's gonna take a lot to make me switch again.
https://openpipe.ai/blog/mistral-7b-fine-tune-optimized

*Posted by Revolutionalredstone, 2023-12-23*
**p40/p100** (score: 2)

What's giving more performance right now: a P100 running ExLlamaV2/FP16, or a P40 running whatever it is it runs?

Also, can you mix them for inferencing, like you see people doing with consumer cards?

Thanks in advance.

[View Poll](https://www.reddit.com/poll/18ou0pc)

*Posted by Dyonizius, 2023-12-23*
**Anyone got some good resources for working with llama.cpp in Python?** (score: 4)

The documentation here: https://llama-cpp-python.readthedocs.io/en/latest/#adjusting-the-context-window is pretty bare-bones, and I'm trying to figure out the class attributes and such.

I'm wondering if anyone knows of anything that could help disambiguate everything. It's fun to dive through, but there must be something out there that can help accelerate the process. A written guide, a video, whatever; just something that goes a little beyond the bare use cases would be greatly appreciated.

*Posted by Nano_9a9o, 2023-12-23*
**What tools are you using to run LLMs locally?** (score: 16)

I set up Ollama on my Mac and I am running Mistral 7B and Microsoft Phi (2.7B) locally on it.

I was going to run it on my PC, but I'm also curious what other tooling is available for training/RAG, so I can do more than just run the model.

What tooling are you using? LM Studio? PowerInfer? llama.cpp? Talk about what you are using for local LLMs or similar work like RAG and fine-tuning.

*Posted by HarkonnenSpice, 2023-12-22*
**Mixtral MoE ELI5: How are the responses a higher quality than a 7b?** (score: 74)

I have two general high-level understandings of Mixtral: it is 8 x 7b models, and it uses two at a time to improve performance, so instead of running like a 56b it's really running like a 14b.

Now, I've seen lots of people saying it produces higher-quality results than Llama 2 70b... but I can't understand that. I could understand the model producing better results than the Mistral 7b, because it's eight of them combined, and maybe it uses some logic to have each trained on something specific... assuming it even works that way. But better than a 70b?

In general, I have found that fine-tuned 7b models perform worse than even general 70b models. If I put a coding fine-tuned Mistral against XWin 70b, which isn't coding fine-tuned, I'd likely get better results out of the 70b on coding questions. So having eight 7b models still seems like you'd just be getting 7b responses... right?

Clearly not right, given everyone's experience so far with these models, so that's why I'm confused lol. Could someone smarter than me break it down? =D

*Posted by SomeOddCodeGuy, 2023-12-22*
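For intuition, here is a rough sketch of the top-2 routing idea in pure Python (illustrative only: in Mixtral the router is a learned linear layer inside every transformer block, the "experts" are feed-forward sub-layers rather than whole models, and the function names here are made up):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router_scores, k=2):
    """Route input x to the top-k experts and blend their outputs.

    experts       : callables standing in for the 8 expert FFNs
    router_scores : one affinity score per expert (stand-in for a learned router)
    """
    # pick the k experts the router is most confident about
    topk = sorted(range(len(experts)), key=lambda i: router_scores[i], reverse=True)[:k]
    # renormalize only the selected scores over the top-k set
    weights = softmax([router_scores[i] for i in topk])
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

# toy experts: each just scales its input by a different factor
experts = [lambda x, s=s: s * x for s in range(1, 9)]
print(moe_forward(2.0, experts, router_scores=[0.1, 3.0, 0.2, 2.0, 0.0, 0.0, 0.0, 0.0]))
```

Only 2 of the 8 experts run per token per layer, so the compute cost looks like a ~13B model, while the router gets to pick from the full ~47B of trained parameters (per Mistral's published figures), and different tokens can hit different experts. That's why quality can exceed what any single 7b-sized path would suggest.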
**Issue converting a Falcon 7b-based model from HuggingFace to GGUF with llama.cpp** (score: 5)

Hello everyone. Thanks for taking the time to read my post.

I'm new to this field, so please go easy on me. I was trying to use the only Spanish-focused model I found, "[Aguila-7b](https://huggingface.co/projecte-aina/aguila-7b)", as the base model for localGPT, in order to experiment with some legal PDF documents (I'm a lawyer exploring generative AI for legal work).

Since I am going to run this on my PC, I am trying to convert the model to GGUF and then quantize it. And that's where the problems started.

*I run all this on a virtual machine with Ubuntu Server 23.04, with 12 CPU cores and 32GB RAM assigned. The host machine has a Xeon E5-2670v3 CPU with 64GB RAM.*
For converting the model to GGUF I followed these steps:

- Git cloned llama.cpp, built it, and installed the requirements.

- Git cloned the model.

- Ran this to convert to GGUF:

    python convert-hf-to-gguf.py --outtype f32 ../models/aguila-7b

(this is the path where I cloned the model)

That generated a file named *ggml-model-f32.gguf*.

- Then ran this to do the quantization:

    ./quantize ../models/aguila-7b/ggml-model-f32.gguf ../models/aguila-7b/aguila-tb.Q8_0.gguf Q8_0

That generated a file named *aguila-tb.Q8_0.gguf* (as that was the name I chose).

Both convert-hf-to-gguf.py and quantize finished with no issues. At least nothing was reported in the terminal output.
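Since both tools can finish quietly even when something is off, one quick sanity check is to read the output file's header (a minimal sketch; the only assumption is the documented GGUF header layout: the 4-byte magic `GGUF` followed by a little-endian u32 version):

```python
import struct

def gguf_header(path):
    """Return (magic, version) from the first 8 bytes of a GGUF file."""
    with open(path, "rb") as f:
        magic = f.read(4)
        (version,) = struct.unpack("<I", f.read(4))
    return magic, version

# Demo with a dummy file standing in for ../models/aguila-7b/aguila-tb.Q8_0.gguf:
with open("/tmp/dummy.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))
print(gguf_header("/tmp/dummy.gguf"))
```

On the real file this should return `(b'GGUF', 3)` for a GGUF V3 file, matching what the loader reports.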
Then I tried to run the generated model file, to check that everything was OK:

    ./main -m ../models/aguila-7b/aguila-tb.Q8_0.gguf -n 128

After that, I got an error.

Any ideas of what I might be doing wrong, and how to correct my process?

This is the complete output with the error:
    Log start
    main: build = 1663 (799fc22)
    main: built with cc (Ubuntu 12.3.0-1ubuntu1~23.04) 12.3.0 for x86_64-linux-gnu
    main: seed  = 1703288271
    llama_model_loader: loaded meta data with 21 key-value pairs and 195 tensors from ../models/aguila-7b/aguila-tb.Q8_0.gguf (version GGUF V3 (latest))
    llama_model_loader: - tensor    0:      blk.0.attn_norm.bias f32   [  4544,     1,     1,     1 ]
    llama_model_loader: - tensor    1:    blk.0.attn_norm.weight f32   [  4544,     1,     1,     1 ]
    llama_model_loader: - tensor    2:     blk.0.ffn_down.weight q8_0  [ 18176,  4544,     1,     1 ]
    llama_model_loader: - tensor    3:       blk.0.ffn_up.weight q8_0  [  4544, 18176,     1,     1 ]
    llama_model_loader: - tensor    4:  blk.0.attn_output.weight q8_0  [  4544,  4544,     1,     1 ]
    llama_model_loader: - tensor    5:     blk.0.attn_qkv.weight q8_0  [  4544,  4672,     1,     1 ]
    [... tensors 6-192 trimmed for length: the same attn_norm / attn_qkv / attn_output / ffn_up / ffn_down tensors repeat for blocks 1-31, plus token_embd.weight q8_0 [ 4544, 50257, 1, 1 ] ...]
    llama_model_loader: - tensor  193:          output_norm.bias f32   [  4544,     1,     1,     1 ]
    llama_model_loader: - tensor  194:        output_norm.weight f32   [  4544,     1,     1,     1 ]
    llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
    llama_model_loader: - kv   0: general.architecture str = falcon
    llama_model_loader: - kv   1: general.name str = Falcon
    llama_model_loader: - kv   2: falcon.context_length u32 = 2048
    llama_model_loader: - kv   3: falcon.tensor_data_layout str = jploski
    llama_model_loader: - kv   4: falcon.embedding_length u32 = 4544
    llama_model_loader: - kv   5: falcon.feed_forward_length u32 = 18176
    llama_model_loader: - kv   6: falcon.block_count u32 = 32
    llama_model_loader: - kv   7: falcon.attention.head_count u32 = 71
    llama_model_loader: - kv   8: falcon.attention.head_count_kv u32 = 1
    llama_model_loader: - kv   9: falcon.attention.layer_norm_epsilon f32 = 0.000010
    llama_model_loader: - kv  10: general.file_type u32 = 7
    llama_model_loader: - kv  11: tokenizer.ggml.model str = gpt2
    llama_model_loader: - kv  12: tokenizer.ggml.tokens arr[str,50257] = ["!", "\"", "#", "$", "%", "&", "'", ...
    llama_model_loader: - kv  13: tokenizer.ggml.token_type arr[i32,50257] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
    llama_model_loader: - kv  14: tokenizer.ggml.merges arr[str,50000] = ["Ġ d", "e s", "e n", "Ġd e", "Ġ l...
    llama_model_loader: - kv  15: tokenizer.ggml.bos_token_id u32 = 50256
    llama_model_loader: - kv  16: tokenizer.ggml.eos_token_id u32 = 50256
    llama_model_loader: - kv  17: tokenizer.ggml.unknown_token_id u32 = 50256
    llama_model_loader: - kv  18: tokenizer.ggml.padding_token_id u32 = 50256
    llama_model_loader: - kv  19: tokenizer.ggml.add_bos_token bool = false
    llama_model_loader: - kv  20: general.quantization_version u32 = 2
    llama_model_loader: - type  f32:   66 tensors
    llama_model_loader: - type q8_0:  129 tensors
    llm_load_vocab: special tokens definition check successful ( 1/50257 ).
    llm_load_print_meta: format         = GGUF V3 (latest)
    llm_load_print_meta: arch           = falcon
    llm_load_print_meta: vocab type     = BPE
    llm_load_print_meta: n_vocab        = 50257
    llm_load_print_meta: n_merges       = 50000
    llm_load_print_meta: n_ctx_train    = 2048
    llm_load_print_meta: n_embd         = 4544
    llm_load_print_meta: n_head         = 71
    llm_load_print_meta: n_head_kv      = 1
    llm_load_print_meta: n_layer        = 32
    llm_load_print_meta: n_rot          = 64
    llm_load_print_meta: n_gqa          = 71
    llm_load_print_meta: f_norm_eps     = 1.0e-05
    llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 18176
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 6.85 B
llm_load_print_meta: model size = 6.78 GiB (8.50 BPW)
llm_load_print_meta: general.name = Falcon
llm_load_print_meta: BOS token = 50256 '<|endoftext|>'
llm_load_print_meta: EOS token = 50256 '<|endoftext|>'
llm_load_print_meta: UNK token = 50256 '<|endoftext|>'
llm_load_print_meta: PAD token = 50256 '<|endoftext|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_tensors: ggml ctx size = 0.07 MiB
error loading model: create_tensor: tensor 'output.weight' not found
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '../models/aguila-7b/aguila-tb.Q8_0.gguf'
main: error: unable to load model
​ | 2023-12-22T23:44:41 | https://www.reddit.com/r/LocalLLaMA/comments/18osana/issue_converting_a_falcon_7b_based_model_from/ | Christ0ph_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18osana | false | null | t3_18osana | /r/LocalLLaMA/comments/18osana/issue_converting_a_falcon_7b_based_model_from/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'Gf1BvLYso-ZWo2o3aBoX2rQE2MbDdXKTyVfKwL7HnBU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RQ6XtgidynTC1HY9x-n8zVQyVkzuQZvOwGDoCpDxVko.jpg?width=108&crop=smart&auto=webp&s=080ac2e9765b7afcb10b841eb788e05fb9d33d7a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RQ6XtgidynTC1HY9x-n8zVQyVkzuQZvOwGDoCpDxVko.jpg?width=216&crop=smart&auto=webp&s=e9df7f9016496cfba52b2ac870f9699f19907a5e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RQ6XtgidynTC1HY9x-n8zVQyVkzuQZvOwGDoCpDxVko.jpg?width=320&crop=smart&auto=webp&s=ffc9e7342aebc2fd624f5d14ab356d2f5a907355', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RQ6XtgidynTC1HY9x-n8zVQyVkzuQZvOwGDoCpDxVko.jpg?width=640&crop=smart&auto=webp&s=f6153ebc74479b6dc9fae817630f41ad8ea1571a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RQ6XtgidynTC1HY9x-n8zVQyVkzuQZvOwGDoCpDxVko.jpg?width=960&crop=smart&auto=webp&s=9b0c535952b87ef1448bb61e0250270923c87e2b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RQ6XtgidynTC1HY9x-n8zVQyVkzuQZvOwGDoCpDxVko.jpg?width=1080&crop=smart&auto=webp&s=f25250f62c3451a8fddb69a109c452b86fabb605', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RQ6XtgidynTC1HY9x-n8zVQyVkzuQZvOwGDoCpDxVko.jpg?auto=webp&s=40d88eee7838d1638606f7272dbceb0f5b6fc317', 'width': 1200}, 'variants': {}}]} |
Mistral Medium on Poe | 7 | Seems to have little to no censorship (gave an audience suitability warning before writing NSFW content) [https://poe.com/Mistral-Medium](https://poe.com/Mistral-Medium) | 2023-12-22T23:43:44 | https://www.reddit.com/r/LocalLLaMA/comments/18os9zf/mistral_medium_on_poe/ | WaterdanceAC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18os9zf | false | null | t3_18os9zf | /r/LocalLLaMA/comments/18os9zf/mistral_medium_on_poe/ | false | false | self | 7 | null |
In a test, Mixtral Q4_0 is better than Llama 2 70B, Gemini Pro, Claude 2 and is on par with GPT-4! This model has made me optimistic about the future of open-source LLMs | 6 | 2023-12-22T23:23:41 | https://www.reddit.com/gallery/18orv70 | nderstand2grow | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18orv70 | false | null | t3_18orv70 | /r/LocalLLaMA/comments/18orv70/in_a_test_mixtral_q4_0_is_better_than_llama_2_70b/ | false | false | 6 | null | ||
Anyone have experience running mistral 8x7b with 48 gb of unified memory? | 5 | For anyone with the standard M3 Max 48 GB, I was wondering if you could share your experience running Mistral 8x7B using Metal. From what I’ve read, Apple seems to limit the amount of VRAM to 75% of the total unified memory, so I’m assuming 36 GB will be available, which is enough to run the 5-bit quantization, but this is cutting it rather close.
Just some context: I have a friend who works for Apple and will be visiting Portland (0% sales tax) soon, so they can purchase the M3 Max for me at a huge discount. Ordinarily, I would get the 64 GB version, but unfortunately it will take too long to arrive, and I don’t think it’s worth the extra $1000. | 2023-12-22T23:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/18oro1i/anyone_have_experience_running_mistral_8x7b_with/ | complexityzero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oro1i | false | null | t3_18oro1i | /r/LocalLLaMA/comments/18oro1i/anyone_have_experience_running_mistral_8x7b_with/ | false | false | self | 5 | null |
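The arithmetic in the post above can be sanity-checked with a rough back-of-the-envelope sketch. The 46.7B parameter count, the ~5.5 bits/weight figure for a Q5_K_M quant, and the 75% VRAM cap are all assumptions taken from commonly cited community numbers, not authoritative specs:

```python
# Rough fit check: quantized model weights vs. macOS's default VRAM cap.
# All constants below are assumptions (Mixtral ~46.7B total params,
# Q5_K_M ~5.5 bits/weight, default cap ~75% of unified memory).

def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, ignoring KV cache/overhead."""
    return params_billion * bits_per_weight / 8

def fits(total_ram_gb: float, params_billion: float, bits_per_weight: float,
         vram_fraction: float = 0.75) -> bool:
    return model_size_gb(params_billion, bits_per_weight) <= total_ram_gb * vram_fraction

weights = model_size_gb(46.7, 5.5)   # ~32.1 GB of weights
budget = 48 * 0.75                   # ~36 GB usable VRAM
print(f"{weights:.1f} GB of weights vs {budget:.0f} GB budget")
print(fits(48, 46.7, 5.5))           # fits, but with little headroom
```

Context and runtime buffers then eat into the remaining few GB, which matches the "cutting it rather close" observation.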
oobabooga/webui/openapi: is it possible to use "start reply with" with the API? | 1 | Hi, I'm trying to use the API from oobabooga's repo.
In the webui, there is a way to force the model to start a sentence with some tokens, it can be used in all modes (chat, chat-instruct, instruct). Do users of this API know if there's a way to easily do this with [the API](https://github.com/oobabooga/text-generation-webui/wiki/12-‐-OpenAI-API)?
It seems other people had this issue but without a clear answer: [https://github.com/oobabooga/text-generation-webui/issues/3814](https://github.com/oobabooga/text-generation-webui/issues/3814) | 2023-12-22T22:49:08 | https://www.reddit.com/r/LocalLLaMA/comments/18or4lz/oobaboogawebuiopenapi_is_it_possible_to_use_start/ | IntelArtiGen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18or4lz | false | null | t3_18or4lz | /r/LocalLLaMA/comments/18or4lz/oobaboogawebuiopenapi_is_it_possible_to_use_start/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'f2AC_sInspxC58l52vL1a3i-BvmDYOFsEDjsMZmszcU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9Qrgl6lEDen3-J2d3cBZBrvrwBfYHXxyacdPN-m3SCc.jpg?width=108&crop=smart&auto=webp&s=7b756860693980e1386ac4e8c6778adc8f4d80af', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9Qrgl6lEDen3-J2d3cBZBrvrwBfYHXxyacdPN-m3SCc.jpg?width=216&crop=smart&auto=webp&s=8e0344762949304edc95c6267f0318dc415a828f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9Qrgl6lEDen3-J2d3cBZBrvrwBfYHXxyacdPN-m3SCc.jpg?width=320&crop=smart&auto=webp&s=44288f10a1dbb64a6c60bec88c2fcfdb5ed8503c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9Qrgl6lEDen3-J2d3cBZBrvrwBfYHXxyacdPN-m3SCc.jpg?width=640&crop=smart&auto=webp&s=0136fb8531450e8859b466f63b97cce69e7108ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9Qrgl6lEDen3-J2d3cBZBrvrwBfYHXxyacdPN-m3SCc.jpg?width=960&crop=smart&auto=webp&s=e8e301500b74ad57ac074d26ad2cc0c00377e203', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9Qrgl6lEDen3-J2d3cBZBrvrwBfYHXxyacdPN-m3SCc.jpg?width=1080&crop=smart&auto=webp&s=6c187c8b633d658d565ba3d213b3f4dc534d9317', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9Qrgl6lEDen3-J2d3cBZBrvrwBfYHXxyacdPN-m3SCc.jpg?auto=webp&s=e60a06b0f5cff658cda30be7d3456ad8c7744f01', 'width': 
1200}, 'variants': {}}]} |
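One workaround often suggested for the question above: skip the chat endpoint and use the raw completions endpoint, where you control the full prompt and can simply end it with the text the reply must start with. The sketch below only builds the request body; the instruction template and stop string are illustrative placeholders, not the ones your particular model expects:

```python
import json

def forced_reply_payload(user_msg: str, reply_prefix: str) -> dict:
    """Build an OpenAI-style /v1/completions body whose prompt ends with the
    forced reply prefix, so the model continues from it."""
    prompt = (
        "### Instruction:\n" + user_msg + "\n"
        "### Response:\n" + reply_prefix
    )
    return {"prompt": prompt, "max_tokens": 200, "stop": ["### Instruction:"]}

payload = forced_reply_payload("List three colors.", "Sure, here they are:")
print(json.dumps(payload, indent=2))
# POST this to the server's /v1/completions endpoint; remember to prepend
# reply_prefix to the returned completion text yourself.
```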
LlamaCPP python server vs LlamaCPP python class | 1 | Hi all,
I noticed a significant difference in performance between using the API of the LlamaCPP Python server and the LlamaCPP Python class (llm = LlamaCPP(...)) with the same model.
Using the API is so much faster.
Why is this, or am I missing something in the configuration of the class?
Ps: I’m using CPU only. | 2023-12-22T22:17:38 | https://www.reddit.com/r/LocalLLaMA/comments/18oqgpr/llamacpp_python_server_vs_llamacpp_python_class/ | e-nigmaNL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oqgpr | false | null | t3_18oqgpr | /r/LocalLLaMA/comments/18oqgpr/llamacpp_python_server_vs_llamacpp_python_class/ | false | false | self | 1 | null |
I downloaded dolphin-mixtral:latest on my mac, is there any way to delete it? | 1 | I downloaded it to see how it was but now that I am finished with it, I can't find a way to delete it. | 2023-12-22T21:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/18opudc/i_downloaded_dolphinmixtrallatest_on_my_mac_is/ | fungusdickhead | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18opudc | false | null | t3_18opudc | /r/LocalLLaMA/comments/18opudc/i_downloaded_dolphinmixtrallatest_on_my_mac_is/ | false | false | self | 1 | null |
Does ChatGPT/GPT4 give accurate answers for the subject of LLM models and running them? Or is the subject too new & answers are unreliable? | 1 | I've been asking it a lot of questions about how models work, various concepts/things mentioned in the docs of models or in their config files - but those of you who actually know what you're talking about - does it give accurate answers on these topics or are they too new for it to be able to provide useful/reliable info? | 2023-12-22T21:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/18opoqi/does_chatgptgpt4_give_accurate_answers_for_the/ | TopRecognition9302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18opoqi | false | null | t3_18opoqi | /r/LocalLLaMA/comments/18opoqi/does_chatgptgpt4_give_accurate_answers_for_the/ | false | false | self | 1 | null |
Does it make sense to pair an RTX 4090 with a 16GB 4060TI to get 40GB of VRAM? | 13 | Exactly what it says on the tin. Has anyone tried this? How is model performance affected when split across 2 different GPUs?
Looking at the price of 3090s this seems to be the least expensive way to increase my model capacity locally. | 2023-12-22T21:40:18 | https://www.reddit.com/r/LocalLLaMA/comments/18opntr/does_it_make_sense_to_pair_an_rtx_4090_with_a/ | antialtinian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18opntr | false | null | t3_18opntr | /r/LocalLLaMA/comments/18opntr/does_it_make_sense_to_pair_an_rtx_4090_with_a/ | false | false | self | 13 | null |
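For what it's worth, mixed-VRAM pairs are usually handled by giving each card a share of layers proportional to its memory — llama.cpp's tensor-split option, for example, takes such proportions. A small sketch of the arithmetic (the flag usage in the comment is from memory, so double-check it against the docs):

```python
def tensor_split(vram_gb: list) -> list:
    """Per-GPU proportions so each card holds a VRAM-proportional slice."""
    total = sum(vram_gb)
    return [round(v / total, 2) for v in vram_gb]

# 4090 (24 GB) + 4060 Ti (16 GB) -> 40 GB pool, split 60/40
print(tensor_split([24, 16]))  # [0.6, 0.4]
```

The split answers the capacity question; per-token throughput tends to be gated by the slower card for the layers it owns.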
VRAM vs raw computing power vs other features - how do these impact performance? What aspects are significant and when? | 9 | I'm new to this and trying to understand how the various aspects of running these local LLaMAs affect performance.
For example, is VRAM only needed for speed? Or does processing power also matter? Does the processing power of each card individually matter - or only the primary one? (e.g. 3x P40 vs 1x 3090 + 2x P40 vs 3x 3090) Does CPU performance, RAM speed (DDR4 vs DDR5), or SSD speed matter at all after the initial load?
I've also seen posts saying to only go for Tesla P40s and not K/M40s because they're older cards that aren't supported by a lot of software, or because they're really 2 GPUs with 12 GB each. How does that fit in? Is that significant for running inference? Why?
I've also seen posts saying that P40s can't do certain floating-point operations well. I've only been running GGUF models - usually at Q4 - so far, using Kobold AI - would that apply to these? Some posts also made it sound like this means they're going to be unusable or far too inefficient in the near future - is that true, or is it only for certain use cases like running non-quantized models?
Also anyone got video recommendations that explain these well? I'm reading docs and forums to try and figure this out but currently a lot of it flies over my head. | 2023-12-22T21:31:54 | https://www.reddit.com/r/LocalLLaMA/comments/18ophi2/vram_vs_raw_computing_power_vs_other_features_how/ | TopRecognition9302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ophi2 | false | null | t3_18ophi2 | /r/LocalLLaMA/comments/18ophi2/vram_vs_raw_computing_power_vs_other_features_how/ | false | false | self | 9 | null |
Are there any guides for using triton inference server with something like Oobabooga? | 2 | I’d like to try using triton to centralize my models and maybe make better use of resources but I can’t seem to find any specific info on setting it up. | 2023-12-22T21:28:10 | https://www.reddit.com/r/LocalLLaMA/comments/18opejx/are_there_any_guides_for_using_triton_inference/ | Jcwscience | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18opejx | false | null | t3_18opejx | /r/LocalLLaMA/comments/18opejx/are_there_any_guides_for_using_triton_inference/ | false | false | self | 2 | null |
Good model for code generation? | 6 | Out of the following list: codellama, phind-codellama, wizardcoder, deepseek-coder, codeup & starcoder.
Which model out of this list would be the best for code generation? More specifically, (modern) PHP and its Laravel framework, JavaScript and some styling (TailwindCSS and such).
Thanks in advance. | 2023-12-22T21:07:45 | https://www.reddit.com/r/LocalLLaMA/comments/18ooz25/good_model_for_code_generation/ | s7mbre | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ooz25 | false | null | t3_18ooz25 | /r/LocalLLaMA/comments/18ooz25/good_model_for_code_generation/ | false | false | self | 6 | null |
GPUs and Local Large Language models, Are there Possible Tricks for the big ones? AMD vs Nvidia? NVLink? Rocm? XGMI? | 3 | I'd like to know what you guys tried...
I want to someday develop an AI-powered project using one or a few of the Mistral-based models (probably Mixtral 8x7B at this point), and in order to develop it I wanted to figure out what hardware to actually buy.
My budget is not great, but a bit of waiting and saving may solve that since I do have a job. Still, I don't want to throw money at a company just because I heard they're the cool kids on the block.
Because of this, I wanted to hear about your journey and whether you figured out anything beyond simply getting the big A100 GPU because you can (good for you if you can, though).
First of all, we are talking about hosting a model that is going to take >30GB of VRAM.
We do care about performance, but honestly the model's size is our biggest concern right now. We probably won't train the model for now, because that would likely require roughly double the compute, which we don't have.
So, what about getting a bunch of NVIDIA 3070s and NVLink them together? Limitations?
Otherwise, even the possibility of splitting a very large model into layers and letting multiple cheaper machines do the work would be good (like many Raspberry Pis, or many PCs each mounting a 3050 Ti, for redundancy).
Also, since Mixtral 8x7B is made up of experts, maybe... just maybe, multiple machines could each be responsible for one expert at a time? Perhaps connecting the machines over Gigabit Ethernet could work...
Otherwise, did anyone try the slightly cheaper AMD GPUs with ROCm? I know it runs on Linux only, but that is fine. Can I do an NVLink-like thing there?
I have so many questions, but I cannot find enough answers. Please tell me. | 2023-12-22T21:04:26 | https://www.reddit.com/r/LocalLLaMA/comments/18oowgq/gpus_and_local_large_language_models_are_there/ | gabrielesilinic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oowgq | false | null | t3_18oowgq | /r/LocalLLaMA/comments/18oowgq/gpus_and_local_large_language_models_are_there/ | false | false | self | 3 | null |
How To Finetune Mixtral On Consumer Hardware | 1 | [removed] | 2023-12-22T20:28:43 | https://www.reddit.com/r/LocalLLaMA/comments/18oo4qy/how_to_finetune_mixtral_on_consumer_hardware/ | l33thaxman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oo4qy | false | null | t3_18oo4qy | /r/LocalLLaMA/comments/18oo4qy/how_to_finetune_mixtral_on_consumer_hardware/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5TBLrZK3iIM1cLJevLBm4J6h-X_V8kgyzetDkZyX2eY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rVVy07OBzXJzNvCcMHGN3qzkz3qAkYb6gjhFOTy5waE.jpg?width=108&crop=smart&auto=webp&s=2437d55dea74ed25a45026a47bc800fad06bd066', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rVVy07OBzXJzNvCcMHGN3qzkz3qAkYb6gjhFOTy5waE.jpg?width=216&crop=smart&auto=webp&s=529c030e13817399c1ba27c031a444c7db3a4e71', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rVVy07OBzXJzNvCcMHGN3qzkz3qAkYb6gjhFOTy5waE.jpg?width=320&crop=smart&auto=webp&s=bd6838dc90629834d52f84956f7dfd7c382f626e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rVVy07OBzXJzNvCcMHGN3qzkz3qAkYb6gjhFOTy5waE.jpg?auto=webp&s=023592f4df29305bcfd7f92cc38326f284ed9045', 'width': 480}, 'variants': {}}]} |
Cheapest 7b GPU Inference Setup? | 6 | Right now I'm using runpod, colab or inference APIs for GPU inference. I've tried CPU inference and it's a little too slow for my use cases.
If I had to put together a PC purely for GPU inference (7b models), what's the cheapest setup I can have? Cheapest = both in terms of purchase cost and power utilization.
I was thinking of something like:
- Celeron processor (since I'll be offloading all layers to GPU)
- 8-16 GB RAM
- AMD RX 580 8GB (I've heard about a 16GB Chinese variant - are these legit?)
Is anyone using a similar setup? | 2023-12-22T20:11:54 | https://www.reddit.com/r/LocalLLaMA/comments/18onsb6/cheapest_7b_gpu_inference_setup/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18onsb6 | false | null | t3_18onsb6 | /r/LocalLLaMA/comments/18onsb6/cheapest_7b_gpu_inference_setup/ | false | false | self | 6 | null |
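A useful rule of thumb when pricing inference-only GPUs: single-stream decode speed is roughly bounded by memory bandwidth divided by model size, since every generated token streams all the weights. The RX 580 bandwidth and Q4 model size below are ballpark figures, not measurements:

```python
def decode_ceiling_tps(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on tokens/sec: each generated token reads every weight once."""
    return bandwidth_gb_s / model_size_gb

# RX 580: ~256 GB/s memory bandwidth; 7B at Q4_K_M: ~4.1 GB of weights
print(f"~{decode_ceiling_tps(256, 4.1):.0f} t/s theoretical ceiling")
# Real-world numbers land well under this, but it ranks candidate cards cheaply.
```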
LLM to Brainstorm Videogame Quests (Rtx 4090) | 2 | Hello,
I'm quite new to LLMs, and I have a use case where I'd like a local model to help brainstorm quests and characters, and maybe make a first draft of character dialogue. The game is serious, though it is not 18+ NSFW.
My computer has a 4090 with 24 GB of VRAM. Offloading to the GPU gets me an OK token speed.
I've just started using LM Studio, since it was pretty easy to install, and get models with.
**I wonder, however, if that's the best interface to use (is LM Studio considered ok), and which models would be good at being creative and consistent.** Some smaller models wildly hallucinate new characters.
I'm on Psyfighter 13B, but I'm sure this isn't the best I could do. Mixtral's the new hotness, though I don't know if it can run on LM Studio.
Any tricks from veteran users to a LLM newbie? I'm too new to even be able to pinpoint the exact questions I'd need! | 2023-12-22T19:46:41 | https://www.reddit.com/r/LocalLLaMA/comments/18on8qx/llm_to_brainstorm_videogame_quests_rtx_4090/ | PwanaZana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18on8qx | false | null | t3_18on8qx | /r/LocalLLaMA/comments/18on8qx/llm_to_brainstorm_videogame_quests_rtx_4090/ | false | false | self | 2 | null |
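One trick that helps with the hallucinated-characters problem described above, regardless of interface: pin the allowed cast and the output structure in the prompt itself. Everything in this sketch — names, fields, wording — is invented for illustration:

```python
def quest_prompt(setting: str, characters: list, tone: str) -> str:
    """Build a constrained brainstorming prompt that fixes the cast up front."""
    cast = "\n".join(f"- {c}" for c in characters)
    return (
        f"You are a quest designer for a {tone} game set in {setting}.\n"
        f"Use ONLY these characters (invent no others):\n{cast}\n"
        "Propose one quest: a hook, three steps, and a reward."
    )

print(quest_prompt("a flooded city", ["Mira the diver", "Old Tomas"], "serious"))
```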
Anyone working on a "tiny" version of Mixtral-8x7b | 6 | Hi everyone! I'm very excited about the potential use cases for LLMs running locally on mobile devices and I'm curious to know if anyone knows of any efforts to make a "tiny" version of Mixtral-8x7b, similar to [TinyLlama](https://github.com/jzhang38/TinyLlama?tab=readme-ov-file). Right now even a 2-bit quantized version of Mixtral is [15GB](https://github.com/jzhang38/TinyLlama?tab=readme-ov-file), so it seems to me that a smaller model may be necessary to make this a reality.
I'm sure there are efforts to figure out the best ways to quantize MoE architectures, which I would love to learn more about as well.
Basically, I would love to contribute to work happening in this area! I'm new to this open-source community and don't know anyone so I figured I'd put out a blast here. Cheers. | 2023-12-22T19:43:19 | https://www.reddit.com/r/LocalLLaMA/comments/18on651/anyone_working_on_a_tiny_version_of_mixtral8x7b/ | bonthebruh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18on651 | false | null | t3_18on651 | /r/LocalLLaMA/comments/18on651/anyone_working_on_a_tiny_version_of_mixtral8x7b/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'MjBjWvUxV4ERbTDugierSndbZWXi65aMw55GPNFbiuU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YLgyZxp-fSC3Dqanw7F0W_mdAEvtM6ZhqfuRXmApItM.jpg?width=108&crop=smart&auto=webp&s=cb34deef57d799efd0c0fdd9d17c1f0bb0f20ec2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YLgyZxp-fSC3Dqanw7F0W_mdAEvtM6ZhqfuRXmApItM.jpg?width=216&crop=smart&auto=webp&s=9fc203582d0c2306561ed332d99b4a3823b5bdaa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YLgyZxp-fSC3Dqanw7F0W_mdAEvtM6ZhqfuRXmApItM.jpg?width=320&crop=smart&auto=webp&s=3b23df78cc9a64c99f597edb166aae5ccda3eb74', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YLgyZxp-fSC3Dqanw7F0W_mdAEvtM6ZhqfuRXmApItM.jpg?width=640&crop=smart&auto=webp&s=2ed42380ff0a96d119e23337db2b119e2471aaaf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YLgyZxp-fSC3Dqanw7F0W_mdAEvtM6ZhqfuRXmApItM.jpg?width=960&crop=smart&auto=webp&s=924e034e30bbacd1fcca8530ad6a3040c831eeab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YLgyZxp-fSC3Dqanw7F0W_mdAEvtM6ZhqfuRXmApItM.jpg?width=1080&crop=smart&auto=webp&s=cb63e800c034356db8e00431c8352db3428243bb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YLgyZxp-fSC3Dqanw7F0W_mdAEvtM6ZhqfuRXmApItM.jpg?auto=webp&s=9769b62b6243288258628ea63e68adaf6b4101f4', 'width': 1200}, 'variants': {}}]} |
Run Mixtral LLM locally in seconds with Ollama! | 1 | [removed] | 2023-12-22T19:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/18on421/run_mixtral_llm_locally_in_seconds_with_ollama/ | dev-spot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18on421 | false | null | t3_18on421 | /r/LocalLLaMA/comments/18on421/run_mixtral_llm_locally_in_seconds_with_ollama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6NFx-wVE7S1a2iiGNyMj-XyeMr6aJj59vIlIcbRspoA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Razh7uBASwKlxOqfSwYoEOA4vK1LzTvZvA8L9Cx0aLc.jpg?width=108&crop=smart&auto=webp&s=4fd343d6af76e0d166db35125d2c2db57505d68b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Razh7uBASwKlxOqfSwYoEOA4vK1LzTvZvA8L9Cx0aLc.jpg?width=216&crop=smart&auto=webp&s=0a322e523dfc924194a52a13ca2ab4be429d73c6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Razh7uBASwKlxOqfSwYoEOA4vK1LzTvZvA8L9Cx0aLc.jpg?width=320&crop=smart&auto=webp&s=6d835428182daa9d6d1f0175be0cf1fd4ac9718a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Razh7uBASwKlxOqfSwYoEOA4vK1LzTvZvA8L9Cx0aLc.jpg?auto=webp&s=775c7f4b832f75028738fc3be0944f36701c7a17', 'width': 480}, 'variants': {}}]} |
How do I enable longer responses? | 1 | I'm a complete noob and I'm sorry if this is a simple question, but I can't seem to figure out the answer.
I have llama.cpp running locally with the Mixtral-8x7B-Instruct-v0.1-GGUF model. It's doing amazing. But when I give it a prompt with a lot of text (i.e. a newsletter article) and ask it to re-write the article for clarity... it does a great job but stops about halfway through the output. I'm running the web server variant with -c 4096 and --batch-size 4096. I'm on an M1 Ultra with 128GB memory. Which setting do I need to tweak to enable it to deliver the complete output? | 2023-12-22T19:22:06 | https://www.reddit.com/r/LocalLLaMA/comments/18omq7d/how_do_i_enable_longer_responses/ | FixMoreWhineLess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18omq7d | false | null | t3_18omq7d | /r/LocalLLaMA/comments/18omq7d/how_do_i_enable_longer_responses/ | false | false | self | 1 | null |
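If the truncation described above is the generation cap rather than the context limit, the usual fix on llama.cpp's built-in server is the n_predict field of the /completion endpoint — my understanding is that -1 means "generate until EOS or the context fills", but verify that against the server README. Only the request body is built here:

```python
import json

# Request body for llama.cpp's native /completion endpoint (not /v1/chat).
payload = {
    "prompt": "Rewrite the following newsletter article for clarity:\n...",
    "n_predict": -1,      # assumption: -1 lifts the per-request token cap
    "temperature": 0.7,
}
print(json.dumps(payload))
# POST to http://127.0.0.1:8080/completion on a default server launch.
```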
Best 7b model for RAG use cases | 1 | [removed] | 2023-12-22T19:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/18omcqs/best_7b_model_for_rag_use_cases/ | faridukhan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18omcqs | false | null | t3_18omcqs | /r/LocalLLaMA/comments/18omcqs/best_7b_model_for_rag_use_cases/ | false | false | self | 1 | null |
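Since the post body above was removed, for context: "RAG" here means retrieval-augmented generation — fetch the text chunk most relevant to a query and stuff it into the 7B model's prompt. A toy version of the retrieval step, using bag-of-words cosine similarity in place of the dense embeddings a real pipeline would use:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list) -> str:
    """Return the chunk scoring highest against the query."""
    q = Counter(query.lower().split())
    return max(chunks, key=lambda c: cosine(q, Counter(c.lower().split())))

chunks = ["The warranty lasts two years.", "Shipping takes five days."]
print(retrieve("how long is the warranty", chunks))
# The warranty chunk wins; prepend it to the prompt sent to the model.
```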
Ditch ChatGPT and try one of these downloadable chatbot options | 1 | [removed] | 2023-12-22T18:37:50 | https://www.reddit.com/r/LocalLLaMA/comments/18olr9u/ditch_chatgpt_and_try_one_of_these_downloadable/ | heaversm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18olr9u | false | null | t3_18olr9u | /r/LocalLLaMA/comments/18olr9u/ditch_chatgpt_and_try_one_of_these_downloadable/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BGXfaUMntPZWzYo99FfkrbqxveeayLkICP2FRV6iEYA', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=108&crop=smart&auto=webp&s=027a4a136867b336cfe98d6c39e0251e1ba0b055', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=216&crop=smart&auto=webp&s=9a7929b9305bcdbe407f1f92f6d6ff052daa19b8', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=320&crop=smart&auto=webp&s=0734e32327e4cfd403ebf8ffdf65183ffeebc1fc', 'width': 320}, {'height': 247, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=640&crop=smart&auto=webp&s=e3feb83e359c84d1f8cdc966f94a8cc059151e49', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=960&crop=smart&auto=webp&s=870dd5ec30e26b8871e8a59d148f669ddd3c276f', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?width=1080&crop=smart&auto=webp&s=2ce833a6565a2978737477a6cb1b5893d1ccbcde', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/V0bMuIoeIfXcCnG6uE4-9PiNgKWrVXkZZsdOyfiZwH0.jpg?auto=webp&s=9e4b3880988ad486a361a37c3a53e19ff66ac844', 'width': 1140}, 'variants': {}}]} |
Mistral Instruct Finetune Dataset Format | 1 | Sorry for the very newbie question - I want to know whether my dataset is in the correct format for finetuning the Mistral 7B Instruct v1 model. I'm using the Together AI provider, so it must follow the JSONL format with {"text":...}. My current dataset looks like this inside the "text" key's value - is this the correct format?
<s>[INST] how many district with incumbent being lindy boggs [/INST] Here is the SQL query to answer the question: ```CREATE TABLE table_1341586_19 (district VARCHAR, incumbent VARCHAR) SELECT COUNT(district) FROM table_1341586_19 WHERE incumbent = "Lindy Boggs"``` </s> | 2023-12-22T18:21:32 | https://www.reddit.com/r/LocalLLaMA/comments/18oldy7/mistral_instruct_finetune_dataset_format/ | laveriaroha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oldy7 | false | null | t3_18oldy7 | /r/LocalLLaMA/comments/18oldy7/mistral_instruct_finetune_dataset_format/ | false | false | self | 1 | null |
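The single example above can be generated mechanically from instruction/response pairs. A sketch of the wrapping step — note that the exact whitespace around the [INST] tags differs between guides, so verify against the model's chat template before training on a large set:

```python
import json

def to_mistral_jsonl(pairs: list) -> str:
    """One {"text": "<s>[INST] ... [/INST] ... </s>"} object per JSONL line."""
    lines = []
    for instruction, response in pairs:
        text = f"<s>[INST] {instruction} [/INST] {response} </s>"
        lines.append(json.dumps({"text": text}))
    return "\n".join(lines)

sample = [("how many districts have incumbent Lindy Boggs",
           'SELECT COUNT(district) FROM t WHERE incumbent = "Lindy Boggs"')]
print(to_mistral_jsonl(sample))
```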
Closest thing to an Assistant? Sending emails, setting reminders...Like the upcoming Humane Ai Pin | 20 | Hi!
I've been dabbling with LLMs and I'm really curious if there is a way to make one act as an assistant. At the moment, from my testing, it remembers "appointments", and with web search (using the SillyTavern websearch extension) it knows what time it is. It also knows what timezone I'm in from my Persona Description.
It's really cool: with Text-to-Speech using XTTSv2 and the enma-watson.wav that I found on this subreddit, I get pretty good audio from its responses. Also, with the speech-recognition extension and the whisper-stt module, I can basically have back-and-forth conversations using only voice.
But it won't, for example, write on its own when it's time to remind me of something. I've tried the "Idle" extension of SillyTavern, but I don't think it does what I want.
Is there any way to connect an LLM to send emails, or to a calendar API/APP?
It looks like the "**Humane AI Pin**" will do stuff like that. And it has GPT-4 as a backend, if I'm not mistaken. I guess it uses some sort of vector database to remember all your stuff. | 2023-12-22T18:10:38 | https://www.reddit.com/r/LocalLLaMA/comments/18ol4yg/closest_thing_to_an_assistant_sending_emails/ | bullerwins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ol4yg | false | null | t3_18ol4yg | /r/LocalLLaMA/comments/18ol4yg/closest_thing_to_an_assistant_sending_emails/ | false | false | self | 20 | null |
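The missing piece described above — the model speaking first when a reminder comes due — is really just a scheduler outside the LLM. A minimal sketch of the due-check; how a hit gets injected into the chat (system message, TTS, email) is left abstract:

```python
from datetime import datetime

def due_reminders(now, reminders):
    """Return the text of every reminder whose time has arrived."""
    return [text for when, text in reminders if when <= now]

reminders = [(datetime(2023, 12, 22, 18, 0), "Dentist at 18:30"),
             (datetime(2023, 12, 23, 9, 0), "Call mom")]
print(due_reminders(datetime(2023, 12, 22, 18, 5), reminders))
# Only the dentist reminder is due; poll this in a loop and, on a hit,
# prompt the model with it so the assistant initiates the message.
```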
YSK that AGI is a negotiating chip for OpenAI | 1 | Their contract states that MSFT is entitled to everything they create EXCEPT for any AGI. That's why "AGI has been achieved internally" was a big deal, and also why the bickering has gone on for so long & so hotly. Follow the money.
https://cryptoslate.com/agi-is-excluded-from-ip-licenses-with-microsoft-should-it-be-attained-at-openai/
"The board determines when we’ve attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology." | 2023-12-22T17:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/18okgsp/ysk_that_agi_is_a_negotiating_chip_for_openai/ | platistocrates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18okgsp | false | null | t3_18okgsp | /r/LocalLLaMA/comments/18okgsp/ysk_that_agi_is_a_negotiating_chip_for_openai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rDHczkN8U1ZAbr94k5IbDJl0kk5WFlbHABiTdZz0gFM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bippubJtYiJp4cQhT56X26I2f_cJ5lN6YtQeClY4IIo.jpg?width=108&crop=smart&auto=webp&s=2ae0539b9b990b9ed61e2ae968cea64b9f494e5c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bippubJtYiJp4cQhT56X26I2f_cJ5lN6YtQeClY4IIo.jpg?width=216&crop=smart&auto=webp&s=20c8ea240c870623aa9fdfe26b80919a1b7b61b0', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bippubJtYiJp4cQhT56X26I2f_cJ5lN6YtQeClY4IIo.jpg?width=320&crop=smart&auto=webp&s=8f5d2928b9bd3403d38300fa0dc5b03d104fe16b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bippubJtYiJp4cQhT56X26I2f_cJ5lN6YtQeClY4IIo.jpg?width=640&crop=smart&auto=webp&s=af9237f002530f91d755ce195431802fc63b5170', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bippubJtYiJp4cQhT56X26I2f_cJ5lN6YtQeClY4IIo.jpg?width=960&crop=smart&auto=webp&s=b3bcee0184c96cef7fa1376e00494a1c2eaf2e03', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bippubJtYiJp4cQhT56X26I2f_cJ5lN6YtQeClY4IIo.jpg?width=1080&crop=smart&auto=webp&s=c14a228b479e473f8ab574be9c8ddf92a11fe12b', 'width': 1080}], 'source': {'height': 630, 'url': 
'https://external-preview.redd.it/bippubJtYiJp4cQhT56X26I2f_cJ5lN6YtQeClY4IIo.jpg?auto=webp&s=9b35b644dc45b584fb40bd8d6df76013ca8e14bc', 'width': 1200}, 'variants': {}}]} |
Apple's MLX Framework adds quantization support | 34 | When Apple first released the MLX framework for Apple Silicon macs it didn't support quantized models and so RAM requirements were more than a lot of people were used to. They've recently shipped quantization support. The [mlx examples](https://github.com/ml-explore/mlx-examples) have been updated to include quantization examples and the recently launched [mlx-community](https://huggingface.co/mlx-community) on Hugging Face already has pre-quantized versions of llama2 and mistral available for download.
Caveats: From what I've gathered, quantization's main impact is on RAM requirements. Perf improvements will require optimized kernels. Quantization seems to apply to inference only, at this point. At the very least their LoRA training example hasn't been updated to include info on using quantization. | 2023-12-22T17:38:12 | https://www.reddit.com/r/LocalLLaMA/comments/18oke4y/apples_mlx_framework_adds_quantization_support/ | FlishFlashman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oke4y | false | null | t3_18oke4y | /r/LocalLLaMA/comments/18oke4y/apples_mlx_framework_adds_quantization_support/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'NRitNAa83fFqArIPC9NwWMzw33e-3y8kSaQuduYdmHw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/k3XhFS0uf3T6hJH2r1bU3krBYMIYmYxXYP3cImVkTjo.jpg?width=108&crop=smart&auto=webp&s=4a1268ba73f625630c853841eb701b60e9da430d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/k3XhFS0uf3T6hJH2r1bU3krBYMIYmYxXYP3cImVkTjo.jpg?width=216&crop=smart&auto=webp&s=ffa85266a824704a31e2cc9f3ba1c86188927d34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/k3XhFS0uf3T6hJH2r1bU3krBYMIYmYxXYP3cImVkTjo.jpg?width=320&crop=smart&auto=webp&s=522bb4472443586103abaf7ecec5d213cbf51ea4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/k3XhFS0uf3T6hJH2r1bU3krBYMIYmYxXYP3cImVkTjo.jpg?width=640&crop=smart&auto=webp&s=8944df7784796384fc6ecf32e33d3b30bc808fb6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/k3XhFS0uf3T6hJH2r1bU3krBYMIYmYxXYP3cImVkTjo.jpg?width=960&crop=smart&auto=webp&s=085efbf5be09e3ebe3cfb03193d22e1e6bd3e246', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/k3XhFS0uf3T6hJH2r1bU3krBYMIYmYxXYP3cImVkTjo.jpg?width=1080&crop=smart&auto=webp&s=1569bde0815c3fe81a912e25b3e747aac1d84001', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/k3XhFS0uf3T6hJH2r1bU3krBYMIYmYxXYP3cImVkTjo.jpg?auto=webp&s=d4dc62e45288a0ab3a146807b2e52f5fa17c6f6e', 'width': 1200}, 'variants': {}}]} |
YSK that AGI is a negotiating chip for OpenAI | 1 | YSK that AGI is a negotiating chip for OpenAI.
Their contract states that MSFT is entitled to everything they create EXCEPT for any AGI. That's why "AGI has been achieved internally" was a big deal, and also why the bickering has gone on for so long & so hotly. Follow the money. | 2023-12-22T17:32:58 | https://www.reddit.com/r/LocalLLaMA/comments/18ok9ur/ysk_that_agi_is_a_negotiating_chip_for_opena/ | platistocrates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ok9ur | false | null | t3_18ok9ur | /r/LocalLLaMA/comments/18ok9ur/ysk_that_agi_is_a_negotiating_chip_for_opena/ | false | false | default | 1 | null |
Opinions on Mixtral 0.1 8x7b and Mistral 0.2 7b | 57 | The following are just some personal thoughts on these models (Mixtral 0.1 8x7b and Mistral 0.2 7b) from a random user (me), and should not be taken as facts. Opinions among users on how they perform can vary depending on hardware, settings, versions, use cases, preferences, etc.
I thought it could be fun to share some of my experience using them. Maybe I can also learn something new by doing so.
For a few days now I have tested these models (GGUF formats) back and forth in different environments (Oobabooga web-ui, LM Studio and Koboldcpp), with many different settings and prompts. These are my thoughts so far on these models:
​
**EDIT: Important:** All the Mistral and Mixtral models I have used and experimented with are the **instruct** versions! I forgot to add "instruct" when I typed their names.
​
* Mistral 0.2 7b Q8\_0 is the best and smartest all-around model I have ever used so far. It's very smart for its small size and lightning fast, perfect for tasks that require both speed and "intelligence" like role playing (especially with multiple characters). I have tested many fine tune versions of Mistral 0.1 7b and LLama 2 13b, but nothing (at least in my use-cases) has beaten Mistral 0.2 7b so far.
* Mixtral 8x7b Q5\_0 (the best of the Mixtral quants I have tested and the biggest quant my hardware can handle) is quality-wise overall better than Mistral 0.2 7b Q8\_0 (in most respects), but **not much** better, and they have different strengths. Mixtral 8x7b Q5\_0 is better at understanding and following complex prompts and at explaining logic, while Mistral 0.2 7b Q8\_0 is a bit better at writing text in general, like stories and hypothetical events (this could just be a preference on my part). However, Mistral 0.2 7b is quite good at logic too, and it's waaay faster than Mixtral (at least on my hardware).
* Mixtral 8x7b "K" quants (Q4\_K\_M and Q5\_K\_M) perform worse than Q4\_0 and Q5\_0. I cannot really tell exactly how (I would need to do more tests for that), but the K quant versions seem a bit less smart and have a greater tendency to start rambling incoherently.
* Mixtral 8x7b Q4\_K\_M and Q4\_0 struggle to perform simple programming tasks like writing simple JavaScript code snippets; the code they output either does not work as intended or even throws errors. Q5\_K\_M and Q5\_0, however, succeed at the exact same coding tasks perfectly.
* Mixtral 8x7b Q4\_K\_M and Q4\_0 are much worse at understanding and following more complex prompting than Q5\_K\_M and Q5\_0.
* Mixtral 8x7b Q5\_0 is still quite far behind ChatGPT 3.5 for general use like asking questions and chatting (I have not compared them at complex prompting and logic). When asking the same questions, ChatGPT 3.5 has almost always given me a more informative, measured, coherent and generally more satisfying reply in most topics.
* Mixtral 8x7b performs worse and starts to hallucinate (sometimes extremely so) if the temperature is fairly high, like 0.7. I found that setting the temperature to 0.1 or 0.2 makes it more stable and functional.
​
Well, those were my experiences and opinions, if anyone cares :)
​ | 2023-12-22T17:27:49 | https://www.reddit.com/r/LocalLLaMA/comments/18ok5or/opinions_on_mixtral_01_8x7b_and_mistral_02_7b/ | Admirable-Star7088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ok5or | false | null | t3_18ok5or | /r/LocalLLaMA/comments/18ok5or/opinions_on_mixtral_01_8x7b_and_mistral_02_7b/ | false | false | self | 57 | null |
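The temperature observation in the last bullet has a simple mechanical explanation: sampling temperature divides the logits before the softmax, so low values concentrate probability mass on the top token. A minimal pure-Python sketch (the toy logits below are made up, not from any real model):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature before the softmax.

    Lower temperature sharpens the distribution toward the top token,
    which is one reason T = 0.1-0.2 can stabilize a rambly model.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

toy_logits = [2.0, 1.0, 0.5]             # invented scores for three tokens
print(softmax_with_temperature(toy_logits, 0.7))  # flatter: more variety
print(softmax_with_temperature(toy_logits, 0.1))  # peaky: near-greedy
```

At T=0.1 the top token takes essentially all the probability, which is close to greedy decoding and would explain the reduced rambling.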
Does having more resources lead to better outcomes in terms of quality of output, or am I dreaming this? | 3 | I've installed LM Studio on one machine with 16gb ram, and another with 64gb ram. Both have the same GPU and 8gb vram.
The speed difference is mostly on the first prompt, with a slight improvement on larger models (mostly consisting of being able to load them in the first place hehe) - **BUT** on models such as mythomax l2 13b q4\_k\_m (yes sorry, now you know my dark secret), I noticed it was able to continue the conversation for much longer before "failing" (by that I mean repeating itself or starting to spew out nonsense).
TLDR: does performance improve quality of output? | 2023-12-22T17:19:43 | https://www.reddit.com/r/LocalLLaMA/comments/18ojz9n/does_having_more_resources_leads_to_better/ | RadioSailor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ojz9n | false | null | t3_18ojz9n | /r/LocalLLaMA/comments/18ojz9n/does_having_more_resources_leads_to_better/ | false | false | self | 3 | null |
Why aren't LoRAs a big thing in the LLM realm? | 122 | Sorry if that's a stupid question, but it seems weirdly underutilized to have LoRAs for different tasks, e.g. a LoRA on top for storywriting, one for sales emails, etc. Is that just worse than finetuning a model, or what is your explanation?
In the StableDiffusion realm it is certainly much more utilized and with great success. | 2023-12-22T16:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/18oj983/why_arent_loras_a_big_thing_i_the_llm_realm/ | ___defn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oj983 | false | null | t3_18oj983 | /r/LocalLLaMA/comments/18oj983/why_arent_loras_a_big_thing_i_the_llm_realm/ | false | false | self | 122 | null |
Should I adjust my learning rate? | 1 | Hi, I am using axolotl to train Llama-2-7b-chat-hf with qlora. My training dataset is \~1.3M rows of data and I'm using the following training parameters:
* micro\_batch\_size: 2
* num\_epochs: 4
* learning\_rate: 0.0002
* optimizer: paged\_adamw\_32bit
This is my training loss chart after 1.3 epochs:
https://preview.redd.it/7n29rurzkv7c1.png?width=1050&format=png&auto=webp&s=de5bc7dd13ed3994b9dafbd03c91b8128e63252a
​
My eval loss has changed very little over time. At 1k steps it was 2.871; it then dipped slightly to 2.755 at 5k steps before going back up to 2.767 at 6k steps:
https://preview.redd.it/9frp4czvjv7c1.png?width=1050&format=png&auto=webp&s=5a5c645781dde4eb66a27414d9c6c2d89eb8ec4f
Do I need to adjust my learning rate? | 2023-12-22T16:46:55 | https://www.reddit.com/r/LocalLLaMA/comments/18oj8ol/should_i_adjust_my_learning_rate/ | RizzMiser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oj8ol | false | null | t3_18oj8ol | /r/LocalLLaMA/comments/18oj8ol/should_i_adjust_my_learning_rate/ | false | false | 1 | null | |
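For context, one common first move when eval loss flattens this early is to lower the peak learning rate and use a decaying schedule with warmup instead of a flat 2e-4. A sketch of such a schedule in plain Python (the numbers are illustrative suggestions, not axolotl defaults):

```python
import math

def lr_at(step, total_steps, peak_lr=1e-4, warmup_steps=100):
    """Cosine decay with linear warmup.

    peak_lr here is half the post's flat 2e-4; both numbers are
    illustrative suggestions, not axolotl defaults.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps          # linear ramp-up
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))  # cosine decay

print(lr_at(100, 6000))   # at the peak, right after warmup
print(lr_at(5900, 6000))  # nearly decayed to zero late in training
```

Axolotl configs expose knobs along these lines (e.g. an `lr_scheduler` key); check the version's docs for the exact names.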
Ollama iOS mobile app (open source) | 26 | 2023-12-22T16:32:11 | https://v.redd.it/mebj3z6eiv7c1 | 1amrocket | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18oiwno | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mebj3z6eiv7c1/DASHPlaylist.mpd?a=1705854745%2CODFmNjQ2NTNmNjQ1ZmU0ZjQwMTAwZGExNmMxNTBlNThjZDUxZGE1MzIyZTU5NDQ0ZjFlM2VlN2U2ZTg1MGIwZQ%3D%3D&v=1&f=sd', 'duration': 89, 'fallback_url': 'https://v.redd.it/mebj3z6eiv7c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/mebj3z6eiv7c1/HLSPlaylist.m3u8?a=1705854745%2CMDM1ODk1ZjM2MzhiYjJhYWQ1NzkxNzViYWE3MWNiMmU2MGZkMTE5ZTRjYzg1Y2E4N2I3ZmM0MmFiZjc1YzAwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mebj3z6eiv7c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_18oiwno | /r/LocalLLaMA/comments/18oiwno/ollama_ios_mobile_app_open_source/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'YzJ6aTRvM2xpdjdjMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YzJ6aTRvM2xpdjdjMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=108&crop=smart&format=pjpg&auto=webp&s=a259d7bf54972c6c995dfd6dedfa07d569182924', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YzJ6aTRvM2xpdjdjMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=216&crop=smart&format=pjpg&auto=webp&s=b9025a88d2f82931599430ecb4456df3b4693502', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YzJ6aTRvM2xpdjdjMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=320&crop=smart&format=pjpg&auto=webp&s=61854db7da5e2825bccff380d3e2e5274e927052', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YzJ6aTRvM2xpdjdjMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=640&crop=smart&format=pjpg&auto=webp&s=de8edbaf31a00d40a81d17186259175c276c96a5', 'width': 640}, {'height': 540, 'url': 
'https://external-preview.redd.it/YzJ6aTRvM2xpdjdjMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=960&crop=smart&format=pjpg&auto=webp&s=9d4f2a345cb97884cafe8e3eb4681f266b74a30f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YzJ6aTRvM2xpdjdjMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1da5bb3f46cef316a524060dfb70e4860dd7146b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YzJ6aTRvM2xpdjdjMfSuwtMFsz81_YvBeCujB9YP6NmQY-Q3EZ8LTnVSHDmg.png?format=pjpg&auto=webp&s=c4059e2ed85ef45ab8cad2b7ca933407111f47ea', 'width': 1920}, 'variants': {}}]} | ||
Gemini-pro is so lazy to the point of being useless for RAG applications | 34 | I currently have some free credits on GCP and was curious to try out the Gemini model. But the API results are just too bad compared to other large models, and even worse than Mistral 7B. Don't get me wrong, Gemini still answers correctly, but the way it answers is so bland and unhelpful, even though I specifically prompted it to be a helpful assistant. For example, I fed it a prompt with multiple paragraphs about one default value (irrigation rate in a greenhouse), an example of the default value in a client, and an explanation of why we use that value. Then I asked "what is the default irrigation-rate in GH" and Gemini only answered with the specific value "40", no more and no less. With the same config and setup, other models perform very well and are helpful, not useless like Gemini. I still give gemini-pro the benefit of the doubt; maybe I need a better prompt for it, but the results so far are just too lazy and bad.
So I want to ask: has anyone here used the gemini-pro API and found successful use cases showing Google still has some lead in the LLM world? Or do I need some specific configs for Gemini? | 2023-12-22T16:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/18oituc/geminipro_is_so_lazy_till_the_point_its_so/ | cheviethai123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oituc | false | null | t3_18oituc | /r/LocalLLaMA/comments/18oituc/geminipro_is_so_lazy_till_the_point_its_so/ | false | false | self | 34 | null |
Can you feed an entire source codebase to LLaMA? | 3 | I want to give LLaMA an entire source codebase to fine-tune it with; is that possible? And will the source code stay on my machine, or will it be shared with other companies?
fine-tuning from logits — dumb or genius? | 11 | As OpenAI now allows logit-derived probabilities to be output (https://github.com/openai/openai-cookbook/blob/main/examples/Using\_logprobs.ipynb)
and such a probability distribution can be grabbed from the open models as well,
would it make sense to build fine-tuning datasets with those, as that could speed up training big time? | 2023-12-22T16:13:55 | https://www.reddit.com/r/LocalLLaMA/comments/18oihrq/finetuning_from_logits_dumb_or_genius/ | uhuge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oihrq | false | null | t3_18oihrq | /r/LocalLLaMA/comments/18oihrq/finetuning_from_logits_dumb_or_genius/ | false | false | self | 11 | null |
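For intuition, training from logprobs usually means a distillation-style loss: the student minimizes cross-entropy against the teacher's (top-k) token distribution instead of a one-hot target, so each example carries more signal. A toy sketch in pure Python; the tokens and probabilities below are invented:

```python
import math

def soft_label_loss(teacher_logprobs, student_logprobs):
    """Cross-entropy of the student against the teacher's top-k
    token distribution (a sketch of logprob distillation).

    Both arguments map token -> logprob over the same candidate set.
    """
    loss = 0.0
    for token, t_lp in teacher_logprobs.items():
        loss -= math.exp(t_lp) * student_logprobs[token]
    return loss

# Invented numbers: teacher is confident in " Paris", student less so.
teacher = {" Paris": math.log(0.9), " London": math.log(0.1)}
student = {" Paris": math.log(0.6), " London": math.log(0.4)}
print(soft_label_loss(teacher, student))
```

The loss is minimized when the student matches the teacher's distribution, which is the sense in which soft labels could speed training up relative to hard next-token targets.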
Who wants to explore some edge deployment LLM e-commerce opportunities? | 1 | I was thinking recently and believe there's great market potential for a variety of edge deployment LLM products, especially with the rise of smaller open source models these past few months.
I currently run a few large ecom stores that I've built, with my specialty being SEO. Been in GPT3 beta since June 2021 and have a background in economics/data science so have a fairly good understanding, but am just looking to explore more on the hardware-software edge integration side, which I believe could be a big e-commerce winner.
Anyways, if anyone has any interest shoot me a message or just respond here! | 2023-12-22T16:11:16 | https://www.reddit.com/r/LocalLLaMA/comments/18oifo9/who_wants_to_explore_some_edge_deployment_llm/ | fission4433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oifo9 | false | null | t3_18oifo9 | /r/LocalLLaMA/comments/18oifo9/who_wants_to_explore_some_edge_deployment_llm/ | false | false | self | 1 | null |
Question about System RAM and GPU VRAM requirements for large models | 1 | Hello folks, I need some help to see if I can add GPUs to repurpose my older computer for LLMs (inference mainly, maybe training later on).
My current limitation is that I have only 2 ddr4 ram slots, and can either continue with 16GBx2 or look for a set of 32GBx2 kit.
My understanding is that we can reduce system ram use if we offload LLM layers on to the GPU memory. So theoretically the computer can have less system memory than GPU memory?
For example, referring to TheBloke's lzlv_70B-GGUF provided Max RAM required: Q4_K_M = 43.92 GB
So using 2 GPUs with 24GB each (or 1 GPU with 48GB), we could offload all the layers into the 48GB of video memory.
Does that mean the required system ram can be less than that? Say 32GB or even 16GB of ram?
My confusion comes when I was testing out the above on a friend's VM, and their Task Manager reports that 46.7GB of Memory is in use [from 5GB with no programs running to 46.7GB after loading lzlv_70 Q4_K_M with Koboldcpp]. I see from the program that all the layers were offloaded onto the GPU, and Task Manager reports the VRAM to sit at 41GB (dedicated GPU memory).
Does that mean a dedicated machine with 32GB of system memory and 48GB of GPU VRAM would not work?
Would the issue be exacerbated if I tried 32GB of system memory and 96GB of GPU VRAM?
Or would I also have to upgrade the rest of the system to ensure that system ram is equal or greater than the GPU VRAM?
If you have a similar set up, I would love to hear your experience with it and if the system is still functional for other basic tasks as well.
Tldr: Can I get away with 32(or 64)GB of system ram and 48(or 96)GB GPU VRAM for large LLM like lzlv-70b or goliath-120b? | 2023-12-22T15:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/18oi2vd/question_about_system_ram_and_gpu_vram/ | Silver-of-Curiosity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oi2vd | false | null | t3_18oi2vd | /r/LocalLLaMA/comments/18oi2vd/question_about_system_ram_and_gpu_vram/ | false | false | self | 1 | null |
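One possible explanation for the Task Manager reading (an assumption, not verified on that VM): llama.cpp-based loaders such as koboldcpp memory-map the GGUF file by default, so the OS page cache holding the model can show up as memory in use even when the layers live in VRAM, and that memory can be reclaimed under pressure. For the sizing question itself, here is a rough back-of-envelope check (all numbers illustrative):

```python
def vram_estimate_gb(file_size_gb, n_ctx, n_layers, kv_dim):
    """Rough estimate: quantized weights + fp16 KV cache + ~1 GB scratch.

    Back-of-envelope only; real usage varies by loader and settings.
    """
    # fp16 K and V tensors: 2 tensors * 2 bytes * kv_dim per token per layer
    kv_cache_gb = n_ctx * n_layers * 2 * 2 * kv_dim / 1024**3
    return file_size_gb + kv_cache_gb + 1.0

# lzlv-70B Q4_K_M is roughly 41 GB on disk; Llama-2-70b-class models
# have 80 layers and use GQA (8 KV heads * 128 head dim = 1024 kv_dim).
needed = vram_estimate_gb(41.0, 4096, 80, 1024)
print(f"~{needed:.1f} GB needed -> fits in 48 GB: {needed <= 48.0}")
```

By this estimate a 48GB-VRAM box with 32GB of system RAM should be workable for a Q4 70b fully offloaded, while a 120b quant would likely need the 96GB configuration.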
How-to for training a model on tweets | 1 | [removed] | 2023-12-22T15:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/18oh9bz/howto_for_training_a_model_on_tweets/ | SideShow_Bot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oh9bz | false | null | t3_18oh9bz | /r/LocalLLaMA/comments/18oh9bz/howto_for_training_a_model_on_tweets/ | false | false | self | 1 | null |
Noob question on MOE finetuning possibilities | 5 | I don't really understand the FT processes or MOE architecture, but I was wondering if it would be possible to only train part of the network at a time?
Would this let you reduce VRAM requirements for FT and allow you not 'spoil' parts of the model you're happy with? | 2023-12-22T15:16:27 | https://www.reddit.com/r/LocalLLaMA/comments/18oh7gg/noob_question_on_moe_finetuning_possibilities/ | HikaruZA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oh7gg | false | null | t3_18oh7gg | /r/LocalLLaMA/comments/18oh7gg/noob_question_on_moe_finetuning_possibilities/ | false | false | self | 5 | null |
JSON mode in vLLM | 42 | I am the author of the [Outlines library](https://github.com/outlines-dev/outlines), which provides guided generation for Large Language Models. Our users frequently asked us how they could deploy JSON-guided generation to solve their use case. So I patched the vLLM library and modified their API serving file to add the possibility to pass a JSON Schema along with the prompt.
We're currently adding Exllama2 to the library, and I was wondering if some of you would be interested in us adding a serving API for it as well?
I'd be happy to get your feedback! | 2023-12-22T14:38:20 | https://www.reddit.com/r/LocalLLaMA/comments/18ogd75/json_mode_in_vllm/ | GoBayesGo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ogd75 | false | null | t3_18ogd75 | /r/LocalLLaMA/comments/18ogd75/json_mode_in_vllm/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': '20L5uo42HDnGHpNOXV8hP7kSao53LFtUV02eMCO3kgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AUaEDegu6fPOBeD4BuoULnDNUvMQF39ipI02ZibyCmU.jpg?width=108&crop=smart&auto=webp&s=debf90f85c36f0f56c817f62e6a23773ea1263da', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AUaEDegu6fPOBeD4BuoULnDNUvMQF39ipI02ZibyCmU.jpg?width=216&crop=smart&auto=webp&s=59fc054397226e0b9966415367a8d161825e8222', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AUaEDegu6fPOBeD4BuoULnDNUvMQF39ipI02ZibyCmU.jpg?width=320&crop=smart&auto=webp&s=76bdaea0bae8094340cdeb8074d590bb98885054', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AUaEDegu6fPOBeD4BuoULnDNUvMQF39ipI02ZibyCmU.jpg?width=640&crop=smart&auto=webp&s=fbd94dcb8e8bd5a524102daa7384e80e1808276c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AUaEDegu6fPOBeD4BuoULnDNUvMQF39ipI02ZibyCmU.jpg?width=960&crop=smart&auto=webp&s=71bfdc2f7a4afaeaf2a15c359f7a439b742c9188', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AUaEDegu6fPOBeD4BuoULnDNUvMQF39ipI02ZibyCmU.jpg?width=1080&crop=smart&auto=webp&s=b3f636a3c092d658dc88ed0048200387b1b0c287', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AUaEDegu6fPOBeD4BuoULnDNUvMQF39ipI02ZibyCmU.jpg?auto=webp&s=7597940f4cd895a4d8d30899e76f7d4b91b760fc', 'width': 1200}, 'variants': {}}]} |
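As one user-side data point on the interface people seem to want: something like the sketch below, where a JSON Schema rides along with the prompt. The field names here are guesses for illustration, not the actual API of the patched server:

```python
import json

# Hypothetical request body for a schema-guided completion endpoint;
# the field names are illustrative, not the server's actual API.
payload = {
    "prompt": "Extract the user's name and age from: 'Ana is 31.'",
    "schema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
        "required": ["name", "age"],
    },
}
print(json.dumps(payload, indent=2))
```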
Can AMD provide better support to the LLM community's local software to use the NPU in Ryzen 7 / Ryzen 9 7940HS, 7840HS and the new 8940HS? | 1 | [removed] | 2023-12-22T14:14:41 | https://www.reddit.com/r/LocalLLaMA/comments/18ofvjx/can_amd_provide_better_support_to_llm_community/ | maxpayne07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ofvjx | false | null | t3_18ofvjx | /r/LocalLLaMA/comments/18ofvjx/can_amd_provide_better_support_to_llm_community/ | false | false | self | 1 | null |
Apple LLM breakthrough: LLM in a flash - Efficient Large Language Model Inference with Limited Memory | 2 | 2023-12-22T14:14:09 | https://arxiv.org/pdf/2312.11514.pdf | Xnohat | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 18ofv4x | false | null | t3_18ofv4x | /r/LocalLLaMA/comments/18ofv4x/apple_llm_breakthrough_llm_in_a_flash_efficient/ | false | false | default | 2 | null | |
Can I run 70b models with 24gb VRAM and a 3090RTX? | 6 | title | 2023-12-22T13:55:38 | https://www.reddit.com/r/LocalLLaMA/comments/18ofh58/can_i_run_70b_models_with_24gb_vram_and_a_3090rtx/ | laveshnk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ofh58 | false | null | t3_18ofh58 | /r/LocalLLaMA/comments/18ofh58/can_i_run_70b_models_with_24gb_vram_and_a_3090rtx/ | false | false | self | 6 | null |
Andrej Karpathy championing this sub while working at 'Open’ AI doesn't sit well with me....... | 1 | 2023-12-22T13:45:01 | https://v.redd.it/ze7knn2rou7c1 | Iboxelephants | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18of9ab | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ze7knn2rou7c1/DASHPlaylist.mpd?a=1705844715%2COWIzYmE5N2MwMzRkMThkYmI4YjBiMmExZTcwZDU0NWZhNGUwNmIzYTVlMzI2YmFiNmRmMmEwZjIwY2VlYzkxOQ%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/ze7knn2rou7c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ze7knn2rou7c1/HLSPlaylist.m3u8?a=1705844715%2CNTk1NmJhMTRiYTA5MDVlYTBkMmQxYzJjYzU1YmNjMmVhYjcwYjNiOWMzNWUxNjQ5OTU4ZTQxMjRhNWRlM2QyNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ze7knn2rou7c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_18of9ab | /r/LocalLLaMA/comments/18of9ab/andrej_karpathy_championing_this_sub_while/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cTBoc3Q3dHFvdTdjMaZ4WeSzrT0pgNhSGByXf2bB6xaHjZITawBwxMZH5MWj', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cTBoc3Q3dHFvdTdjMaZ4WeSzrT0pgNhSGByXf2bB6xaHjZITawBwxMZH5MWj.png?width=108&crop=smart&format=pjpg&auto=webp&s=97364b37b5dc9a6d06289b16b95b5e100f6564de', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cTBoc3Q3dHFvdTdjMaZ4WeSzrT0pgNhSGByXf2bB6xaHjZITawBwxMZH5MWj.png?width=216&crop=smart&format=pjpg&auto=webp&s=03528fb6c8cc7206847631b09e4ea4b011bed0c0', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/cTBoc3Q3dHFvdTdjMaZ4WeSzrT0pgNhSGByXf2bB6xaHjZITawBwxMZH5MWj.png?width=320&crop=smart&format=pjpg&auto=webp&s=288b648eb889e3534fd9c29e554a56d0d780f121', 'width': 320}, {'height': 359, 'url': 
'https://external-preview.redd.it/cTBoc3Q3dHFvdTdjMaZ4WeSzrT0pgNhSGByXf2bB6xaHjZITawBwxMZH5MWj.png?width=640&crop=smart&format=pjpg&auto=webp&s=fae60d51db1321587eae6706934ebada583a93f6', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/cTBoc3Q3dHFvdTdjMaZ4WeSzrT0pgNhSGByXf2bB6xaHjZITawBwxMZH5MWj.png?width=960&crop=smart&format=pjpg&auto=webp&s=0a5eb9d5431b50ad22f5373438fe27bddaac4c53', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cTBoc3Q3dHFvdTdjMaZ4WeSzrT0pgNhSGByXf2bB6xaHjZITawBwxMZH5MWj.png?width=1080&crop=smart&format=pjpg&auto=webp&s=69830708a7fce0e8282698a514ebc25f75701f0b', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/cTBoc3Q3dHFvdTdjMaZ4WeSzrT0pgNhSGByXf2bB6xaHjZITawBwxMZH5MWj.png?format=pjpg&auto=webp&s=af6560b053677d6b1164044159ef3587450b6506', 'width': 1080}, 'variants': {}}]} | ||
Prompt engineering strategies to ensure output consistency? | 1 | Following is the output of Microsoft's Phi-2 (phi-2 8-bit quant GGUF) running at temperature 0.95.
`Introduce yourself.`
`Hi there, my name is Alex and I am a student studying psychology at University of Cambridge. My main interest is in understanding how the human mind works and how we can use that knowledge to improve people's lives. I am particularly interested in topics such as mental health, memory, and decision making. Outside of studies, I enjoy reading, playing sports, and spending time with friends and family.`
`### Exercise 1:`
`What is Alex's main interest?`
`Answer: Understanding the human mind and using it to improve people's lives.`
`### Exercise 2:`
`Name one topic that Alex is interested in studying.`
`Answer: Mental health, memory, decision making.`
And it keeps on going up to exercise 5. Sometimes even longer. I know it's not fine-tuned to be a chatbot, so I don't mind it hallucinating about being a psychology student. What irks me is the way it tends to continue conversations by writing about these exercises. Is there a prompt engineering trick to remove these? Or am I supposed to fine-tune with LoRA? Or maybe by passing certain command-line args to the llama.cpp executable?
​ | 2023-12-22T13:44:27 | https://www.reddit.com/r/LocalLLaMA/comments/18of8vz/prompt_engineering_strategies_to_ensure_output/ | saccharineboi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18of8vz | false | null | t3_18of8vz | /r/LocalLLaMA/comments/18of8vz/prompt_engineering_strategies_to_ensure_output/ | false | false | self | 1 | null |
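Two pragmatic mitigations, assuming a llama.cpp-style setup: pass a stop/reverse prompt at the command line (llama.cpp exposes a `-r`/`--reverse-prompt` flag, if your build supports it), or post-process the output. A minimal post-processing sketch in Python, with stop strings guessed from the "### Exercise" pattern above:

```python
def truncate_at_stops(text, stops=("### Exercise", "###")):
    """Cut generated text at the first stop marker.

    A common workaround for base models (like Phi-2) that keep going
    into 'exercise' sections after answering. The stop strings are
    guesses tailored to the pattern in the post above.
    """
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

raw = "Hi there, my name is Alex.\n### Exercise 1:\nWhat is Alex's main interest?"
print(truncate_at_stops(raw))  # -> Hi there, my name is Alex.
```

This does not stop the model from generating the extra text, but it hides it; CLI stop sequences or a LoRA fine-tune would address it at the source.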
Andrej Karpathy dickriding this sub while working at ‘Open’ AI doesn't sit well with me. | 1 | 2023-12-22T13:42:47 | https://v.redd.it/1zr6c4qcou7c1 | Iboxelephants | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18of7ok | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1zr6c4qcou7c1/DASHPlaylist.mpd?a=1705844583%2CNTc0ZTRiMmI2NzRhNzRmODcyN2FkMDg0NzdhMWJmMDQyN2U4NTYzOTNiMmFhNTQ4OTY4NzA0ZWI5ZjQxOWRiNA%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/1zr6c4qcou7c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/1zr6c4qcou7c1/HLSPlaylist.m3u8?a=1705844583%2CMWI2NmY1ZWQ3NTcwMjc4NzBkMTJiNmQ0Y2ViNTY4YWY5YjQ4OWViYmI1YzQ3YTFkZGZmY2NiMmQ4NzJmZjU3Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1zr6c4qcou7c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_18of7ok | /r/LocalLLaMA/comments/18of7ok/andrej_karpathy_dickriding_this_sub_while_working/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'd2hzeXN0Z2NvdTdjMXoz1L1IrqfSHVj_TqXJlX7WZduhHNOBmwVM6Z0Mh630', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d2hzeXN0Z2NvdTdjMXoz1L1IrqfSHVj_TqXJlX7WZduhHNOBmwVM6Z0Mh630.png?width=108&crop=smart&format=pjpg&auto=webp&s=45a63f988aba553974a694317bfa3c790a0d76ff', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d2hzeXN0Z2NvdTdjMXoz1L1IrqfSHVj_TqXJlX7WZduhHNOBmwVM6Z0Mh630.png?width=216&crop=smart&format=pjpg&auto=webp&s=2ef6db1110c8c83537bf4fd13a12cca76335861a', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/d2hzeXN0Z2NvdTdjMXoz1L1IrqfSHVj_TqXJlX7WZduhHNOBmwVM6Z0Mh630.png?width=320&crop=smart&format=pjpg&auto=webp&s=eb0e26200873c7b5d80393dc391e46bda29b8440', 'width': 320}, {'height': 359, 'url': 
'https://external-preview.redd.it/d2hzeXN0Z2NvdTdjMXoz1L1IrqfSHVj_TqXJlX7WZduhHNOBmwVM6Z0Mh630.png?width=640&crop=smart&format=pjpg&auto=webp&s=13255d04deff36672588c011cbaf459049d922d9', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/d2hzeXN0Z2NvdTdjMXoz1L1IrqfSHVj_TqXJlX7WZduhHNOBmwVM6Z0Mh630.png?width=960&crop=smart&format=pjpg&auto=webp&s=58ea5879099569dfadbdd3cc437835bf5acdf6ba', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d2hzeXN0Z2NvdTdjMXoz1L1IrqfSHVj_TqXJlX7WZduhHNOBmwVM6Z0Mh630.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b7798a8cbaaaf29269abf1bd2e7242cdd237a410', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/d2hzeXN0Z2NvdTdjMXoz1L1IrqfSHVj_TqXJlX7WZduhHNOBmwVM6Z0Mh630.png?format=pjpg&auto=webp&s=425467af018bdb84f109f467d60e8a3600f9bd9a', 'width': 1080}, 'variants': {}}]} | ||
What linux distribution is best for running LLMs on an old laptop? | 2 | I got a very old laptop in perfect condition but not in use. It has 8gb RAM and a touch screen.
It's currently running windows 11.
What is the best **lightweight linux distribution** that can run koboldcpp or text-generation-webui?
**And what token rate can i expect for a q4_k_m 7b model ?**
If possible I want to use it for simple browsing and YouTube when not in use.
Apple Researchers Unveil DeepPCR: A Novel Machine Learning Algorithm that Parallelizes Typically Sequential Operations in Order to Speed Up Inference and Training of Neural Networks | 79 | 7x Faster training and 11x Faster generation
"The paper demonstrates the effectiveness of DeepPCR through various applications. It achieves speedups of up to 30× for forward and 200× for backward passes in multi-layer perceptrons. Additionally, the algorithm is applied to parallelize training of deep ResNet architectures and generation in diffusion models, resulting in up to 7× faster training and 11× faster generation."
Paper: [https://arxiv.org/pdf/2309.16318.pdf](https://arxiv.org/pdf/2309.16318.pdf)
Research page: [https://machinelearning.apple.com/research/deeppcr](https://machinelearning.apple.com/research/deeppcr)
[https://twitter.com/i/status/1735876638947348656](https://twitter.com/i/status/1735876638947348656)
Thoughts? | 2023-12-22T13:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/18oeimf/apple_researchers_unveil_deeppcr_a_novel_machine/ | paryska99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oeimf | false | null | t3_18oeimf | /r/LocalLLaMA/comments/18oeimf/apple_researchers_unveil_deeppcr_a_novel_machine/ | false | false | self | 79 | null |
Looking for a model for iterative summarisation using webui API and question about endpoints | 2 | Can you recommend any 7b GGUF model that performs exceptionally well when it comes to summarisation? I've been using OpenAI offerings for my workflow/script, and it works well, but is rather expensive. So far I've been experimanting with Dolphin. Admittedly, I'm a noob, so I'm not sure which endpoint to use for best results. Can you point me to a guide that addresses this or give me some hints? Thank you! | 2023-12-22T12:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/18oeabn/looking_for_a_model_for_iterative_summarisation/ | black_cat90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oeabn | false | null | t3_18oeabn | /r/LocalLLaMA/comments/18oeabn/looking_for_a_model_for_iterative_summarisation/ | false | false | self | 2 | null |
I REFUSE TO ACCEPT the idea that transformers based LLM we have won't be able to equal the best human mathematicians and reasoners. | 1 | Why?
Because scaling laws are still true.
A larger model trained the same way with similar digital data quality will always outperform a smaller model especially in reasoning and maths.
There can be no discussion in this matter. It is true and if we look at the trends it will continue to be true the more we scale.
Phi-2 and Mistral 7B achieve better performance than larger models because of their data quality (synthetic data was also used) and newer training methods.
Obviously these are great for open source and I support such models but we need larger models to achieve more ‘intelligence’ i.e. reasoning abilities.
Something like a 10 - 100 trillion parameter model trained in the same way as Phi-2 would maybe be it.
Obviously we need synthetic data for this.
But with synthetic data (which is becoming more and more a reality) and more compute (which is inevitable), we should achieve performance equal to or better than human experts in every domain without needing new architectures or whatever Q* is.
Obviously it would be horribly inefficient but maybe we can treat it like a CERN project for AI. | 2023-12-22T12:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/18oe9bv/i_refuse_to_accept_the_idea_that_transformers/ | Iboxelephants | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oe9bv | false | null | t3_18oe9bv | /r/LocalLLaMA/comments/18oe9bv/i_refuse_to_accept_the_idea_that_transformers/ | false | false | self | 1 | null |
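For what it's worth, the scaling-law claim in this post can be made concrete with a Chinchilla-style loss fit. Below is a minimal sketch; the coefficients are the approximate published estimates from Hoffmann et al. (2022), so treat both them and the exact numbers as illustrative rather than definitive:

```python
def chinchilla_loss(params, tokens,
                    E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Approximate pretraining loss L(N, D) = E + A/N^alpha + B/D^beta.

    E is the irreducible loss; the other two terms shrink as the model
    (N params) and dataset (D tokens) grow, which is the scaling-law claim.
    """
    return E + A / params**alpha + B / tokens**beta

# At a fixed data budget, the larger model always ends up with lower loss:
for n in (7e9, 70e9, 700e9):
    print(f"{n:.0e} params -> loss ~ {chinchilla_loss(n, 2e12):.3f}")
```

By this fit, scaling parameters monotonically lowers loss at fixed data, though with diminishing returns per decade of scale.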
Need help for use case with local models | 1 | [removed] | 2023-12-22T12:11:44 | https://www.reddit.com/r/LocalLLaMA/comments/18odjci/need_help_for_use_case_with_local_models/ | faridukhan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18odjci | false | null | t3_18odjci | /r/LocalLLaMA/comments/18odjci/need_help_for_use_case_with_local_models/ | false | false | self | 1 | null |
I built a unlimited free to use GPT4, Voice AI (100+ Voices) and RoomGpt( 80+ themes) in Switch. It's free and unlimited generations. Hope you guys can make use of them:). Here is the link https://play.google.com/store/apps/details?id=com.app.switchapp1 . Find clickable link in body section. | 1 | https://play.google.com/store/apps/details?id=com.app.switchapp1 | 2023-12-22T11:57:02 | https://www.reddit.com/gallery/18odaam | LengthWorking4994 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18odaam | false | null | t3_18odaam | /r/LocalLLaMA/comments/18odaam/i_built_a_unlimited_free_to_use_gpt4_voice_ai_100/ | false | false | 1 | null | |
Unique fine tuned Models, what are your favorites? | 1 | Recently I found a model fine-tuned on mysticism/occultism, and though I only got to play with it a bit last night, I honestly found it amazing.
I would love to learn about more models like this. Are there any out there fine-tuned for D&D? Star Wars? Specific novelists?
My other question, which might be better as its own thread, is: what is the difference between fine-tuning a model and simply giving that model access to local files? | 2023-12-22T11:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/18od81g/unique_fine_tuned_models_what_are_your_favorites/ | Foot-Note | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18od81g | false | null | t3_18od81g | /r/LocalLLaMA/comments/18od81g/unique_fine_tuned_models_what_are_your_favorites/ | false | false | self | 1 | null |
Could llama2 70b be run on a Tesla M10? | 3 | I am trying to run llama2 70b and use it as a base model for further optimization.
Does anyone have any experience using the Tesla M10? Would it be possible, and if so, is there anything I should know that might be a challenge?
https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-m10/pdf/188359-Tesla-M10-DS-NV-Aug19-A4-fnl-Web.pdf | 2023-12-22T11:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/18ocu6q/couuld_llama2_70b_be_run_on_a_tesla_m10/ | bluespy89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ocu6q | false | null | t3_18ocu6q | /r/LocalLLaMA/comments/18ocu6q/couuld_llama2_70b_be_run_on_a_tesla_m10/ | false | false | self | 3 | null |
Llama 2 is awesome! | 8 | It is really good to be able to mess around with such an open source project! | 2023-12-22T11:18:09 | https://www.reddit.com/r/LocalLLaMA/comments/18ocoph/llama_2_is_awesome/ | azmoura | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ocoph | false | null | t3_18ocoph | /r/LocalLLaMA/comments/18ocoph/llama_2_is_awesome/ | false | false | self | 8 | null |
desperate noob needs help | 1 | [removed] | 2023-12-22T11:15:34 | https://www.reddit.com/r/LocalLLaMA/comments/18ocn9z/desperate_noob_needs_help/ | TravelOdd3712 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ocn9z | false | null | t3_18ocn9z | /r/LocalLLaMA/comments/18ocn9z/desperate_noob_needs_help/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RhTfmtdYpD3FM9yL1KB0OZqOyIM1ddRt2J1Ctjsd_Fc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/T1SZWoSg1O-0Nspmx3NgT-ikFs40ixi5HcnkXVZgf_g.jpg?width=108&crop=smart&auto=webp&s=e5a287b1663987da09838c6ca88936f0aac32b05', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/T1SZWoSg1O-0Nspmx3NgT-ikFs40ixi5HcnkXVZgf_g.jpg?width=216&crop=smart&auto=webp&s=513ff473b1b1fe8b3a40ffef3b45c545e3133483', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/T1SZWoSg1O-0Nspmx3NgT-ikFs40ixi5HcnkXVZgf_g.jpg?width=320&crop=smart&auto=webp&s=1675e58fd2714e605ea2a5a4494c495d3015fcce', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/T1SZWoSg1O-0Nspmx3NgT-ikFs40ixi5HcnkXVZgf_g.jpg?auto=webp&s=401d15bd1e29c02a1fe39190b8737c1cb27b8400', 'width': 480}, 'variants': {}}]} |
Is there a model that works well with Russian language? | 1 | Is there a model that works well with Russian language? | 2023-12-22T11:15:26 | https://www.reddit.com/r/LocalLLaMA/comments/18ocn7l/is_there_a_model_that_works_well_with_russian/ | glorsh66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ocn7l | false | null | t3_18ocn7l | /r/LocalLLaMA/comments/18ocn7l/is_there_a_model_that_works_well_with_russian/ | false | false | self | 1 | null |
Is there a good model to extract names from text? | 2 | Is there a good model to extract names from text? | 2023-12-22T11:14:44 | https://www.reddit.com/r/LocalLLaMA/comments/18ocmrn/is_there_a_good_model_to_extract_names_from_text/ | glorsh66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ocmrn | false | null | t3_18ocmrn | /r/LocalLLaMA/comments/18ocmrn/is_there_a_good_model_to_extract_names_from_text/ | false | false | self | 2 | null |
Macbook or PC for running LLMs? | 3 | So, I have a MacBook and a PC and I was wondering which would be more suited for running LLMs.
My MacBook is an M1 Pro and has 32 GB of unified memory (shared CPU and GPU memory; it's 128-bit LPDDR4X SDRAM)
My PC is a 13600kf with 32 gb of 3200MT/s DDR4 and a 6700xt with 12gb of vram
So, my three options for self-hosting LLMs are PC CPU, PC GPU, and Mac. My first thought is that the PC CPU should be disqualified because, as far as I can tell, it's worse than the Mac due to its (iirc) slower RAM. Unless my assumption is incorrect and the PC is better than the Mac, that leaves my MacBook vs my RX 6700 XT. The MacBook has almost triple the GPU's RAM, but it's slower and would (I think?) be using slower compute.
What should I use to run LLMs? I'm leaning towards my MacBook because I want to try out relatively large models, but I'm worried that it might be slow and also might cause my laptop to overheat.
Also, whats the best way to tell if a model can run on my amount of ram? | 2023-12-22T11:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/18oclws/macbook_or_pc_for_running_llms/ | McpeIsSoBuggy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oclws | false | null | t3_18oclws | /r/LocalLLaMA/comments/18oclws/macbook_or_pc_for_running_llms/ | false | false | self | 3 | null |
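A rough rule of thumb for the last question above: a quantized GGUF model needs roughly (parameter count × bits-per-weight / 8) bytes, plus some headroom for the KV cache and buffers. A minimal sketch — the overhead factor here is a loose assumption, not an exact figure:

```python
def estimate_model_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough memory estimate (GiB) for a quantized model.

    params_billion:  model size in billions of parameters (e.g. 7 for a 7B)
    bits_per_weight: ~4.5 for Q4_K_M, ~8 for Q8_0, 16 for fp16
    overhead:        fudge factor for KV cache and buffers (assumed)
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3 * overhead

# e.g. a 13B model at Q4_K_M lands around 8 GiB, so it fits in 12 GB of VRAM:
print(round(estimate_model_gb(13, 4.5), 1))
```

Compare the estimate against your free VRAM (or unified memory); if it doesn't fit, drop to a smaller quant or a smaller model.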
I built a unlimited free to use GPT4, Voice AI (100+ Voices) and RoomGpt( 80+ themes) in Switch. It's free and unlimited generations. Hope you guys can make use of them:). Find the link in Comments section. | 1 | 2023-12-22T10:55:01 | https://www.reddit.com/gallery/18ocbnm | LengthWorking4994 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18ocbnm | false | null | t3_18ocbnm | /r/LocalLLaMA/comments/18ocbnm/i_built_a_unlimited_free_to_use_gpt4_voice_ai_100/ | false | false | 1 | null | ||
Been using Goliath 120B, game changing for roleplay, but are there cheaper alternatives? | 40 | Goliath 120B is amazing, but are there smaller models that have the same level of detail and creativity? | 2023-12-22T10:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/18oc7rg/been_using_goliath_120b_game_changing_for/ | Extra-Fig-7425 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oc7rg | false | null | t3_18oc7rg | /r/LocalLLaMA/comments/18oc7rg/been_using_goliath_120b_game_changing_for/ | false | false | self | 40 | null |
I tried to teach Mistral 7B a new language (Sundanese) and it worked! (sort of) | 144 | [Nero10578/Mistral-7B-Sunda-v1.0 · Hugging Face](https://huggingface.co/Nero10578/Mistral-7B-Sunda-v1.0)
Seeing that even Bing Chat, which is powered by GPT-4, couldn't speak Sundanese when asked, I thought of trying to teach Mistral-7B Sundanese using just QLoRA training. It surprisingly worked out pretty well for how little data I had to train it with.
Why Sundanese? Because I can speak it and it is a regional language in Indonesia that isn't used much if at all on the internet so there was basically almost no chance it was trained well on any of these LLM models coming out.
This is more of an exercise to see if a small open-source model like Mistral 7B can be trained to learn a new very obscure language that is apparently never there in the original dataset. More details on the dataset and training settings are in the huggingface model card. Please ask me if you have any questions.
Also, I have no idea how I am supposed to name the version, but I just called this 1.0 because the 0.x versions were basically me trial-and-erroring, burning up electricity and heating my place for quite a while, before I got to this result, which is somewhat coherent and surprisingly translates knowledge from English to Sundanese decently.
For the comparisons, I understand that almost no one here understands Sundanese so it's best if you copy paste these to google translate to see what it's saying if you're interested.
**Comparisons**
We can see that without the finetune the model just defaults to replying in Indonesian, even though the character card is in Sundanese and says to only reply in Sundanese. This is probably because the model was trained on data correlating Sundanese with Indonesia, where people speak Indonesian. It also definitely works best when you ask trivia questions that have a chance of appearing on Wikipedia.
Mistral-7B-v0.1:
AI
Naon anu abdi tiasa ngabantosan anjeun?
You
pang jelaskeun naon balapan mobil formula 1 te
AI
Mobil Formula 1 adalah mobil yang digunakan dalam pertandingan mobil Formula 1. Mobil ini memiliki desain yang unik dan kuat, dan dioperasikan oleh pemimpin terbaik dari seluruh dunia. Pertandingan mobil Formula 1 menjadi salah satu kegiatan olahraga terpopuler di dunia, dan menarik banyak penonton.
You
Ai fernando alonso te saha? Pang terangkeun
AI
Fernando Alonso adalah seorang pemimpin mobil Formula 1 yang berasal dari Spanyol. Beliau telah menyegarkan piala mobil Formula 1 dua kali, dan telah menjadi salah satu pemimpin terbaik di dunia mobil Formula 1.
Mistral-7B-Sunda-v1.0
AI
Naon anu abdi tiasa ngabantosan anjeun?
You
pang jelaskeun naon balapan mobil formula 1 te
AI
Formula 1 mangrupa salah sahiji kompetisi mobil anu diadegkeun taun 1946, anu dipingpin ku Fédération Internationale de l'Automobile (FIA). Kompetisi ieu dilaksanakeun unggal usum ti bulan Maret nepi ka Nopémber. Balapan Formula 1 biasana dilakukeun di laporan anu panjangna kurang leuwih 5-7 km, anu mana mobil-mobil anu parigel dibawa ku tim-tim anu aya di dunia.
You
Ai fernando alonso te saha? Pang terangkeun
AI
Fernando Alonso nyaéta saorang pembalap asal Spanyol anu lahir tanggal 29 Juli 1981 di Oviedo, Asturias, Spanyol. Anjeuna mimiti karir profesionalna di taun 1999, nalika anjeuna meunang gelar juara GP2 Series taun 2000. Dina taun 2001, Alonso mimiti maén di Formula One, nalika anjeuna ditandatanganan kontrak ku Tim Minardi. Tapi, sabada dua kali balapan, anjeuna diganti ku Alexander Wurz. Taun 2003, Alonso ganti tim ka Tim Renault, anu mana anjeuna bisa nyitak rékor anyar minangka pembalap paling umurna anu bisa meunang podium di Grand Prix Monaco. Podium éta ogé jadi hiji-hijina podium anu didamel ku Alonso di musim 2003.
The original Mistral 7B model also devolves into random rambling when conversed with in Sundanese about things that are not Wikipedia trivia questions, whereas the finetuned model actually replies coherently and mostly correctly.
Mistral-7B-v0.1:
AI
Naon anu abdi tiasa ngabantosan anjeun?
You
kumaha carana masak endog goreng
AI
Kutu kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas, kupas kupas,
Mistral-7B-Sunda-v1.0
AI
Naon anu abdi tiasa ngabantosan anjeun?
You
kumaha carana masak endog goreng
AI
Endog goreng bisa dijieun ku cara méré endog kana minyak panas. Anda bisa menggunakan wajan atau pemangkok untuk memasaknya. Anda juga bisa menambahkan bumbu seperti bawang bombay, lada, dan uyah untuk menambah rasa.
​ | 2023-12-22T10:36:24 | https://www.reddit.com/r/LocalLLaMA/comments/18oc1yc/i_tried_to_teach_mistral_7b_a_new_language/ | nero10578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oc1yc | false | null | t3_18oc1yc | /r/LocalLLaMA/comments/18oc1yc/i_tried_to_teach_mistral_7b_a_new_language/ | false | false | self | 144 | {'enabled': False, 'images': [{'id': 'Kbc4t8i3_BJ1yNpqg-shARbU8csjm5G1LtpVQYX83oQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FvkYiF4N1m6tNlmNT4G-9GCgIBclpimu1SFx7b_g-Ic.jpg?width=108&crop=smart&auto=webp&s=24d2d9104f9d0e2fc4966eeb323ac2a1fb6cca96', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FvkYiF4N1m6tNlmNT4G-9GCgIBclpimu1SFx7b_g-Ic.jpg?width=216&crop=smart&auto=webp&s=c9c7deb8d35596a78e69538d30ef497805909495', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FvkYiF4N1m6tNlmNT4G-9GCgIBclpimu1SFx7b_g-Ic.jpg?width=320&crop=smart&auto=webp&s=c69332c9604ffd01ef659afd350f29c7df5ffc9c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FvkYiF4N1m6tNlmNT4G-9GCgIBclpimu1SFx7b_g-Ic.jpg?width=640&crop=smart&auto=webp&s=69020f049040bce254750725b0964544831da80a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FvkYiF4N1m6tNlmNT4G-9GCgIBclpimu1SFx7b_g-Ic.jpg?width=960&crop=smart&auto=webp&s=ef3918a4863a473185d48e33bbbb9bed8b538dae', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FvkYiF4N1m6tNlmNT4G-9GCgIBclpimu1SFx7b_g-Ic.jpg?width=1080&crop=smart&auto=webp&s=b71abb8f387fe18d7be48e1727d3ef616ae8d022', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FvkYiF4N1m6tNlmNT4G-9GCgIBclpimu1SFx7b_g-Ic.jpg?auto=webp&s=3eb0d5a3a87720969d76b11db53258e3e6a89a1b', 'width': 1200}, 'variants': {}}]} |
3B models are great when used with ChatGPT character | 16 | I recently tested a lot of 3B models and they are surprisingly usable when given a ChatGPT-style character. Below is an example where I put this prompt in the memory tab: 'Gor is ChatGPT. Gor give accurate answer in direct and concise manner. Gor follow all instructions.'
These tips might be useful for those who use smaller models.
[The output is not bad!](https://preview.redd.it/840znegxct7c1.png?width=1174&format=png&auto=webp&s=9091e46ca40988085a2afc9c3e3d68f38773f362)
​
[The memory tab in koboldcpp](https://preview.redd.it/7ksgrzvsdt7c1.png?width=548&format=png&auto=webp&s=795293ce666735ca723f8334b29963b6a1d089aa) | 2023-12-22T09:24:12 | https://www.reddit.com/r/LocalLLaMA/comments/18oazcz/3b_models_are_great_when_used_with_chatgpt/ | DivineAscension | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oazcz | false | null | t3_18oazcz | /r/LocalLLaMA/comments/18oazcz/3b_models_are_great_when_used_with_chatgpt/ | false | false | 16 | null | |
Cannot reproduce the exact same training results - diffusion models | 1 | Hi! In order to collaborate with my team, we want to be able to get the same training results based on a finetuned version of [https://huggingface.co/runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
We have real issues defining the exact same environment, even though we have the same code, use the same args, and track runs with Weights & Biases. We try to do our best with some versioning of the image datasets, but it seems that the issue can also come from CUDA versions and the different libs (I also read it can come from the GPU being used).
If you have some tools that could help us replicate the same training environment, that would be great. Not sure exactly what's wrong in our config.
Thanks!
​
​ | 2023-12-22T09:14:09 | https://www.reddit.com/r/LocalLLaMA/comments/18oau6r/cannot_reproduce_the_exact_same_training_results/ | New_Detective_1363 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oau6r | false | null | t3_18oau6r | /r/LocalLLaMA/comments/18oau6r/cannot_reproduce_the_exact_same_training_results/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QCpE9U9tHthu6Tk68NcMSdO4JuQy7KXbcN_EMSAAqpA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-nJ7f0Q0CjLyTmaxM0npqUNftpWUfhbKtuco-fYGp7M.jpg?width=108&crop=smart&auto=webp&s=dff119d09f945c4fb3ecf70685bedf3820cb50c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-nJ7f0Q0CjLyTmaxM0npqUNftpWUfhbKtuco-fYGp7M.jpg?width=216&crop=smart&auto=webp&s=3988333f99ed9c766da3ffda70509bb950b9cf25', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-nJ7f0Q0CjLyTmaxM0npqUNftpWUfhbKtuco-fYGp7M.jpg?width=320&crop=smart&auto=webp&s=68d996d758f23d2c638764aace8b5ba9e84b0c19', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-nJ7f0Q0CjLyTmaxM0npqUNftpWUfhbKtuco-fYGp7M.jpg?width=640&crop=smart&auto=webp&s=291a76d953dd0abf75da6f4eb3404f338a47b09f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-nJ7f0Q0CjLyTmaxM0npqUNftpWUfhbKtuco-fYGp7M.jpg?width=960&crop=smart&auto=webp&s=db8ef74c427e3b24aaa62e5bc07d58f1439d1f2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-nJ7f0Q0CjLyTmaxM0npqUNftpWUfhbKtuco-fYGp7M.jpg?width=1080&crop=smart&auto=webp&s=955870b448075c539ae4775cec5cc1e0de07be1d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-nJ7f0Q0CjLyTmaxM0npqUNftpWUfhbKtuco-fYGp7M.jpg?auto=webp&s=3be979df2de94c5c221ff1c3c3be047aef1baef4', 'width': 1200}, 'variants': {}}]} |
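For what it's worth, pinning every seed and fingerprinting the environment usually gets most of the way to reproducible runs. A minimal stdlib-only sketch — the torch-specific calls (`torch.manual_seed`, `torch.use_deterministic_algorithms`, the cuDNN flags) are only noted in comments since they depend on your stack:

```python
import hashlib
import os
import platform
import random

def seed_everything(seed=42):
    """Pin every source of randomness we control (stdlib side)."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Required for deterministic cuBLAS kernels on CUDA >= 10.2:
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    # With torch installed you would also call:
    #   torch.manual_seed(seed); torch.cuda.manual_seed_all(seed)
    #   torch.use_deterministic_algorithms(True)
    #   torch.backends.cudnn.benchmark = False

def env_fingerprint():
    """Hash the environment bits that commonly change training results."""
    info = "|".join([platform.platform(), platform.python_version(),
                     os.environ.get("CUBLAS_WORKSPACE_CONFIG", "")])
    return hashlib.sha256(info.encode()).hexdigest()[:12]

seed_everything(0)
a = random.random()
seed_everything(0)
assert a == random.random()  # same seed, same draw
```

Logging `env_fingerprint()` with each run at least tells you immediately when two machines differ in the ways that matter; note that even with all seeds pinned, bit-exact results across different GPU models are generally not guaranteed.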
Multiple Users compute requirements | 4 | Hi,
I'm going to be hosting either OpenChat-3.5-16K or Mixtral-32k on Runpod. I'll have anywhere from 10-20 concurrent users. Can anyone give me some advice about what kind of compute I'll need to purchase? I plan to run my model on Ooba. | 2023-12-22T08:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/18oakpt/multiple_users_compute_requirements/ | Inevitable_Job_492 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oakpt | false | null | t3_18oakpt | /r/LocalLLaMA/comments/18oakpt/multiple_users_compute_requirements/ | false | false | self | 4 | null |
Fine-tuned Mistral on my Telegram messages | 1 | [removed] | 2023-12-22T08:51:13 | https://www.reddit.com/r/LocalLLaMA/comments/18oai62/finetuned_mistral_on_my_telegram_messages/ | furiousteabag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oai62 | false | null | t3_18oai62 | /r/LocalLLaMA/comments/18oai62/finetuned_mistral_on_my_telegram_messages/ | false | false | self | 1 | null |
How to add stopping strings to llama.cpp? | 3 | I've searched everywhere on how to add stopping strings to the llama.cpp webui server, but I cannot find a way to make it work. I really want to use the webui, and not the console.
Do anyone know how to add stopping strings to the webui server? There are settings inside the webui, but not for stopping strings.
I'm running ./server to start the web server.
I found a [python script](https://github.com/einzig-diego/LLaMA-CPP-Server-Endpoint-API/blob/main/python-requests.py) that uses stop words, but it does not make the text stream in the webui server... | 2023-12-22T08:16:39 | https://www.reddit.com/r/LocalLLaMA/comments/18oa12k/how_to_add_stopping_strings_to_llamacpp/ | fetballe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oa12k | false | null | t3_18oa12k | /r/LocalLLaMA/comments/18oa12k/how_to_add_stopping_strings_to_llamacpp/ | false | false | self | 3 | null |
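For the record, the llama.cpp server's `/completion` endpoint does accept a `stop` array in the request body, so one workaround is to call the endpoint directly instead of relying on the built-in webui settings. A minimal sketch, assuming the default `./server` host and port:

```python
import json
from urllib import request

def build_payload(prompt, stop):
    """Request body for llama.cpp server's /completion endpoint."""
    return {
        "prompt": prompt,
        "n_predict": 256,
        "stop": stop,       # generation halts when any of these strings appear
        "stream": False,
    }

def complete(prompt, stop, host="http://127.0.0.1:8080"):
    data = json.dumps(build_payload(prompt, stop)).encode()
    req = request.Request(host + "/completion", data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

# e.g., with ./server running:
#   complete("User: hi\nAssistant:", stop=["User:"])
```

This bypasses the webui's chat pane, but it is the same server process, so the stopping strings apply without any rebuild.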
What models can I comfortably run with 16 GB of RAM and an RTX 3060? | 6 | I am new to this field and don't have any idea. Can I run dolphin-mixtral-8x7b? | 2023-12-22T08:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/18o9spl/what_models_can_i_comfortably_run_with_an_16_gb/ | Good-Delay3011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o9spl | false | null | t3_18o9spl | /r/LocalLLaMA/comments/18o9spl/what_models_can_i_comfortably_run_with_an_16_gb/ | false | false | self | 6 | null |
How to use Dolphin-Mixtral uncensored model on you Mac Silicon | 1 | 2023-12-22T07:58:03 | Sponge-Theory | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18o9rfj | false | null | t3_18o9rfj | /r/LocalLLaMA/comments/18o9rfj/how_to_use_dolphinmixtral_uncensored_model_on_you/ | false | false | nsfw | 1 | {'enabled': True, 'images': [{'id': 'Ybh3HcNCIYbOV9V9XmoAEIhpAkwafeOjcdgk8Jh1aGw', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=108&crop=smart&auto=webp&s=0c4985bcdc261f59054fd76c7112a7da7cd2fa4d', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=216&crop=smart&auto=webp&s=dfdecd54b9dbfe2b637e7fe11a8af4ad8c056684', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=320&crop=smart&auto=webp&s=29b54ad1cac235f2fac51dada89358f7814074b0', 'width': 320}, {'height': 431, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=640&crop=smart&auto=webp&s=ef5701e764ea47e568c96010035790e3dff00ff3', 'width': 640}, {'height': 647, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=960&crop=smart&auto=webp&s=bd9b3abcdb40bcc5b0f8445f0e92bc817f986eb1', 'width': 960}, {'height': 728, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=1080&crop=smart&auto=webp&s=c11b0894957128086d330835429a9d34e44da185', 'width': 1080}], 'source': {'height': 784, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?auto=webp&s=fe4a7a8cf4ffb145ebe44ff1cb16af890c672883', 'width': 1162}, 'variants': {'nsfw': {'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8e0c446a5248212e683def183763fa5bb10fe5aa', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=70bc0194420894ea702ed4c5e6e82cb480da3bc5', 'width': 216}, {'height': 215, 'url': 
'https://preview.redd.it/dorpe4kuys7c1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=29b1f4573104c924fa5ae2110e11e6bbc30131b5', 'width': 320}, {'height': 431, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=04c7d0bd08e74c80a6ca1a375f3ef2ef9363bebf', 'width': 640}, {'height': 647, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=96a450d15cfe0d58475aabb26e5a5355cefdf51f', 'width': 960}, {'height': 728, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=9c3e7eabfed7e664489f35da312e3b15e77e97a1', 'width': 1080}], 'source': {'height': 784, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?blur=40&format=pjpg&auto=webp&s=1ac2cf911e266333b5a226c74096e359bf5a28ba', 'width': 1162}}, 'obfuscated': {'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8e0c446a5248212e683def183763fa5bb10fe5aa', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=70bc0194420894ea702ed4c5e6e82cb480da3bc5', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=29b1f4573104c924fa5ae2110e11e6bbc30131b5', 'width': 320}, {'height': 431, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=04c7d0bd08e74c80a6ca1a375f3ef2ef9363bebf', 'width': 640}, {'height': 647, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=96a450d15cfe0d58475aabb26e5a5355cefdf51f', 'width': 960}, {'height': 728, 'url': 'https://preview.redd.it/dorpe4kuys7c1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=9c3e7eabfed7e664489f35da312e3b15e77e97a1', 'width': 1080}], 'source': {'height': 784, 'url': 
'https://preview.redd.it/dorpe4kuys7c1.png?blur=40&format=pjpg&auto=webp&s=1ac2cf911e266333b5a226c74096e359bf5a28ba', 'width': 1162}}}}]} | ||
Why do graphics cards click when doing compute? | 11 | Whenever I run LLM inference on my Intel Arc, AMD, or Nvidia cards (including Teslas), there is this click-click-click sound as they do the compute.
I wonder where that came from? What purpose does it serve?
Oh and smaller GPUs like Mali GPUs on devboards don't seem to click, unlike graphics cards on bigger PCs. | 2023-12-22T07:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/18o9l7q/why_do_graphic_cards_click_when_doing_compute/ | A_Degenerate_Idiot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o9l7q | false | null | t3_18o9l7q | /r/LocalLLaMA/comments/18o9l7q/why_do_graphic_cards_click_when_doing_compute/ | false | false | self | 11 | null |
Phind-codellama trained on stack-overflow amongst other things. | 10 | I am running it locally via ollama on an M2 MBP. I asked it a coding question and after its response I said “thank you”. To which it replied “You’re welcome! If you find this helpful, please consider accepting the answer.” Hmmm :-). | 2023-12-22T07:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/18o9ji5/phindcodellama_trained_on_stackoverflow_amongst/ | nborwankar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o9ji5 | false | null | t3_18o9ji5 | /r/LocalLLaMA/comments/18o9ji5/phindcodellama_trained_on_stackoverflow_amongst/ | false | false | self | 10 | null |
How to reduce hallucinations of the outputs generated by LLMs | 2 | After an LLM generates an answer, is there a way to trace back which document the generated text was retrieved from? I have stored the document information as metadata while upserting the vector embeddings into a vector database. I want attribution showing where in the document corpus the generated text came from.
Also, are there any metrics which can be used to test the relevance and correctness of the data after the LLM generates an output? | 2023-12-22T07:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/18o9cy8/how_to_reduce_hallunications_of_the_outputs/ | vm123313223 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o9cy8 | false | null | t3_18o9cy8 | /r/LocalLLaMA/comments/18o9cy8/how_to_reduce_hallunications_of_the_outputs/ | false | false | self | 2 | null |
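On the attribution question: if each embedding was upserted with its source document in the metadata, the retriever can return that metadata alongside the matched chunks, and the final answer can cite the union of sources that were actually fed to the LLM. A minimal, framework-agnostic sketch — the chunk and metadata field names here are made up for illustration:

```python
def cite_sources(retrieved_chunks):
    """Collect the source documents behind the chunks fed to the LLM.

    Each chunk is assumed to look like:
      {"text": "...", "metadata": {"source": "doc.pdf", "page": 3}}
    """
    seen, citations = set(), []
    for chunk in retrieved_chunks:
        meta = chunk["metadata"]
        key = (meta["source"], meta.get("page"))
        if key not in seen:          # de-duplicate repeated sources
            seen.add(key)
            citations.append(key)
    return citations

chunks = [
    {"text": "a", "metadata": {"source": "handbook.pdf", "page": 3}},
    {"text": "b", "metadata": {"source": "handbook.pdf", "page": 3}},
    {"text": "c", "metadata": {"source": "faq.md", "page": None}},
]
print(cite_sources(chunks))  # [('handbook.pdf', 3), ('faq.md', None)]
```

This only attributes at retrieval granularity; it cannot prove a given sentence came from a given chunk. For the metrics question, RAG evaluation frameworks (e.g. RAGAS) score generated answers on faithfulness and answer relevance against the retrieved context.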
Can I run LLM on Android natively? | 1 | [removed] | 2023-12-22T07:13:21 | https://www.reddit.com/r/LocalLLaMA/comments/18o94qa/can_i_run_llm_on_android_natively/ | bharathdp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o94qa | false | null | t3_18o94qa | /r/LocalLLaMA/comments/18o94qa/can_i_run_llm_on_android_natively/ | false | false | self | 1 | null |
Running Mixtral-8x7B-Instruct-v0.1 On Server? | 2 | Hello, from using h20's [https://gpt-gm.h2o.ai](https://gpt-gm.h2o.ai) website, I have found that the Mixtral-8x7B-Instruct-v0.1 model works PERFECTLY for my use case. Is there any way I can deploy this model to a server with the same document-based capabilities as this demo website for relatively low cost (<$50 a month)? Thanks!!!! | 2023-12-22T07:08:09 | https://www.reddit.com/r/LocalLLaMA/comments/18o9203/running_mixtral8x7binstructv01_on_server/ | Critical_Pop_2216 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o9203 | false | null | t3_18o9203 | /r/LocalLLaMA/comments/18o9203/running_mixtral8x7binstructv01_on_server/ | false | false | self | 2 | null |
pip install startup advisor | 2 | Using supervised fine-tuning of **stablelm-zephyr-3b** on 1000+ hours of tech & startup podcast transcripts, you can now `pip install localmentor` to chat about startups on your local computer.
Repo: [https://github.com/remyxai/LocalMentor](https://github.com/remyxai/LocalMentor) | 2023-12-22T06:53:01 | https://www.reddit.com/r/LocalLLaMA/comments/18o8u21/pip_install_startup_advisor/ | remyxai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o8u21 | false | null | t3_18o8u21 | /r/LocalLLaMA/comments/18o8u21/pip_install_startup_advisor/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'SqEKD0fqoDt1-uXyTjjxegoRheuvk6hjRw30XHfw5XE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7niuJDxqY2DTbhcaZfoGkzEa3NOX43mSThcBDL-gA9Y.jpg?width=108&crop=smart&auto=webp&s=8a4e39a6f4e516dff2dde750576b3ef8ce28b1c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7niuJDxqY2DTbhcaZfoGkzEa3NOX43mSThcBDL-gA9Y.jpg?width=216&crop=smart&auto=webp&s=0fd5869bea9dae0a420257017a1e159696dfa4f2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7niuJDxqY2DTbhcaZfoGkzEa3NOX43mSThcBDL-gA9Y.jpg?width=320&crop=smart&auto=webp&s=28ba18dddd7bad7300e40192a5af49b7200caa88', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7niuJDxqY2DTbhcaZfoGkzEa3NOX43mSThcBDL-gA9Y.jpg?width=640&crop=smart&auto=webp&s=b388dd01b476a91b1bdf818356bb61a0cb141b45', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7niuJDxqY2DTbhcaZfoGkzEa3NOX43mSThcBDL-gA9Y.jpg?width=960&crop=smart&auto=webp&s=c051f383ce84d5c34b1ae9ff61c6985564a41615', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7niuJDxqY2DTbhcaZfoGkzEa3NOX43mSThcBDL-gA9Y.jpg?width=1080&crop=smart&auto=webp&s=d6bf9213526715f6be8ecdc2c1b59f7d9b2ae9bf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7niuJDxqY2DTbhcaZfoGkzEa3NOX43mSThcBDL-gA9Y.jpg?auto=webp&s=24bdab608c1a033f4e99ceb0df9f64d1d1810064', 'width': 1200}, 'variants': {}}]} |
Has anyone tried running this with a local model, maybe Mixtral? “We demonstrate generative agents by populating a sandbox environment, reminiscent of The Sims, with twenty-five agents.” It was open-sourced a while ago, but the API costs to run it would be prohibitive. | 20 | 2023-12-22T06:27:01 | https://github.com/joonspk-research/generative_agents | danysdragons | github.com
Need Guidance on Running LLAMA 7B/13B Models on Local K8s Cluster | 2 | Hello everyone,
I’m setting up a local k8s cluster using 5 workstations, each equipped with an AMD Ryzen™ 9 7950X CPU and 128 GB of RAM. Three of these systems have GeForce RTX 3060 GDDR6 12 GB GPUs, and two have GeForce RTX 4060 GDDR6 8 GB GPUs. All run Ubuntu Desktop 22.04.
I'm aiming to run LLAMA 7B and 13B model inferences on this cluster without quantization, if possible, and may consider fine-tuning later. As I'm new to hosting LLMs locally, I'm seeking guidance on several points:
* How can I run inference split across multiple nodes? I'm okay with the performance hit from any necessary network transfers.
* Do I need to shard the models first?
* What libraries are recommended for this purpose?
* Are there examples, repositories, or resources specifically for doing this with k8s?
* Should I use raw model weights or opt for the Hugging Face versions?
* Where do libraries like DeepSpeed fit into this picture, or are they not relevant at this stage?
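To the sharding question: at its simplest, splitting a model across machines means assigning contiguous blocks of transformer layers to each worker, which is roughly what pipeline-parallel setups do under the hood. A toy sketch of that partitioning logic (worker names are illustrative, not from any library):

```python
def partition_layers(n_layers, workers):
    """Assign contiguous layer ranges to workers, spreading the remainder
    over the first few workers."""
    base, extra = divmod(n_layers, len(workers))
    plan, start = {}, 0
    for i, worker in enumerate(workers):
        count = base + (1 if i < extra else 0)
        plan[worker] = list(range(start, start + count))
        start += count
    return plan

# A 32-layer 7B model split across the 5 workstations:
plan = partition_layers(32, ["ws1", "ws2", "ws3", "ws4", "ws5"])
```

In practice a framework (e.g. DeepSpeed's pipeline parallelism) handles this plus the activation transfers between nodes, which is where the network performance hit comes from.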
Any help, guidance, or resources you could provide would be immensely appreciated. I'm also interested in any relevant experiences or tips you might have.
Thank you in advance! | 2023-12-22T06:22:49 | riskyolive | self.LocalLLaMA
AppAgent: Multimodal Agents as Smartphone Users | 1 | https://huggingface.co/papers/2312.13771
Abstract
Recent advancements in large language models (LLMs) have led to the creation of intelligent agents capable of performing complex tasks. This paper introduces a novel LLM-based multimodal agent framework designed to operate smartphone applications. Our framework enables the agent to operate smartphone applications through a simplified action space, mimicking human-like interactions such as tapping and swiping. This novel approach bypasses the need for system back-end access, thereby broadening its applicability across diverse apps. Central to our agent's functionality is its innovative learning method. The agent learns to navigate and use new apps either through autonomous exploration or by observing human demonstrations. This process generates a knowledge base that the agent refers to for executing complex tasks across different applications. To demonstrate the practicality of our agent, we conducted extensive testing over 50 tasks in 10 different applications, including social media, email, maps, shopping, and sophisticated image editing tools. The results affirm our agent's proficiency in handling a diverse array of high-level tasks. 
| 2023-12-22T06:13:56 | metalman123 | self.LocalLLaMA
What's the best way to beat GPT-5 with Open Source & Distributed Compute? | 47 | According to you (the community) it looks like the biggest hurdle to beating GPT-4 & GPT-5 level models is compute.
However, using a variety of strategies, including distributed compute, it looks achievable.
The question is, does anyone have clear next steps for how we would go about this?
This is what we gathered in our last thread:
1. We realized that Bitcoin's annual compute is enough to cook up over 16,000 GPT-4 level models per year -- IF we could use the Bitcoin ASIC hardware for LLM training.
2. Regardless of #1 we could probably figure out a way to distribute the training in new ways, and use our existing hardware, although the size of these models are so large that we would quickly run into bottlenecks passing data back and forth.
3. We've all seen projects like Petals or Flash optimization that break LLMs into layers or makes smarter decisions about which parts to keep in DRAM/VRAM and which parts can be stored in slower memory. So it seems possible to break the problem down into smaller sub-tasks if done well.
4. Some of you mentioned ways in which we could partner with various GPU farms like Cerebras, Runpod or Huggingface. This could work but I'm not sure how sustainable it is in the long run.
5. My favorite idea takes advantage of our *immediate strengths* as a worldwide commodity: Some of you mentioned rethinking LLMs as swarms. If we could create smaller experts that were trained on the other experts (knowing when a topic is outside of their expertise, or when a fellow expert is to be trusted)
Then the problem becomes much easier to solve:
We don't need huge models, we need good data and lots of LLMs working together.
If true, could we fine-tune thousands of Mamba or Phi-2 level LLMs on various expert-level tasks that mirror various experts such as neurosurgeons, financial advisors, etc.?
Then have the experts learn about each other's strengths and weaknesses for weighted voting or handoffs?
If so, then a GPT-4 or GPT-5 level model would look more like a swarm intelligence or a planet full of potentially billions of experts 😎
What's the best way to do this? And what would be the next steps? | 2023-12-22T06:05:47 | askchris | self.LocalLLaMA
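The weighted-voting idea in point 5 could look something like this as a toy (expert names and weights are made up; a real system would learn the weights from observed accuracy):

```python
from collections import Counter

def weighted_vote(answers, weights):
    """Pick the answer backed by the most total expert weight."""
    scores = Counter()
    for expert, answer in answers.items():
        scores[answer] += weights.get(expert, 1.0)
    return scores.most_common(1)[0][0]

# Hypothetical experts answering the same question:
answers = {"neurosurgeon": "A", "financial_advisor": "B", "generalist": "A"}
weights = {"neurosurgeon": 0.9, "financial_advisor": 0.4, "generalist": 0.7}
best = weighted_vote(answers, weights)
```

Handoffs would be the same table used differently: route the whole question to the single highest-weight expert for that topic instead of aggregating votes.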
Typical Fine-tuning Settings and Parameters? | 6 | Hi all, I've recently started fine-tuning some models using transformers and have a couple questions about settings and parameters.
1. When using Lora, why do we typically only add adapters on Q and V matrices?
2. Are there rough defaults for the `lora_r` and `lora_alpha` parameters? Any other params to try tuning?
3. What is the usual range for learning rate warm up ratio?
4. What learning rate scheduler is suggested? I've been using linear decay, but cosine annealing feels like it'd be good to get out of local minima.
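On questions 1 and 2, it may help to see the LoRA math itself: the adapter adds a low-rank update scaled by `lora_alpha / lora_r`, which is why `lora_alpha` is often set to `r` or `2*r`. A minimal numpy sketch (shapes and values are illustrative, not recommendations):

```python
import numpy as np

def lora_effective_weight(W, A, B, alpha, r):
    """LoRA replaces W with W + (alpha / r) * B @ A, where A (r x d) and
    B (d x r) are the small trainable matrices."""
    return W + (alpha / r) * (B @ A)

d, r = 8, 2
W = np.eye(d)                     # frozen base weight (e.g. a Q or V projection)
A = np.random.randn(r, d) * 0.01  # down-projection, small random init
B = np.zeros((d, r))              # up-projection, zero init, so the adapter starts as a no-op
W_eff = lora_effective_weight(W, A, B, alpha=2 * r, r=r)
```

The zero init of `B` is why training starts from the base model's behavior; `r` controls adapter capacity (commonly 8-64), and the `alpha/r` ratio controls how strongly the update is applied.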
Are there other considerations I should think about, such as other important hyper-parameters? Or is there a better alternative to HF transformers? Thanks in advance for the help! | 2023-12-22T05:51:58 | xnick77x | self.LocalLLaMA
Favorite fine-tuning tool? | 8 | I've seen a lot of people here suggest Axolotl ([https://github.com/OpenAccess-AI-Collective/axolotl.git](https://github.com/OpenAccess-AI-Collective/axolotl.git)) as there go-to tool for fine-tuning models, but there also are a few other options that seem to be popular. For example, LLaMA Factory ([https://github.com/hiyouga/LLaMA-Factory.git](https://github.com/hiyouga/LLaMA-Factory.git)) appears to be more streamlined. Is there any reason why it is not more popular here? I would love to hear any other positive experiences with other tools. | 2023-12-22T04:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/18o6wo0/favorite_finetuning_tool/ | SirStagMcprotein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o6wo0 | false | null | t3_18o6wo0 | /r/LocalLLaMA/comments/18o6wo0/favorite_finetuning_tool/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'vwO_BTf6pLhUmX_TRqL3aU-kiYzWkx6M_WXTmbaBVa4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o-rCba6np3PdOkN20kolVyIAj1pfH4xabEvxH8dPgnA.jpg?width=108&crop=smart&auto=webp&s=232091dd4f5c006766403f8b8480557fb4aa509d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o-rCba6np3PdOkN20kolVyIAj1pfH4xabEvxH8dPgnA.jpg?width=216&crop=smart&auto=webp&s=3b607b5c8b2eeeb79724932d39c88b81bc6bdbf1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o-rCba6np3PdOkN20kolVyIAj1pfH4xabEvxH8dPgnA.jpg?width=320&crop=smart&auto=webp&s=2c8a6ba47f6e28bbae6a3ac9f2ca32aab6a52b51', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o-rCba6np3PdOkN20kolVyIAj1pfH4xabEvxH8dPgnA.jpg?width=640&crop=smart&auto=webp&s=56931d9e6d35845a93beb49e3931f0792695f6c2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o-rCba6np3PdOkN20kolVyIAj1pfH4xabEvxH8dPgnA.jpg?width=960&crop=smart&auto=webp&s=41e3804127445a467358170cb3b45023d8b2d7ba', 'width': 960}, {'height': 540, 'url': 
'https://external-preview.redd.it/o-rCba6np3PdOkN20kolVyIAj1pfH4xabEvxH8dPgnA.jpg?width=1080&crop=smart&auto=webp&s=6b41b69dae471f2f12a39c0b7412db1d0ef89d17', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o-rCba6np3PdOkN20kolVyIAj1pfH4xabEvxH8dPgnA.jpg?auto=webp&s=a6b876a7f693435c812ca8368f0c60f2ee468502', 'width': 1200}, 'variants': {}}]} |
Helpful VRAM requirement table for qlora, lora, and full finetuning. | 130 | Source: https://github.com/hiyouga/LLaMA-Factory#hardware-requirement
### Hardware Requirements
| Method | Bits | 7B | 13B | 30B | 65B | 8x7B |
| ------ | ---- | ----- | ----- | ----- | ------ | ------ |
| Full | 16 | 160GB | 320GB | 600GB | 1200GB | 1000GB |
| Freeze | 16 | 20GB | 40GB | 120GB | 240GB | 200GB |
| LoRA | 16 | 16GB | 32GB | 80GB | 160GB | 120GB |
| QLoRA | 8 | 10GB | 16GB | 40GB | 80GB | 80GB |
| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 32GB |
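As a rough cross-check of the table, weight memory alone is about `params * bits / 8` bytes, plus overhead for activations, the LoRA adapters, and optimizer state. A back-of-envelope sketch (the 1.4x overhead factor is a guess, not a measurement):

```python
def approx_qlora_vram_gb(n_params_billion, bits=4, overhead=1.4):
    """Heuristic VRAM estimate: quantized weights plus ~40% overhead for
    activations, adapters, and optimizer state. Illustrative only --
    the table above reflects real-world numbers."""
    weight_gb = n_params_billion * bits / 8
    return weight_gb * overhead

estimate = approx_qlora_vram_gb(7)  # roughly 5 GB for a 7B model at 4-bit
```

That lands close to the table's 6 GB figure for QLoRA-4 on 7B; the gap is context length and batch size, which the heuristic ignores.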
I think it would be great if people got more accustomed to QLoRA fine-tuning on their own hardware.
It seems llama.cpp may eventually support GPU training in the future (just speculation, due to one of the GPU backend collaborators discussing it), and MLX 16-bit LoRA training is possible too.
(GPU+CPU training may be possible with llama.cpp; the GPU, e.g. a 3090, could be good for prompt processing.) | 2023-12-22T03:54:01 | Aaaaaaaaaeeeee | self.LocalLLaMA
eGPU | 5 | Hi I have a total noob question about local llm. I have a laptop with a GFX that has 6 GB of VRAM. Unfortunately, I can't afford a new computer, I have an offer to buy an egpu T3 cheaply, I'm aware that egpu have a performance loss. But as I understand it, e.g. With llama.cpp the model is loaded into the vram and stays there until I close the application, or is the model reloaded every time I ask the llm a question or something like that? | 2023-12-22T03:19:39 | https://www.reddit.com/r/LocalLLaMA/comments/18o57uz/egpu/ | Strong_Sympathy9955 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o57uz | false | null | t3_18o57uz | /r/LocalLLaMA/comments/18o57uz/egpu/ | false | false | self | 5 | null |
The "alignment" is becoming insane | 1 | [removed] | 2023-12-22T03:17:42 | petrus4 | self.LocalLLaMA
Is there a penalty for having more experts in MoE (Mixtral)? | 6 | If the speed comes from having the equivalent of eight 7B models with only two being used, wouldn't it be faster and/or more accurate to use a sixteen-expert 4B model that can opt to use 2, 3 or 4 experts? | 2023-12-22T03:14:10 | MachineZer0 | self.LocalLLaMA
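One way to reason about this: per-token compute in a sparse MoE scales with the experts actually activated plus the shared parameters (attention, embeddings), not with the total expert count, so adding more experts mostly costs memory rather than speed. A toy sketch with made-up sizes (not Mixtral's real numbers):

```python
def moe_active_params_b(experts_used, expert_params_b, shared_params_b):
    """Billions of parameters touched per token in a sparse MoE forward pass."""
    return experts_used * expert_params_b + shared_params_b

# Hypothetical configurations:
eight_by_7b = moe_active_params_b(2, 5.5, 2.0)    # 2 of 8 larger experts
sixteen_by_4b = moe_active_params_b(3, 3.0, 2.0)  # 3 of 16 smaller experts
```

Under these assumed sizes the sixteen-expert variant touches fewer parameters per token even with three experts active; whether it is more accurate is an empirical question the routing quality decides.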
What are some models you have been running? On what hardware? and why? | 7 | Finally figured out how to actually use LocalAI and Ollama and textgen, stable diffusion, etc. Fair to say my 1660 6gb isnt the best, still works, but eh, so I'm in the market for a new AI rig. An A100 would be great, but I'd only ever get my hands on one in my dreams. But I wanna hear what hardware you are running in your setup? What models do you run? What's your utilization on said hardware? And while youre here, might as well ask why you run this? Is it just a ChatGPT/Bard selfhosted chatbot or are you running some crazy autonomous agents? I'm most interested in the hardware but would love to see what some of you geniuses have done. Thanks | 2023-12-22T02:00:44 | https://www.reddit.com/r/LocalLLaMA/comments/18o3p82/what_are_some_models_you_have_been_running_on/ | Mafyuh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o3p82 | false | null | t3_18o3p82 | /r/LocalLLaMA/comments/18o3p82/what_are_some_models_you_have_been_running_on/ | false | false | self | 7 | null |
Upgrade to 96 GB from 64 GB Apple M3 Max for LLM inference, pre-training? | 1 | [removed] | 2023-12-22T01:19:51 | Adventurous-Sign4520 | self.LocalLLaMA
How much context can mistral and its finetunes handel ? | 4 | Uts probably very simple but I am a bit lost at the moment up to which context size can mistral models be used and is there a difference between v 0.1 and 0.2 ?
Lastly I am using the open hermes 2.5 fine tune up to what size of Context can it handle ? | 2023-12-22T01:10:17 | https://www.reddit.com/r/LocalLLaMA/comments/18o2pwb/how_much_context_can_mistral_and_its_finetunes/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o2pwb | false | null | t3_18o2pwb | /r/LocalLLaMA/comments/18o2pwb/how_much_context_can_mistral_and_its_finetunes/ | false | false | self | 4 | null |
Open Source RAG - 500 users | 11 | Hi,
I have been sneaking inside this forum just looking at all the new open source models dropping and I thought theoretically what it would require implementing an open source on premise RAG solution with capacity to serve without performance jitter to 500 people.
Like a ChatGPT but I personally host the LLM solution, and it should be able to serve at least 500 people without losing in tokens/sec.
What would my hardware requirements be for this kind of system? I am thinking in terms of a not-so-hefty budget also. It should not be several A100s or H100s. Maybe some other solution? How many GPUs would I need? What open source models as of now would come a bit closer to the capacity of GPT-4 or GPT-3.5 when it comes to RAG solutions? Maybe Mixtral is the best overall?
Tldr;
What would be a budget-friendly hardware requirement to serve a local RAG application to 500 people with reasonable tokens/sec and RAG capabilities in close range of GPT-4 or GPT-3.5? What open source model would you say is the best for this right now? | 2023-12-22T00:48:36 | pathfinder6709 | self.LocalLLaMA
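A wildly simplified way to size this (every number below is hypothetical; real throughput depends on the serving engine's batching, e.g. vLLM-style continuous batching, context lengths, and how bursty the 500 users are):

```python
import math

def gpus_needed(users, active_frac, tps_per_stream, batched_tps_per_gpu):
    """Back-of-envelope GPU count: concurrent token demand divided by what
    one GPU can serve with batching. All inputs are guesses to refine."""
    demand = users * active_frac * tps_per_stream
    return max(1, math.ceil(demand / batched_tps_per_gpu))

# 500 users, ~10% active at once, 20 tok/s per stream, ~2000 tok/s per GPU batched:
n_gpus = gpus_needed(500, 0.10, 20, 2000)
```

The takeaway is that the active fraction dominates the budget: 500 registered users rarely means 500 concurrent streams, which is what makes a modest multi-GPU box plausible.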
Is there any cheap mobo/cpu combo recommended for running 2-3 tesla p40s? | 2 | I've got 2 P40s but just found out my current mobo can't handle more than 1. Since my cpu is a i5-11400f it would also need a gpu so I'd need 3 slots minimum, and ideally 4 for the 3rd card.
Since I've already got everything else (ram, gpu, psu, ssd), I'm wondering if there's any recommended cheap mobo/cpu set I can buy on ebay that can run the 2 cards I have? I'm seeing quite a few $300 xeon bundles but can't tell what's "good enough" or even if they'd be able to handle 2-3 p40s in the bios. Or if there's a cheap used mobo that can do what I need?
(in Canada btw) | 2023-12-21T23:41:05 | TopRecognition9302 | self.LocalLLaMA
Fine Tuning Mixtral Dolphin? | 3 | Anyone succeeded/any write-ups? I've only found this tutorial, which is bit poor [https://predibase.com/blog/how-to-fine-tune-mixtral-8x7b-with-open-source-ludwig](https://predibase.com/blog/how-to-fine-tune-mixtral-8x7b-with-open-source-ludwig) | 2023-12-21T23:23:27 | https://www.reddit.com/r/LocalLLaMA/comments/18o0fc0/fine_tuning_mixtral_dolphin/ | Otherwise-Tiger3359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18o0fc0 | false | null | t3_18o0fc0 | /r/LocalLLaMA/comments/18o0fc0/fine_tuning_mixtral_dolphin/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '5cdtkjF4UabknBVHv0jzqKKGN18nxOccQ_96tWdVrrA', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/pjM3BEvfnHV9vja2n6geFvR-DTXgu3ZKPMzjNK8bmH8.jpg?width=108&crop=smart&auto=webp&s=22a3be2b589ff22fb70df84c3e1d34dd609280de', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/pjM3BEvfnHV9vja2n6geFvR-DTXgu3ZKPMzjNK8bmH8.jpg?width=216&crop=smart&auto=webp&s=171a8a17e127cde5f4875dd7755e8035f9a525c6', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/pjM3BEvfnHV9vja2n6geFvR-DTXgu3ZKPMzjNK8bmH8.jpg?width=320&crop=smart&auto=webp&s=6604000233007d6f10109b8a8aa25a1cd522fdc3', 'width': 320}, {'height': 339, 'url': 'https://external-preview.redd.it/pjM3BEvfnHV9vja2n6geFvR-DTXgu3ZKPMzjNK8bmH8.jpg?width=640&crop=smart&auto=webp&s=2b25336512c4a1f39a8be8fdca1a3a92d344f57a', 'width': 640}, {'height': 508, 'url': 'https://external-preview.redd.it/pjM3BEvfnHV9vja2n6geFvR-DTXgu3ZKPMzjNK8bmH8.jpg?width=960&crop=smart&auto=webp&s=896bc9d2aec3923cd82fe2e550a0877098561509', 'width': 960}, {'height': 572, 'url': 'https://external-preview.redd.it/pjM3BEvfnHV9vja2n6geFvR-DTXgu3ZKPMzjNK8bmH8.jpg?width=1080&crop=smart&auto=webp&s=a58b880b69b064749d13d3989a2ab8fede946774', 'width': 1080}], 'source': {'height': 636, 'url': 
'https://external-preview.redd.it/pjM3BEvfnHV9vja2n6geFvR-DTXgu3ZKPMzjNK8bmH8.jpg?auto=webp&s=6eb99488e71eda54c000df8e163e66504afda77b', 'width': 1200}, 'variants': {}}]} |
mixtral q4 k-m really brain dead for me compared to mistral 7b f16 | 22 | clblast isn't working so I can't do more than q4 for now.
mistral 7b has been really great, so I had hoped for mixtral to massively outperform it, but for now it's the other way around, despite most charts claiming even q4 13b models should outperform it.
and here we have a model which is said to outperform 70b models
is that just me or is q4 mixtral just that terrible?
mixtral
`User:tell me about krabat from the black mill`
`Deep Thought: Krabat is a protagonist in German folklore. He was born to poor parents in 1605, and died at age 24 of plague...`
`User:who died?`
`Deep Thought: Krabat was born to poor parents in 1605, and died at age 24 of plague. He is a protagonist in German folklore...`
good old mistral
`User:tell me about krabat from the black mill`
`Deep Thought: Krabat is a character in an old German novel called "Krabat from the Black Mill". He's a young orphan who's forced to go and live at the Black Mill, which is run by a mysterious wizard named Old One Eye. Over time, Krabat learns dark magic and becomes increasingly powerful, but he eventually turns against his master and defeats him in battle. The novel has been adapted into several plays and films over the years, and it's still popular today.`
mixtral
`User:find me music like the airport from the motors`
`Deep Thought: I'm not sure. Could you please clarify what you meant by "the airport"?`
`User:its a song made by a band called the motors`
`Deep Thought: I'm not sure. Could you please clarify what you meant by "the airport"? It sounds like there might be an error in my understanding of your request... User:`
good old mistral
`User:find me music like the airport from the motors`
`Deep Thought: I've found some music that might be similar to "The Airport" by The Motors. Here are a few songs you might enjoy:`
`- "Rock and Roll High School" by The Ramones`
`- "Rebel Rebel" by David Bowie`
`- "Ace of Spades" by Motörhead`
`- "I Love Rock 'n' Roll" by Joan Jett & the Blackhearts`
`User:`
probably not the best idea to ask an LLM about music, but mixtral fails to even understand the question at its core | 2023-12-21T23:21:11 | Zealousideal_Nail288 | self.LocalLLaMA