| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Phi-2 on Pixel 3! (llama.cpp PR) | 77 | 2023-12-16T02:45:52 | https://asciinema.org/a/22XyksdGZNwFbWIFoNqHQPC1i | Aaaaaaaaaeeeee | asciinema.org | 1970-01-01T00:00:00 | 0 | {} | 18jhb5j | false | null | t3_18jhb5j | /r/LocalLLaMA/comments/18jhb5j/phi2_on_pixel_3_llamacpp_pr/ | false | false | 77 | {'enabled': False, 'images': [{'id': '5zxV4fODIpT6GEUeoihYoNLRAHYO2q8WjPwb78kc-e8', 'resolutions': [{'height': 121, 'url': 'https://external-preview.redd.it/nrNSy_5t685j2w5_ZCiVqIA33FkRA_jRV-gwk0bdnNg.jpg?width=108&crop=smart&auto=webp&s=dccc8ea14dade044feeaf3e27a86a4b0c72b192a', 'width': 108}, {'height': 242, 'url': 'https://external-preview.redd.it/nrNSy_5t685j2w5_ZCiVqIA33FkRA_jRV-gwk0bdnNg.jpg?width=216&crop=smart&auto=webp&s=76ce6684d8ec5105240a46114ba278932381a549', 'width': 216}, {'height': 358, 'url': 'https://external-preview.redd.it/nrNSy_5t685j2w5_ZCiVqIA33FkRA_jRV-gwk0bdnNg.jpg?width=320&crop=smart&auto=webp&s=10730948167d3b4cd244667941f0166eeca9bcbd', 'width': 320}, {'height': 717, 'url': 'https://external-preview.redd.it/nrNSy_5t685j2w5_ZCiVqIA33FkRA_jRV-gwk0bdnNg.jpg?width=640&crop=smart&auto=webp&s=195468adeb06d0b6394063709e09fedf84f3f6f4', 'width': 640}, {'height': 1076, 'url': 'https://external-preview.redd.it/nrNSy_5t685j2w5_ZCiVqIA33FkRA_jRV-gwk0bdnNg.jpg?width=960&crop=smart&auto=webp&s=d7a04eac4e850eee72fc7cd2ca684ebc9eff1659', 'width': 960}], 'source': {'height': 1098, 'url': 'https://external-preview.redd.it/nrNSy_5t685j2w5_ZCiVqIA33FkRA_jRV-gwk0bdnNg.jpg?auto=webp&s=f6a8e077d159d3dd13d63f612e0351002d8faa8a', 'width': 979}, 'variants': {}}]} | ||
What can we infer about the future of GPU VRAM availability? Are cheap high VRAM cards or equivalent options possible/likely? | 1 | [removed] | 2023-12-16T02:03:32 | https://www.reddit.com/r/LocalLLaMA/comments/18jgixs/what_can_we_infer_about_the_future_of_gpu_vram/ | TopRecognition9302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jgixs | false | null | t3_18jgixs | /r/LocalLLaMA/comments/18jgixs/what_can_we_infer_about_the_future_of_gpu_vram/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bTpoSoAvbYS_GE9DWQ-PjqdFrx2S7pBaK_hLr_n5zrg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/kyEJjYR2YlpmXYwesHHIrhbkqePLXyY0FzPjtgfqbT4.jpg?width=108&crop=smart&auto=webp&s=5c2fa181556506296d17fb3667bfb99fdf2aae7e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/kyEJjYR2YlpmXYwesHHIrhbkqePLXyY0FzPjtgfqbT4.jpg?width=216&crop=smart&auto=webp&s=578818b8413d2ac61d067203ad4b64ab3bca40e6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/kyEJjYR2YlpmXYwesHHIrhbkqePLXyY0FzPjtgfqbT4.jpg?width=320&crop=smart&auto=webp&s=682c4873cfeb09a3e846065312d6fe555378cbac', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/kyEJjYR2YlpmXYwesHHIrhbkqePLXyY0FzPjtgfqbT4.jpg?auto=webp&s=1cf4542b7634fc7d9d45d1a82d7a5585bee0bfda', 'width': 480}, 'variants': {}}]} |
LMStudio vs Textgen-webui on Mac? Why such differences in speed? | 3 | Hey everyone, I've been a little bit confused recently with some of these textgen backends. Running on M1 Max 64gb.
I really love LMStudio; the UX is fantastic, and it's clearly got lots of optimisations for Mac. Crazy great speed, makes great use of Mac hardware, and I love whatever black magic they've got going on that just instantly fills up your RAM when the model runs inference and then drops it right back out once it's finished.
Problem is, even though it's great for Goliath and produces great output, it frickin sucks for Mixtral. I don't know why, but it refuses to give coherent output from Mixtral. Maybe it has something to do with the sampler, but honestly, even with the exact same settings in Textgen-webui I'm getting way better results.
Textgen-webui, on the other hand, seems to run much, much slower overall. Once the model gets going it maybe starts to get decent speed, but generally the t/s is way slower. Weirdly, I also notice it doesn't have the black magic going on with the RAM - instead it just loads the model up and keeps it there, with no apparent way to have it dynamically load in. It also takes up way less RAM, for whatever reason; LMStudio q8 Mixtral takes up 60gb of RAM at 32k context, while Textgen-webui is only taking up 15gb..?? Same with Goliath. Not only that, but it takes so long trying to load Goliath (been staring at it attempting to load the first token for a good 5 minutes now) that it's not usable at all.
So I guess my question is; what gives? They're both llama.cpp based right? So how come the implementations end up being so different? How do I get textgen-webui to behave more like LMStudio and run Goliath? | 2023-12-16T01:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/18jgc1b/lmstudio_vs_textgenwebui_on_mac_why_such/ | OldAd9530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jgc1b | false | null | t3_18jgc1b | /r/LocalLLaMA/comments/18jgc1b/lmstudio_vs_textgenwebui_on_mac_why_such/ | false | false | self | 3 | null |
[Q]: Is it possible to compile llama.cpp for multiple Mac architectures | 1 | [removed] | 2023-12-16T01:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/18jg9jm/q_is_it_possible_to_compile_llamacpp_for_multiple/ | hhh312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jg9jm | false | null | t3_18jg9jm | /r/LocalLLaMA/comments/18jg9jm/q_is_it_possible_to_compile_llamacpp_for_multiple/ | false | false | self | 1 | null |
Do I need different executables for different CPU architectures? | 1 | [removed] | 2023-12-16T01:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/18jg6hn/do_i_need_different_executables_for_different_cpu/ | hhh312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jg6hn | false | null | t3_18jg6hn | /r/LocalLLaMA/comments/18jg6hn/do_i_need_different_executables_for_different_cpu/ | false | false | self | 1 | null |
Questions regarding 70b Models in Oobabooga | 2 | 1. I have a 3090 with 24GB of VRAM and 64GB of regular RAM. I should be able to run them somehow, right?
2. If yes, how? What settings or types of models should I look for? When I try to load even 34B models, they don't load. I don't understand what settings need to be tweaked, I guess?
3. I use Oobabooga, btw. Is that good? Bad? Doesn't matter? Alternatives?
4. Any recommendations for models? I like totally uncensored models that I can play around with. Tiefighter was pretty good. Wizard Vicuna, MythoMax, etc. Stuff like that in the 70B range?
Thanks. I know these answers must be pretty easy.. just a few settings here and there yeah? | 2023-12-16T01:35:29 | https://www.reddit.com/r/LocalLLaMA/comments/18jg0ab/questions_regarding_70b_models_in_oobabooga/ | assfaceMCdickbutt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jg0ab | false | null | t3_18jg0ab | /r/LocalLLaMA/comments/18jg0ab/questions_regarding_70b_models_in_oobabooga/ | false | false | self | 2 | null |
Want to learn how to run a LLM. | 1 | I am newer to the LLM space and have always been fascinated. I don’t have a super heavy rig but would like to try mistral etc. is there a guide on running it on a cloud provider so I can train it without the local resources. | 2023-12-16T01:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/18jfuiq/want_to_learn_how_to_run_a_llm/ | Ok-Activity-2953 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jfuiq | false | null | t3_18jfuiq | /r/LocalLLaMA/comments/18jfuiq/want_to_learn_how_to_run_a_llm/ | false | false | self | 1 | null |
How are you running your models? | 8 | I've tried a couple of different methods; koboldcpp seems to run the best. I heard great things about ExLlamav2, so I've been trying to use it. I installed ExUI, but when trying to load models on that, the progress bar fills up and gets stuck. Trying to run ExLlamav2 models on textgen-webui gets me a bit further, as the models load and I can generate with them, but the output is incoherent nonsense.
Anyone have any recommendations for other methods or help with exLlamav2/ExUI? It seems like a great platform, but there isn't really any help documentation for it. | 2023-12-16T01:21:53 | https://www.reddit.com/r/LocalLLaMA/comments/18jfqzx/how_are_you_running_your_models/ | Salendron2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jfqzx | false | null | t3_18jfqzx | /r/LocalLLaMA/comments/18jfqzx/how_are_you_running_your_models/ | false | false | self | 8 | null |
QUICK Mistral Fine Tune Question | 3 | I am trying to fine-tune Mistral 7B on some of my personal project info. I am creating the dataset, and I'm not sure if the input text file should be formatted like:
Example A:
\### Human: What are some notable Japanese manufacturers that produced vintage race quads/ATVs during the late 70s and mid-90s?
\### Assistant: Some notable Japanese manufacturers during this period include Yamaha, Honda, Kawasaki, and Suzuki. These companies produced iconic models that are highly sought after by enthusiasts and collectors.
​
\### Human: Could you explain the significance of the Yamaha Banshee in the vintage race quad/ATV community?
\### Assistant: The Yamaha Banshee, produced from 1987 to 2006, is highly regarded for its powerful twin-cylinder two-stroke engine and impressive performance. It's a favorite among racing enthusiasts and is often modified for enhanced power and speed.
OR-------------------------------------
Example B: (no lines/spaces)
\### Human: What are some notable Japanese manufacturers that produced vintage race quads/ATVs during the late 70s and mid-90s?### Assistant: Some notable Japanese manufacturers during this period include Yamaha, Honda, Kawasaki, and Suzuki. These companies produced iconic models that are highly sought after by enthusiasts and collectors.### Human: Could you explain the significance of the Yamaha Banshee in the vintage race quad/ATV community?### Assistant: The Yamaha Banshee, produced from 1987 to 2006, is highly regarded for its powerful twin-cylinder two-stroke engine and impressive performance. It's a favorite among racing enthusiasts and is often modified for enhanced power and speed.
​
\-----------------------------------------------------------------
Also, I am getting this error using the code below; not sure what's up. I am currently testing on a laptop with a small GPU (3060 6GB), so I'm not sure if that's the issue or if I have something else punched in wrong.
THANKS A TON LADIES AND GENTS!
​ | 2023-12-16T01:03:43 | https://www.reddit.com/r/LocalLLaMA/comments/18jfe6y/quick_mistral_fine_tune_question/ | aallsbury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jfe6y | false | null | t3_18jfe6y | /r/LocalLLaMA/comments/18jfe6y/quick_mistral_fine_tune_question/ | false | false | self | 3 | null |
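For the formatting question in the post above, one common alternative to hand-written "### Human:/### Assistant:" markers is to let the base model's own chat template render the turns, so the separators and whitespace are guaranteed to match what the model expects at inference time. Below is a minimal sketch assuming the Hugging Face transformers chat-template API and the Mistral-7B-Instruct tokenizer; the model name and the use of a template at all are assumptions on my part, not something stated in the post.

```python
# Minimal sketch: render the example conversation from the post with the base
# model's own chat template instead of hand-rolled "### Human:" markers.
# Assumes transformers >= 4.34 (apply_chat_template) and the
# mistralai/Mistral-7B-Instruct-v0.2 tokenizer -- swap in your actual base model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

conversation = [
    {"role": "user", "content": "What are some notable Japanese manufacturers that "
                                "produced vintage race quads/ATVs during the late 70s and mid-90s?"},
    {"role": "assistant", "content": "Some notable Japanese manufacturers during this period "
                                     "include Yamaha, Honda, Kawasaki, and Suzuki."},
    {"role": "user", "content": "Could you explain the significance of the Yamaha Banshee "
                                "in the vintage race quad/ATV community?"},
    {"role": "assistant", "content": "The Yamaha Banshee, produced from 1987 to 2006, is highly "
                                     "regarded for its powerful twin-cylinder two-stroke engine."},
]

# apply_chat_template inserts the exact separators the model was trained with
# (e.g. [INST] ... [/INST] for Mistral-Instruct), so the blank-line question
# stops mattering: the template decides the whitespace.
text = tokenizer.apply_chat_template(conversation, tokenize=False)
print(text)
```

Whichever format is chosen, being consistent between training and inference matters more than whether blank lines separate the turns.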
Anyone have any experience using both a 4090 and a 2080ti? | 3 | I'm upgrading from a 2080ti to a 4090 and instead of trying to sell the 2080ti, I thought I'd see if it's feasible to hold on to it. Unfortunately I can't find any resources online of anyone using both for local llms. I plan to mostly just do inference. If I do any training, I'll just rent some cloud gpus. Has anyone tried using both? Looking to get some thoughts on this.
Here's some hardware info:
Ryzen 3900X
64GB DDR4 RAM
PCIe x16 slots (split to x8 when running dual GPUs)
I have room for both GPUs, enough cooling, and enough PSU power.
Linux OS | 2023-12-16T00:59:22 | https://www.reddit.com/r/LocalLLaMA/comments/18jfatb/anyone_have_any_experience_using_both_a_4090_and/ | jun2san | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jfatb | false | null | t3_18jfatb | /r/LocalLLaMA/comments/18jfatb/anyone_have_any_experience_using_both_a_4090_and/ | false | false | self | 3 | null |
What is the smallest possible model (file size)? Or how to create a basic chat model? | 8 | I'm interested in bundling llama.cpp with a small model (say sub 50 megs) that can handle chat.
Are there any micro models like this? It should recognize basic conversational English like 'hi' and 'how are you', and give the user a taste of how LLMs work.
Or how can I go about creating one? I saw the Shakespeare examples, but I want modern English, and of course have it make sense of user input and its responses | 2023-12-16T00:53:52 | https://www.reddit.com/r/LocalLLaMA/comments/18jf6z0/what_is_the_smallest_possible_model_file_size_or/ | herozorro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jf6z0 | false | null | t3_18jf6z0 | /r/LocalLLaMA/comments/18jf6z0/what_is_the_smallest_possible_model_file_size_or/ | false | false | self | 8 | null |
The Mixtral Timeline | 28 | 2023-12-16T00:38:26 | cubestar362 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18jevwg | false | null | t3_18jevwg | /r/LocalLLaMA/comments/18jevwg/the_mixtral_timeline/ | false | false | default | 28 | {'enabled': True, 'images': [{'id': 'u7kqt10hxj6c1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/u7kqt10hxj6c1.png?width=108&crop=smart&auto=webp&s=93d2fb408c380e8bd189fcae26566b809fc9ab4d', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/u7kqt10hxj6c1.png?width=216&crop=smart&auto=webp&s=cf7a5b0ca81755ecd4cd9f799f1ff808de542fa1', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/u7kqt10hxj6c1.png?width=320&crop=smart&auto=webp&s=f8eddd021ba738f5672c8c811546dcb5419c8a36', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/u7kqt10hxj6c1.png?width=640&crop=smart&auto=webp&s=2d21c9600511b30aaf730daef483ecfda9796814', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/u7kqt10hxj6c1.png?width=960&crop=smart&auto=webp&s=8d062fd1cf667ec0f56a713ba8cbd712966aeaca', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/u7kqt10hxj6c1.png?width=1080&crop=smart&auto=webp&s=f44d82c1d88b68707975bb76d348a2e4bf9203b8', 'width': 1080}], 'source': {'height': 8920, 'url': 'https://preview.redd.it/u7kqt10hxj6c1.png?auto=webp&s=6d8c4dfb93e905c520644b882cfe77002024db80', 'width': 1680}, 'variants': {}}]} | ||
Tired of small models? Did you miss: Tulu V2 DPO 70B | 19 | I bought a good machine to be able to run big models when the craze of Llama 2 started.
While it's nice seeing all the small models doing great, so more people can access the world of LLMs, bigger will always be better (just talking about LLMs, guys ;) ).
I tend to miss models if they aren't plugged on this subreddit or trending on Hugging Face.
But I found this, less than a month old: https://huggingface.co/allenai/tulu-2-dpo-70b
Gguf here : https://huggingface.co/TheBloke/tulu-2-dpo-70B-GGUF
It's among the top non proprietary model : https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
It's a generalist model, had a lot of iteration, and I found it pretty good during my small home testing.
It's censored, but if that bothers you, it's easy to jailbreak with the "Sure thing" trick.
./main -ngl 32 -m ./models/tulu-2-dpo-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>`\n` {PROMPT} `\n`<|assistant|>`\n` Sure thing"
Just thought I'd share. | 2023-12-16T00:08:12 | https://www.reddit.com/r/LocalLLaMA/comments/18je9es/tired_of_small_model_did_you_miss_tulu_v2_dpo_70b/ | mantafloppy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18je9es | false | null | t3_18je9es | /r/LocalLLaMA/comments/18je9es/tired_of_small_model_did_you_miss_tulu_v2_dpo_70b/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'r0dcm8h1XGXKNrOLMLxF57-0dqI9Bxn4QQPAcorCa_8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/47tgYyvKEdssXNneetrfYCeNSf-lA4OACg7mSIKhw6Q.jpg?width=108&crop=smart&auto=webp&s=b3096688a2bc4ab9f074e581f2cc929349d757fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/47tgYyvKEdssXNneetrfYCeNSf-lA4OACg7mSIKhw6Q.jpg?width=216&crop=smart&auto=webp&s=1686ced22b7828e25c83d74e938521c6644a8ef8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/47tgYyvKEdssXNneetrfYCeNSf-lA4OACg7mSIKhw6Q.jpg?width=320&crop=smart&auto=webp&s=2fa92e8eee60d88a92481316b904c7c31de99c84', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/47tgYyvKEdssXNneetrfYCeNSf-lA4OACg7mSIKhw6Q.jpg?width=640&crop=smart&auto=webp&s=108c21099f4d7c2f193ce7ee1699ab0b1bca9fef', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/47tgYyvKEdssXNneetrfYCeNSf-lA4OACg7mSIKhw6Q.jpg?width=960&crop=smart&auto=webp&s=8066e8bd8cf0269204d90b58adb8ead1cc4c8730', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/47tgYyvKEdssXNneetrfYCeNSf-lA4OACg7mSIKhw6Q.jpg?width=1080&crop=smart&auto=webp&s=fb43a6ed7fbf945c190abf888686ed93b4bf506d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/47tgYyvKEdssXNneetrfYCeNSf-lA4OACg7mSIKhw6Q.jpg?auto=webp&s=4457dbf604bac75e3d21c8b861a872f91577426d', 'width': 1200}, 'variants': {}}]} |
Is "loss" the be-all-end-all metric to look for during training? | 5 | I set a very low learning rate and high number of epoch for my training.
I see my loss has basically stabilised at around \~0.5. But my epochs have not finished yet.
Is there any point in continuing the training? Will the model get any better at understanding my data if I leave it to complete the total epochs? | 2023-12-15T23:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/18jdzfq/is_loss_the_beallendall_metric_to_look_for_during/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jdzfq | false | null | t3_18jdzfq | /r/LocalLLaMA/comments/18jdzfq/is_loss_the_beallendall_metric_to_look_for_during/ | false | false | self | 5 | null |
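A held-out validation loss is usually a more useful stopping signal than the training loss the post is watching: once eval loss flattens or starts climbing, extra epochs mostly buy overfitting rather than better understanding of the data. Here is a minimal sketch of wiring that up with the Hugging Face Trainer; `model`, `train_ds`, and `eval_ds` are placeholders for whatever is already being fine-tuned, and the argument names assume a recent transformers release.

```python
# Sketch: judge progress by held-out eval loss instead of running out the full
# epoch budget. `model`, `train_ds`, and `eval_ds` are placeholders for the
# existing fine-tuning setup, not something defined here.
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=10,               # an upper bound, not a target
    evaluation_strategy="steps",       # evaluate on the held-out split regularly
    eval_steps=200,
    save_strategy="steps",             # must match evaluation_strategy for load_best_model_at_end
    save_steps=200,
    load_best_model_at_end=True,       # keep the checkpoint with the lowest eval loss
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    # stop when eval loss hasn't improved for 3 consecutive evaluations
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```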
We’ve built LoRA fine-tuning and serving for Mixtral | 31 | Hey everyone, Justus from [Haven](https://haven.run/) here - two weeks ago, we launched our platform that lets people fine-tune, chat with, and export open source LLMs. One week ago, we launched [mamba-chat](https://github.com/havenhq/mamba-chat), probably the best non-transformer LLM.
Today, we’ve added support for Mixtral 8x7b Instruct to our platform. You can fine-tune for a few dollars (to cover the GPU time) and chat with the resulting model for free.
Some stuff that’s coming soon:
* We’re working on a per-token-priced API, that’s probably going to go live in the next few days (to run your fine-tuned models).
* We’re also planning to release the Haven code as open source under Apache 2.0. Expect that to happen some time next week.
Until then, you can fine-tune models [here](https://app.haven.run/models). Would love to get your feedback! | 2023-12-15T23:47:32 | https://www.reddit.com/r/LocalLLaMA/comments/18jdtoz/weve_built_lora_finetuning_and_serving_for_mixtral/ | pip-install-torch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jdtoz | false | null | t3_18jdtoz | /r/LocalLLaMA/comments/18jdtoz/weve_built_lora_finetuning_and_serving_for_mixtral/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'pTKqHjhoCugrW2rJAn5c3mQ4bp39CO2q-VCteGDYE7Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=108&crop=smart&auto=webp&s=c72722ebfe18850415d6d897244df540fef828c6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=216&crop=smart&auto=webp&s=bd45ce295e3c93b79cfc4bb35bd809d08cd58369', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=320&crop=smart&auto=webp&s=ca57191da0e4ed1530f68372d845eec14099d40f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=640&crop=smart&auto=webp&s=e04cbbaafb467addad6f22d31af4f2e792859dcb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=960&crop=smart&auto=webp&s=0170ecfc57ed080894a7f9e61a0aac13e55fcfc5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?width=1080&crop=smart&auto=webp&s=8a01e162e1a4866f6bdcfccc62c934560f3ab555', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/48ZVBLyHwf6RTg1Z8euphyECkk8nLgMQRr9g0GeQZyE.jpg?auto=webp&s=e8a8f4f3b77dcbcee4405729ecaabbc5099d7709', 'width': 1200}, 'variants': {}}]} |
LocalGPT | 1 | I've been exploring using documents as knowledge bases for LLMs.
So far I've used LocalGPT and then ChatGPT-4 to question them about the example Orca paper included in LocalGPT. I've found that their ability to know anything useful about the document is far, far lower than their general knowledge ability.
Am I right in thinking this is just because they have much less data in the document, so deriving any patterns in it is extremely hard, or can a lot be done with the vectorisation of the document?
If the first assumption is correct then will document analysis using vectorisation always be rubbish?
Cheers | 2023-12-15T23:27:09 | https://www.reddit.com/r/LocalLLaMA/comments/18jdej0/localgpt/ | Breath_Unique | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jdej0 | false | null | t3_18jdej0 | /r/LocalLLaMA/comments/18jdej0/localgpt/ | false | false | self | 1 | null |
CogAgent: A Visual Language Model for GUI Agents | 28 | Model and code: [https://github.com/THUDM/CogVLM](https://github.com/THUDM/CogVLM)
Paper: [https://arxiv.org/abs/2312.08914](https://arxiv.org/abs/2312.08914)
Abstract
>People are spending an enormous amount of time on digital devices through graphical user interfaces (GUIs), e.g., computer or smartphone screens. Large language models (LLMs) such as ChatGPT can assist people in tasks like writing emails, but struggle to understand and interact with GUIs, thus limiting their potential to increase automation levels. In this paper, we introduce CogAgent, an 18-billion-parameter visual language model (VLM) specializing in GUI understanding and navigation. By utilizing both low-resolution and high-resolution image encoders, CogAgent supports input at a resolution of 1120\*1120, enabling it to recognize tiny page elements and text. As a generalist visual language model, CogAgent achieves the state of the art on five text-rich and four general VQA benchmarks, including VQAv2, OK-VQA, Text-VQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE. CogAgent, using only screenshots as input, outperforms LLM-based methods that consume extracted HTML text on both PC and Android GUI navigation tasks -- Mind2Web and AITW, advancing the state of the art.
Examples:
https://preview.redd.it/c34c3pgajj6c1.jpg?width=5874&format=pjpg&auto=webp&s=139b411c58798c20c640044d04dccb6bc349b799
https://preview.redd.it/82m70qqajj6c1.png?width=966&format=png&auto=webp&s=1eca61a029e2f5e3022fe9811c7c05b9397985c1 | 2023-12-15T23:13:12 | https://www.reddit.com/r/LocalLLaMA/comments/18jd40x/cogagent_a_visual_language_model_for_gui_agents/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jd40x | false | null | t3_18jd40x | /r/LocalLLaMA/comments/18jd40x/cogagent_a_visual_language_model_for_gui_agents/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'XKEv6OKlJ1DkK0YZPaHB7q2Z9XgWj_ukCXnwxdjGYt0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kBsgrzhh7EiQLMa-nCEkX4vzC6g1Vi0gN7qM9E58Shw.jpg?width=108&crop=smart&auto=webp&s=bb9b5bce9865452497162108b466d0fbe50ad1fa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kBsgrzhh7EiQLMa-nCEkX4vzC6g1Vi0gN7qM9E58Shw.jpg?width=216&crop=smart&auto=webp&s=45ecdb8b46c6acadf138cc07ad0c22e547dc9d60', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kBsgrzhh7EiQLMa-nCEkX4vzC6g1Vi0gN7qM9E58Shw.jpg?width=320&crop=smart&auto=webp&s=18c1644b3ae1482d88493e0d00c184a2d4fdb545', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kBsgrzhh7EiQLMa-nCEkX4vzC6g1Vi0gN7qM9E58Shw.jpg?width=640&crop=smart&auto=webp&s=779cda889077a1a1f14b0caab929613d482db1eb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kBsgrzhh7EiQLMa-nCEkX4vzC6g1Vi0gN7qM9E58Shw.jpg?width=960&crop=smart&auto=webp&s=a2d53d5abd6632cd147b8bd8652c24cefa3a44fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kBsgrzhh7EiQLMa-nCEkX4vzC6g1Vi0gN7qM9E58Shw.jpg?width=1080&crop=smart&auto=webp&s=11132d61352d2fcea2cc2e4e51e759bf8596979b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kBsgrzhh7EiQLMa-nCEkX4vzC6g1Vi0gN7qM9E58Shw.jpg?auto=webp&s=3ad1771af10721e3332593e3a8f112720d1882d2', 'width': 1200}, 'variants': {}}]} | |
Mistral Platform API is censored, sad | 1 | [removed] | 2023-12-15T23:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/18jcv4w/mistral_platform_api_is_censored_sad/ | AromaticBombay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jcv4w | false | null | t3_18jcv4w | /r/LocalLLaMA/comments/18jcv4w/mistral_platform_api_is_censored_sad/ | false | false | self | 1 | null |
I have a lot of unstructured text (fiction stories). Can I have my LLM generate the prompts that I then use to train the model? | 9 | Basically I have a large amount of fiction writing and I want the LLM to be able to write fiction. But I have no prompts, it's just raw text. I was thinking of running each piece of text through my LLM first and ask it to generate some potential prompts that would cause an LLM to output that text. So like in reverse. Then I would fine tune the model using those prompts and the fiction text.
e.g.
\--------
\#Prompt:
"Write a prompt that a human might ask an LLM so that the LLM may generate the following text:
"""<A few paragraphs of a story about a boy who finds out he's a wizard or something>"""
\#Response:
"Write a story about a boy who finds out he's a wizard and needs to defeat an evil wizard."
\--------
Then I would feed that back into the LLM, prompt first, while fine-tuning, e.g.
\--------
\#Prompt
"Write a story about a boy who finds out he's a wizard and needs to defeat an evil wizard."
\#Response
"""<A few paragraphs of a story about a boy who finds out he's a wizard or something>"""
​
What do you think? Would this work? Or would it cause the model to degenerate? Maybe using GPT-3.5 or GPT-4 to generate prompts would lead to better results, but I would like to use my own LLM for this. What if I augmented the prompt to sound different, such as by paraphrasing it, back-translating it, or using some other text augmentation technique?
Or is there a totally better way to do this that I hadn't thought of? Thanks all | 2023-12-15T22:39:40 | https://www.reddit.com/r/LocalLLaMA/comments/18jcd6r/i_have_a_lot_of_unstructured_text_fiction_stories/ | PMMEYOURSMIL3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jcd6r | false | null | t3_18jcd6r | /r/LocalLLaMA/comments/18jcd6r/i_have_a_lot_of_unstructured_text_fiction_stories/ | false | false | self | 9 | null |
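The two-pass idea in the post above is easy to script against any OpenAI-compatible endpoint (llama.cpp server, text-generation-webui's API, vLLM, or OpenAI itself). The sketch below assumes the openai>=1.0 Python client and such a local server; the base URL, model name, file names, and chunking rule are all illustrative placeholders, not details from the post.

```python
# Sketch of the "reverse" pass: for each story chunk, ask the model to invent a
# prompt that could have produced it, then save (prompt, response) pairs as JSONL
# for fine-tuning. Assumes an OpenAI-compatible server and the openai>=1.0 client.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def prompt_for(story_chunk: str) -> str:
    resp = client.chat.completions.create(
        model="local-model",
        temperature=0.7,
        messages=[{
            "role": "user",
            "content": "Write a prompt that a human might ask an LLM so that the LLM "
                       "would generate the following text. Reply with the prompt only.\n\n"
                       + story_chunk,
        }],
    )
    return resp.choices[0].message.content.strip()

with open("stories.txt") as f, open("pairs.jsonl", "w") as out:
    # naive chunking on blank lines; swap in per-story or per-chapter splitting as needed
    for chunk in f.read().split("\n\n"):
        if len(chunk) < 200:
            continue
        pair = {"prompt": prompt_for(chunk), "response": chunk}
        out.write(json.dumps(pair) + "\n")
```

Varying the wording used to elicit each prompt, or paraphrasing the generated prompts afterwards, should reduce the risk of the model overfitting to one rigid prompt style.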
Which Model for the Economy? | 3 | As an economics master's student, I am looking for a model that I can feed with economics articles and then give me outputs focused especially on the field of economics. Do you know of a model that focuses on this field or has potential in this field? | 2023-12-15T22:09:34 | https://www.reddit.com/r/LocalLLaMA/comments/18jbp4x/which_model_for_the_economy/ | mrsalvadordali | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jbp4x | false | null | t3_18jbp4x | /r/LocalLLaMA/comments/18jbp4x/which_model_for_the_economy/ | false | false | self | 3 | null |
What can I run with 8gb of ram? | 1 | Can I run 7gb models even if it’s kind of slow? How about stable diffusion, etc? | 2023-12-15T22:02:18 | https://www.reddit.com/r/LocalLLaMA/comments/18jbjen/what_can_i_run_with_8gb_of_ram/ | TheHumanFixer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jbjen | false | null | t3_18jbjen | /r/LocalLLaMA/comments/18jbjen/what_can_i_run_with_8gb_of_ram/ | false | false | self | 1 | null |
Best local+API UI to run LLMs ? | 6 | I like to run local models, but also want a good front-end to use GPT-4 and mistral-medium when needed. It would be cool to do all of this in the same front end. It would be even cooler to be able to RAG documents, use multimodal models (GPT-4-Vision and llava/bakllava/obsidian/....), etc.
Can you recommend me some UIs that try to be this API + local unified portal to use LLMs? Even if it has no local component but is compatible with openai api, I could take care of using vLLM/llama.cpp server/kobold/... myself. | 2023-12-15T22:01:21 | https://www.reddit.com/r/LocalLLaMA/comments/18jbinq/best_localapi_ui_to_run_llms/ | noioiomio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jbinq | false | null | t3_18jbinq | /r/LocalLLaMA/comments/18jbinq/best_localapi_ui_to_run_llms/ | false | false | self | 6 | null |
Apple Mac Studio with M1 Ultra 64GB (Early 2022) - $2199.99 | 20 | 2023-12-15T21:39:11 | https://computers.woot.com/offers/apple-mac-studio-with-m1-ultra-early-2022z | fallingdowndizzyvr | computers.woot.com | 1970-01-01T00:00:00 | 0 | {} | 18jb0op | false | null | t3_18jb0op | /r/LocalLLaMA/comments/18jb0op/apple_mac_studio_with_m1_ultra_64gb_early_2022/ | false | false | 20 | {'enabled': False, 'images': [{'id': '4sOdE-tNlCGz1T1wo2tGfFOnoSBWAaiqRKIstS1c3qQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JbCv9E5pTQquvLvQEfi4fRCssTS8yDwlgCimVzwQOqE.jpg?width=108&crop=smart&auto=webp&s=67cb236f2e3fa9d35dcea1ca3b0e6eff8f419370', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JbCv9E5pTQquvLvQEfi4fRCssTS8yDwlgCimVzwQOqE.jpg?width=216&crop=smart&auto=webp&s=8e7f33cea4f03d3bf25b4a728bdcd35a6cd570d6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JbCv9E5pTQquvLvQEfi4fRCssTS8yDwlgCimVzwQOqE.jpg?width=320&crop=smart&auto=webp&s=26369c047f5e090970048b1f3b4ceee08b196510', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JbCv9E5pTQquvLvQEfi4fRCssTS8yDwlgCimVzwQOqE.jpg?width=640&crop=smart&auto=webp&s=9710dd90e79566c7ce207d8da6d41d19f8e82fa9', 'width': 640}], 'source': {'height': 441, 'url': 'https://external-preview.redd.it/JbCv9E5pTQquvLvQEfi4fRCssTS8yDwlgCimVzwQOqE.jpg?auto=webp&s=ee643a73cfe36ebd6bb52a048fd2ac59f5b92e4e', 'width': 882}, 'variants': {}}]} | ||
Mixtral Settings | 2 | Hi all, I've been able to get `mixtral-8x7b-v0.1.Q6_K.gguf` running on the Oobabooga web UI, using dual 3090's. I've seen some flashes of brilliance, but so far it is hard to get it to generate usable content. I'm getting better output with other models (usually 70b 4-bit quantized models, to be fair, though the Mixtral version I am using is only slightly smaller than those). For reference, I am focused on more productivity-related uses (i.e. following instructions) rather than creative ones.
I've been playing around with settings, but I have not been able to get consistently usable output, though they do seem to have a significant impact on the output.
What settings have others found that are optimal for Mixtral on Oobabooga? I've seen some other threads with recommended settings, but I haven't found those very effective for what I am doing.
I've been using the chat-instruct mode in Oobabooga, but is there a better one? Should I skip Oobabooga altogether and just use llama.cpp directly? Ultimately I want to use it with langchain for productivity.
​ | 2023-12-15T21:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/18jawag/mixtral_settings/ | rwclark88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jawag | false | null | t3_18jawag | /r/LocalLLaMA/comments/18jawag/mixtral_settings/ | false | false | self | 2 | null |
koboldcpp Mac M2 | 4 | Anyone got this working and actually comparable to Ollama? I can get it "running" but it takes literal minutes to create a response. Using the same gguf model in Ollama and it works fast, like gpt3 fast.
If you did, and it works great, could you share the install steps? | 2023-12-15T21:21:07 | https://www.reddit.com/r/LocalLLaMA/comments/18jam7s/koboldcpp_mac_m2/ | shiney_eggroll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18jam7s | false | null | t3_18jam7s | /r/LocalLLaMA/comments/18jam7s/koboldcpp_mac_m2/ | false | false | self | 4 | null |
Is Mistral going to Turncoat on Open Source Models? | 121 | * They now have tons of money and investors with interests
* They haven't released Mistral Medium but did release it on their API; I haven't heard any news about an open release
* They have stated that they are competing with OpenAI and their pricing sheet for the API service says as much
I mean, there are other models, and will be better models, etc. but is this the beginning of a trend, or do you guys think they will pull through and at least release Mistral medium? What am I missing here? Am I to take them off of my heroes list? | 2023-12-15T20:42:00 | https://www.reddit.com/r/LocalLLaMA/comments/18j9qor/is_mistral_going_to_turncoat_on_open_source_models/ | enspiralart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j9qor | false | null | t3_18j9qor | /r/LocalLLaMA/comments/18j9qor/is_mistral_going_to_turncoat_on_open_source_models/ | false | false | self | 121 | null |
Mixtral knows how to make an omelette without breaking any eggs! (GPT 3.5 doesn't) | 85 | 2023-12-15T20:25:15 | https://www.reddit.com/gallery/18j9di7 | netikas | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18j9di7 | false | null | t3_18j9di7 | /r/LocalLLaMA/comments/18j9di7/mixtral_knows_how_to_make_an_omelette_without/ | false | false | 85 | null | ||
Who knows the services where you can test communication with Mixtral Medium in chat format for free? | 1 | For example, as at the address [https://huggingface.co/chat](https://huggingface.co/chat) / you can chat with a regular Mixtral. | 2023-12-15T20:24:58 | https://www.reddit.com/r/LocalLLaMA/comments/18j9da4/who_knows_the_services_where_you_can_test/ | Imunoglobulin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j9da4 | false | null | t3_18j9da4 | /r/LocalLLaMA/comments/18j9da4/who_knows_the_services_where_you_can_test/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ffNXCUPQerMMTV5UAIgJRS5QMtKWEhNQFfpmL7I4Bcc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=108&crop=smart&auto=webp&s=fa74f814d5c43d0d9d47c3591a9d667818ebe0c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=216&crop=smart&auto=webp&s=e3494c6906d2c95f78811be98ecf631cdeb08c13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=320&crop=smart&auto=webp&s=08f0479f19185f357e3bccc42a42f10f6fac664c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=640&crop=smart&auto=webp&s=2fdeeb9ada89c2bf4e5dc697043da66bd62cf959', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=960&crop=smart&auto=webp&s=e7b3230584c769f71759db14271d12a5f8cf831a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=1080&crop=smart&auto=webp&s=a8b11dd06cf9be6635cb9fcb2dedf71ecdd9c491', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?auto=webp&s=b8bf601deac4d62d484c6fb69764f7d09d0fd168', 'width': 1200}, 'variants': {}}]} |
Colossal AI vs DeepSpeed vs NeMo | 3 | I am learning about the best approaches and frameworks for fine-tuning an LLM on my own data. Any experience with that?
I'm considering Colossal AI, DeepSpeed from Microsoft and NVIDIA NeMo framework | 2023-12-15T20:10:23 | https://www.reddit.com/r/LocalLLaMA/comments/18j91r3/colossal_ai_vs_deepspeed_vs_nemo/ | Ambitious-Badger24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j91r3 | false | null | t3_18j91r3 | /r/LocalLLaMA/comments/18j91r3/colossal_ai_vs_deepspeed_vs_nemo/ | false | false | self | 3 | null |
Getting this error. What does it mean/how do I fix it? (sillytavern + Koboldcpp) | 2 | 2023-12-15T20:09:10 | Benjamin_swoleman | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18j90q7 | false | null | t3_18j90q7 | /r/LocalLLaMA/comments/18j90q7/getting_this_error_what_does_it_meanhow_do_i_fix/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'QMHzjzS4dZnBXwh6Sez0v-Yt2Rl4DCouzNFTCs7vE0Q', 'resolutions': [{'height': 15, 'url': 'https://preview.redd.it/x64ukzmlmi6c1.png?width=108&crop=smart&auto=webp&s=8cb6efd8dd928e7463325e7ef5d296ddf0156ffc', 'width': 108}, {'height': 31, 'url': 'https://preview.redd.it/x64ukzmlmi6c1.png?width=216&crop=smart&auto=webp&s=6378ab3f63d5de20f45a91a007d6f3e43579a367', 'width': 216}, {'height': 46, 'url': 'https://preview.redd.it/x64ukzmlmi6c1.png?width=320&crop=smart&auto=webp&s=7e149aaf256ab20c5bfd0aa9e21ff217751f05e1', 'width': 320}], 'source': {'height': 80, 'url': 'https://preview.redd.it/x64ukzmlmi6c1.png?auto=webp&s=6fc695a156c4d022f0f2ebdb2ff03b829688cdff', 'width': 545}, 'variants': {}}]} | |||
Chatbot Arena Leaderboard updated: Mixtral 8x7b above Gemini Pro | 246 | 2023-12-15T20:03:38 | https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard | galambalazs | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18j8waq | false | null | t3_18j8waq | /r/LocalLLaMA/comments/18j8waq/chatbot_arena_leaderboard_updated_mixtral_8x7b/ | false | false | 246 | {'enabled': False, 'images': [{'id': 'f-Er2nh8Xt_YPyZ8le6GRHTfsR8EEtNIQE7W_Ea98Kw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8cNFhKRvnMLhSF_I-JZvEn7pPt-P79iUIxKSuSYjwUQ.jpg?width=108&crop=smart&auto=webp&s=80a187ff989ccc5449f757c2e367667d58a885e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8cNFhKRvnMLhSF_I-JZvEn7pPt-P79iUIxKSuSYjwUQ.jpg?width=216&crop=smart&auto=webp&s=c7539df997340dc356ffad7fbd1c838ba9e6cbd4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8cNFhKRvnMLhSF_I-JZvEn7pPt-P79iUIxKSuSYjwUQ.jpg?width=320&crop=smart&auto=webp&s=8df6afae03208587c034aea0beb2e771da2cd7ca', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8cNFhKRvnMLhSF_I-JZvEn7pPt-P79iUIxKSuSYjwUQ.jpg?width=640&crop=smart&auto=webp&s=bd33fd03fb5e6f3271aca71c07c815fbe5675f3f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8cNFhKRvnMLhSF_I-JZvEn7pPt-P79iUIxKSuSYjwUQ.jpg?width=960&crop=smart&auto=webp&s=b7ae2f0a9350f90168e5392f629aac1ceed8f0de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8cNFhKRvnMLhSF_I-JZvEn7pPt-P79iUIxKSuSYjwUQ.jpg?width=1080&crop=smart&auto=webp&s=50c20bd445ea3c62aed508404fd058df91d3c705', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8cNFhKRvnMLhSF_I-JZvEn7pPt-P79iUIxKSuSYjwUQ.jpg?auto=webp&s=f8d0fd23f01bd5a632a28a9d6a0ff0f0d2aa07d1', 'width': 1200}, 'variants': {}}]} | ||
Chatbot Arena Leaderboard updated: Mixtral 8x7b above Gemini Pro | 1 | 2023-12-15T19:54:50 | https://lmsys-chatbot-arena-leaderboard.hf.space/?__theme=light | galambalazs | lmsys-chatbot-arena-leaderboard.hf.space | 1970-01-01T00:00:00 | 0 | {} | 18j8p58 | false | null | t3_18j8p58 | /r/LocalLLaMA/comments/18j8p58/chatbot_arena_leaderboard_updated_mixtral_8x7b/ | false | false | default | 1 | null | |
When can we expect multimodality in the form of vision, the addition of plug-ins (WolframAlpha, web search capabilities, working with files and databases) in models like Mixtral? | 9 | Please explain on an abstract level what mechanisms are generally needed to implement these functions locally? | 2023-12-15T19:50:00 | https://www.reddit.com/r/LocalLLaMA/comments/18j8l8p/when_can_we_expect_multimodality_in_the_form_of/ | Imunoglobulin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j8l8p | false | null | t3_18j8l8p | /r/LocalLLaMA/comments/18j8l8p/when_can_we_expect_multimodality_in_the_form_of/ | false | false | self | 9 | null |
Would it be possible to make a mixture of experts (MoE) of a mixture of experts? | 3 | Sorry if the title is a bit confusing lol, but I mean take mixtral for example, instead of 8 experts being good at one thing, you could have 8 experts being good at 8 types of one thing. You could have a math expert, and you could have experts for calculus, algebra, basic math (division, addition, etc), and a similar case for coding (expert for popular languages, and maybe one or two for more obscure languages) and other things, such as science, etc. | 2023-12-15T19:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/18j7xtg/would_it_be_possible_to_make_a_mixture_of_experts/ | absouluteUNIT3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j7xtg | false | null | t3_18j7xtg | /r/LocalLLaMA/comments/18j7xtg/would_it_be_possible_to_make_a_mixture_of_experts/ | false | false | self | 3 | null |
budget PC: Titan X / older GPU | 1 | Advice on cheap PC that will run a private chatbot with a 7B model.
I've read decent praise for Titan X despite its age, but the praise came from gamers. It has 12GB VRAM. My wallet likes the sub $100 price tag.
My current PC gets <2 tokens/sec so all I'm looking for is a decent improvement for private chats.
My worry with older GPUs is drivers... Any other cheap GPUs with decent VRAM? | 2023-12-15T19:14:14 | https://www.reddit.com/r/LocalLLaMA/comments/18j7t35/budget_pc_titan_x_older_gpu/ | vap0rtranz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j7t35 | false | null | t3_18j7t35 | /r/LocalLLaMA/comments/18j7t35/budget_pc_titan_x_older_gpu/ | false | false | self | 1 | null |
Cheapest GPU provider to host fine-tuned models? | 2 | Who provides cheapest GPU inferencing and hosting of fine-tuned models (7B size)? I already have the finetuned model and ready, just looking for a cheap place to host and run inferencing.
I've looked at Replicate and Together.ai, they both provide really the best tools in this space, but hosting is expensive. Together costs about 1.4/hr to host a 7B model. Replicate is more expensive.
Ideally, I wouldn't be charged for idle time and only active time (replicate does this already, but your finetuned model needs to be based off of a limited set of base models)
Any recommendations? | 2023-12-15T19:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/18j7msi/cheapest_gpu_provider_to_host_finetuned_models/ | blackstonewine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j7msi | false | null | t3_18j7msi | /r/LocalLLaMA/comments/18j7msi/cheapest_gpu_provider_to_host_finetuned_models/ | false | false | self | 2 | null |
Finding the best option for software development | 1 | Hi guys, I'm a software engineer working mostly with Go. Recently I got tired of paying for random AI bots and decided to try local LLaMA, but I need something just for development: debugging and code generation, something like a chatbot.
I have 8GB of RAM and am using Linux; I've got enough disk space and swap for extra headroom.
Thanks | 2023-12-15T18:47:16 | https://www.reddit.com/r/LocalLLaMA/comments/18j76ah/finding_a_best_option_for_software_development/ | AMiR_ViP | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j76ah | false | null | t3_18j76ah | /r/LocalLLaMA/comments/18j76ah/finding_a_best_option_for_software_development/ | false | false | self | 1 | null |
Mistral in bursts | 1 | [removed] | 2023-12-15T18:13:55 | https://www.reddit.com/r/LocalLLaMA/comments/18j6elv/mistral_in_bursts/ | Creative_Bottle_3225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j6elv | false | null | t3_18j6elv | /r/LocalLLaMA/comments/18j6elv/mistral_in_bursts/ | false | false | self | 1 | null |
any instruct fine-tunes of Phi-2 out there? | 17 | As Microsoft mentioned, the model has not undergone instruction fine-tuning and won't really be super useful until that's done.
I noticed theBloke’s got a GPTQ version up already (gguf hopefully soon).
any news on this? I expect there are a few people working on it but that’s just speculation | 2023-12-15T18:13:45 | https://www.reddit.com/r/LocalLLaMA/comments/18j6ehi/any_instruct_finetunes_of_phi2_out_there/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j6ehi | false | null | t3_18j6ehi | /r/LocalLLaMA/comments/18j6ehi/any_instruct_finetunes_of_phi2_out_there/ | false | false | self | 17 | null |
Just getting started, low cost machine | 5 | Would this work well for running models offline? I’m just getting started and have used some super slow small models on my work laptop.
Home build:
Intel Core i9-12900K
ASUS Z790-V Prime WiFi -ATX
G.Skill Ripjaws S5 32GB Kit DDR5 6000
GeForce RTX 3090 24GB
Inland Performance Plus 1TB 3D TLC NAND PCIe Gen 4 x4 NVMe M.2 Internal SSD
I haven’t selected power supply, case and cooling yet. Not sure how much draw and heat these activities produce. | 2023-12-15T17:51:53 | https://www.reddit.com/r/LocalLLaMA/comments/18j5vw7/just_getting_started_low_cost_machine/ | xerfd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j5vw7 | false | null | t3_18j5vw7 | /r/LocalLLaMA/comments/18j5vw7/just_getting_started_low_cost_machine/ | false | false | self | 5 | null |
Need help finding the best GPU for my use case | 3 | Currently looking for a cloud GPU on [TensorDock.com](https://TensorDock.com) that's cost-effective. I'm a broke student who wants to use local LLMs for fun and playing around, nothing crazy. Light usage, maybe 5 hours a week or less. I don't know much about GPUs, but I can give my requirements:
Budget: $50 - $60/Month
Tokens Per Second: I want snappy response speeds; I do not have the patience to wait 5 years for it to finish giving me something, especially code. So any GPU I can use that will give at least 30+ TPS would be awesome!
Thank you in advance! I want to clarify that im not interested in buying any hardware for myself. At the time being Cloud GPU's are most cost effective and convenient for me.
| 2023-12-15T17:50:21 | https://www.reddit.com/r/LocalLLaMA/comments/18j5um3/need_help_finding_the_best_gpu_for_my_use_case/ | Articulity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j5um3 | false | null | t3_18j5um3 | /r/LocalLLaMA/comments/18j5um3/need_help_finding_the_best_gpu_for_my_use_case/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'eaFvxeJB04537lQ_SHSgx0DXorxInjZCJLrshYTAe4g', 'resolutions': [{'height': 23, 'url': 'https://external-preview.redd.it/9Z2jO-7UWQ7PgcHX6KANzAP_ObCSvR2dqf05e8bvbPg.jpg?width=108&crop=smart&auto=webp&s=49f3db1617109d994e8cdf9468597fe6c9d1f6fc', 'width': 108}, {'height': 46, 'url': 'https://external-preview.redd.it/9Z2jO-7UWQ7PgcHX6KANzAP_ObCSvR2dqf05e8bvbPg.jpg?width=216&crop=smart&auto=webp&s=91bcf0f08ed150ff1ca3d0de7cb5512ea527c215', 'width': 216}, {'height': 68, 'url': 'https://external-preview.redd.it/9Z2jO-7UWQ7PgcHX6KANzAP_ObCSvR2dqf05e8bvbPg.jpg?width=320&crop=smart&auto=webp&s=6c60483377d12f4dd2339175a53c560cd7e9cffd', 'width': 320}, {'height': 136, 'url': 'https://external-preview.redd.it/9Z2jO-7UWQ7PgcHX6KANzAP_ObCSvR2dqf05e8bvbPg.jpg?width=640&crop=smart&auto=webp&s=9e1033d31e6984d0b3e0963dc388d581b645a868', 'width': 640}, {'height': 204, 'url': 'https://external-preview.redd.it/9Z2jO-7UWQ7PgcHX6KANzAP_ObCSvR2dqf05e8bvbPg.jpg?width=960&crop=smart&auto=webp&s=d7ffede524d49b8ddf444eb220cca2c0d8239c6e', 'width': 960}, {'height': 230, 'url': 'https://external-preview.redd.it/9Z2jO-7UWQ7PgcHX6KANzAP_ObCSvR2dqf05e8bvbPg.jpg?width=1080&crop=smart&auto=webp&s=da1a3f9f5be92217513d48b33b503f3100ce9ead', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/9Z2jO-7UWQ7PgcHX6KANzAP_ObCSvR2dqf05e8bvbPg.jpg?auto=webp&s=07f4094ce59c3b2f820a8db6cd4ead31bf0b81fd', 'width': 3000}, 'variants': {}}]} |
Messages are too long and cut off? | 4 | I use Koboldcpp with Sillytavern. The messages it generates are very long, and at the end, the last sentence just cuts off. Please point me in the right direction on how to fix this, because my knowledge of this UI is very limited. Thank you! | 2023-12-15T17:44:43 | https://www.reddit.com/r/LocalLLaMA/comments/18j5puh/messages_are_too_long_and_cut_off/ | Benjamin_swoleman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j5puh | false | null | t3_18j5puh | /r/LocalLLaMA/comments/18j5puh/messages_are_too_long_and_cut_off/ | false | false | self | 4 | null |
Generate <instruction, output> with WizardCoder from Typescript project | 4 | I'm trying to instruct-finetune the DeepSeek model [^(\[1\])](https://github.com/deepseek-ai/DeepSeek-Coder/tree/main/finetune) on my private TypeScript code base, to generate code like Copilot/ChatGPT.
So as I understand it, my TS project needs to be converted into an <instruction, output> dataset. And to do that, I need some Self-Instruct LLM, e.g. WizardCoder, to generate the <instruction, output> list directly from the code base.
But I'm having 2 big questions:
1. How do I fine-tune the WizardCoder LLM for TypeScript (I see it was built only for Python, right?)
2. When I have that TypeScript WizardCoder LLM, how can I generate the <instruction, output> dataset from the code base?
I'm researching more but also crying for help due to my work deadline ! | 2023-12-15T17:37:52 | https://www.reddit.com/r/LocalLLaMA/comments/18j5k5c/generate_instruction_output_with_wizardcoder_from/ | moreromem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j5k5c | false | null | t3_18j5k5c | /r/LocalLLaMA/comments/18j5k5c/generate_instruction_output_with_wizardcoder_from/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'rRT5TCtsTY8G16qTSsUKYoJGOvDGvjXuQ5jp9UOfNi8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qoiH8bfY_Kmr5CV7iLu4-LVEHmFvMZCMsjxi_L1MyW8.jpg?width=108&crop=smart&auto=webp&s=39245668ddd918219614a5c826baead298e7c28b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qoiH8bfY_Kmr5CV7iLu4-LVEHmFvMZCMsjxi_L1MyW8.jpg?width=216&crop=smart&auto=webp&s=4b88a89f220c167c0c185e9d8f9f3a7fb8e4ccc5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qoiH8bfY_Kmr5CV7iLu4-LVEHmFvMZCMsjxi_L1MyW8.jpg?width=320&crop=smart&auto=webp&s=79b671d7a4274b45e6921d2a885deaf59e6b7717', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qoiH8bfY_Kmr5CV7iLu4-LVEHmFvMZCMsjxi_L1MyW8.jpg?width=640&crop=smart&auto=webp&s=870e0dda1ef88405186ab0c39e552dad44afaf2e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qoiH8bfY_Kmr5CV7iLu4-LVEHmFvMZCMsjxi_L1MyW8.jpg?width=960&crop=smart&auto=webp&s=61ea84d2f0ce26478034eb3bc88ebb41383c6f86', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qoiH8bfY_Kmr5CV7iLu4-LVEHmFvMZCMsjxi_L1MyW8.jpg?width=1080&crop=smart&auto=webp&s=95ff105ee960b424f0c70a621fb86d9b64a4def2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qoiH8bfY_Kmr5CV7iLu4-LVEHmFvMZCMsjxi_L1MyW8.jpg?auto=webp&s=9fff70f47ac54789cef1902460e55c3445e6dae3', 'width': 1200}, 'variants': {}}]} |
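The reverse-generation pattern works for code too: iterate over the TypeScript files, ask an instruct-tuned code model served behind an OpenAI-compatible endpoint to write the instruction each file answers, and collect <instruction, output> pairs as JSONL. The sketch below is an assumption-laden illustration: the endpoint, model name, and paths are placeholders, and in practice WizardCoder (or any strong instruct code model) can usually describe TypeScript without a TS-specific fine-tune of the generator itself.

```python
# Sketch: turn a TypeScript code base into <instruction, output> pairs by asking
# a served code LLM to invent the instruction each file answers. Assumes an
# OpenAI-compatible server (llama.cpp server, vLLM, TGI, ...) and openai>=1.0;
# the endpoint, model name, and paths are placeholders, not from the post.
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def instruction_for(code: str) -> str:
    resp = client.chat.completions.create(
        model="wizardcoder",  # placeholder: whatever instruct code model you serve
        temperature=0.2,
        messages=[{
            "role": "user",
            "content": "Below is a TypeScript file. Write one concise instruction that a "
                       "developer could give an assistant so that the assistant would write "
                       "exactly this code. Reply with the instruction only.\n\n" + code,
        }],
    )
    return resp.choices[0].message.content.strip()

with open("train.jsonl", "w") as out:
    for path in Path("my-ts-project/src").rglob("*.ts"):
        code = path.read_text()
        if not (100 < len(code) < 8000):  # skip trivial or oversized files
            continue
        out.write(json.dumps({"instruction": instruction_for(code), "output": code}) + "\n")
```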
"You are a helpful AI assistant" | 75 | I've been stumbling around this sub for awhile, testing all the small models and preaching the good word of the omnipotent OpenHermes. Here's some system prompt tips I've picked up:
* Don't say "don't": this confuses them, which makes sense when you understand how they "think". They do their best to string concepts together, but they simply generate the next word in the sequence from the context available. Saying "don't" will put everything following that word into the equation for the following words. This can cause it to use the words and concepts you're telling it not to.
- Alternative: try to use "Only" statements. Instead of "Don't talk about any other baseball team besides the New York Yankees" say "Only talk about the New York Yankees".
* CAPITALIZING INSTRUCTIONS: For some reason, this works when used sparingly, it even makes some models pay attention to "don't". Surprisingly, this seems to work with even ChatGPT. It can quickly devolve your system prompt into confused yelling if you don't limit it, and can even cause your model to match the format and respond with confused yelling, so really only once or twice on important concepts.
* \n: A well-formatted system prompt goes a long way. Splitting up different sections with a line break makes a noticeable improvement in the model's comprehension of the system prompt. For example, here is my format for LMStudio:
" Here is some information about the user:
(My bio)
(system prompts)
Here is some context for the conversation:
(Paste in relevant info such as web pages, documentation, etc, as well as bits of the convo you want to keep in context. When you hit the context limit, you can restart the chat and continue with the same context).
* "You are a helpful AI assistant" : this is the demo system prompt to just get agreeable answers from any model. The issue with this is, once again, how they "think". The models can't conceptualize what is helpful beyond agreeing with and encouraging you. This kind of statement can lead to them making up data and concepts in order to agree with you. This is extra fun because you may not realize the problem until you discover for yourself the falacy of your own logic.
* Think it through/Go over your work: This works, but I think it works because it directs attention to the prompt and response. Personally, I think there's better ways to do this.
* Role assignment: telling it to act as this character or in that role is obviously necessary in some or even most instances, but this can also be limiting. It will act as that character, with all the limits and fallacies of that character. If your waifu can't code, neither will your AI.
* Telling it to be confident: This is a great way to circumvent the above problem, but also runs the risk of confident hallucinations. Here's a 2 prompt trick I use:
Tell one assistant to not answer the user prompt, but to simply generate a list of facts, libraries, or research points from its own data that can be helpful in answering the prompt. The prompt will be answered by the same LLM, so write the list with that same LLM as the intended future audience instead of a human.
Then pass the list to the assistant you intend to chat with, with something like "you can confidently answer in these subjects that you are an expert in: (the list)".
The point of this ^^^ is to limit its responses to what it actually knows, but make it confidently answer with the information it's sure about. This has been incredibly useful in my cases, but absolutely check their work. | 2023-12-15T17:24:37 | https://www.reddit.com/r/LocalLLaMA/comments/18j59g1/you_are_a_helpful_ai_assistant/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j59g1 | false | null | t3_18j59g1 | /r/LocalLLaMA/comments/18j59g1/you_are_a_helpful_ai_assistant/ | false | false | self | 75 | null |
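The two-prompt "confidence list" trick from that last bullet is straightforward to automate. Here is a minimal sketch, assuming an OpenAI-compatible local server (e.g. LM Studio's server mode or llama.cpp's server) and the openai>=1.0 client; the URL, port, model name, and example question are placeholders rather than anything from the post.

```python
# Sketch of the two-pass "confidence list" trick described above: pass 1 has the
# model list only what it actually knows about the question, pass 2 answers while
# treating that list as its claimed expertise. URL and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
MODEL = "local-model"

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

question = "How do I profile memory usage of a Python process?"

# Pass 1: only a list of facts/libraries the model is sure about, written for an LLM reader.
facts = ask(
    "Only produce a list of facts, libraries, or research points from your own data "
    "that can be helpful in answering the user's prompt. The prompt will be answered "
    "later by the same LLM, so write the list for that LLM as the audience.",
    question,
)

# Pass 2: answer, presenting the list as the model's area of expertise.
answer = ask(
    "You can confidently answer in these subjects that you are an expert in:\n" + facts,
    question,
)
print(answer)
```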
Setting ideal Mixtral-Instruct Settings | 48 | I've noticed some people claiming that Mixtral tends to repeat itself or gets stuck. Or, if it doesn't repeat itself, it becomes incoherent. I think this is yet another case of poor sampler config standardization across the board.
Mixtral itself is a strong enough model to the point where I'd argue Repetition Penalty is just not useful as anything more than a bandaid fix and does more harm than good. This is especially the case in instances where it applies a penalty to tokens that the model believes are guaranteed.
I propose a simplified standard preset for Mixtral, similar to what I've recommended in the past, but with a reduced Min P.
**Mixtral-Default:**
\- Temperature: 1.0
\- Min P: 0.02 (Only keeps tokens at least 1/50th as probable as the top candidate - cuts out extreme outliers)
\- Top P: 1.0 (Disabled)
\- Top K: 0 (Disabled)
\- TFS/Mirostat/RepPen, etc. should be **disabled** and don't seem to be necessary.
​
[Here is how this looks in SillyTavern.](https://preview.redd.it/yqjnjvg6jh6c1.png?width=589&format=png&auto=webp&s=1b576a7f6797b7c00ba636c866890141221c7495)
I chose Min P 0.02 as it looked to be setting reasonable cutoffs across the board for the tokens I looked at.
https://preview.redd.it/45hutlpemh6c1.png?width=967&format=png&auto=webp&s=08233e996c4f992d634c61ea2e821e652153055a
For creative writing, Temperature is quite controllable on this model with Min P 0.02. An extreme Temp value of 4.0 still writes coherently, though it is a bit too flowery with how it writes (imo) when set that high.
I've also noticed \~1.25-1.5 temperature is still stable (assuming Temperature is set last in the sampler order, haven't tested with Temp first) and maintains a good balance for creative tasks.
# What is your reasoning for these settings?
Something I noticed in the past when working on my sampler experiments is that dense language models (that do not use MoE) are typically quite bad at assigning just one particular token 100% probability.
You'd almost never get 100% for the top token, and even \~99.9999% levels of precision were rare to see; instead, tokens that were clearly *wrong* but still marginally better than the large set of alternatives would be graded at a slight probability of around \~0.05%, which kept the 'correct' prediction from ever being fully deterministic.
I've always chalked this up to the fact that language models have to predict scores for all possible words, so slight scores for tokens that *sorta* fit but aren't correct was just an inevitability.
In fact, I only ever saw 100% confidence in a token a single time when testing models in the past.
But Mixtral, uh, breaks this rule?
​
[What.](https://preview.redd.it/v69zxhh8bh6c1.png?width=438&format=png&auto=webp&s=1d80e6219ccba6cc37bcd62f3b638531d6b41240)
Interestingly, unlike the near-total confidence tokens I was used to seeing in the past, it's possible for Mixtral to occasionally be so confident in the next token choice that it is the *only* token assigned a positive probability, period.
​
[The above token in question.](https://preview.redd.it/36sxez5kbh6c1.png?width=526&format=png&auto=webp&s=4c43eb4720a14ceed969e6f2d7dc3106d75bf761)
That doesn't mean it's aggressively confident all the time, or anything like that. It's just that, instead of assigning a bunch of extremely small positive scores to bad choices for 'obvious tokens', in many cases it doesn't reward them enough to be considered at all, or if it does, it's extremely subtle in comparison to a dense model.
For the vague prompt of "The man's name was ", you get a very diverse distribution for the next token:
https://preview.redd.it/j002fpv4dh6c1.png?width=292&format=png&auto=webp&s=6c9a2ac59880d1b139e1c107d79d8aa35018d6fe
I theorize that the use of Mixture of Experts helps avoid the inherent 'noise' caused by inferencing with the whole model at once and helps focus it for different tasks.
Often times, there's only a certain part of the model that is relevant for assessing some tokens, and reducing the amount of parameters that are to predict that 'type' of token in those instances probably leads to a higher amount of confidence. | 2023-12-15T17:23:38 | https://www.reddit.com/r/LocalLLaMA/comments/18j58q7/setting_ideal_mixtralinstruct_settings/ | kindacognizant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j58q7 | false | null | t3_18j58q7 | /r/LocalLLaMA/comments/18j58q7/setting_ideal_mixtralinstruct_settings/ | false | false | 48 | null | |
Can Pre-trained Small Language Models Collaborate for Better Results? Feedback Needed for skiffs Project | 1 | [removed] | 2023-12-15T17:16:29 | https://www.reddit.com/r/LocalLLaMA/comments/18j52vu/can_pretrained_small_language_models_collaborate/ | Genaforvena | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j52vu | false | null | t3_18j52vu | /r/LocalLLaMA/comments/18j52vu/can_pretrained_small_language_models_collaborate/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vUHOw5q7hzVsDit8dLpglWk39jqKYnIIMiv4DqsXmXE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=108&crop=smart&auto=webp&s=49a49c2b7188c0727f87dda33380418f0d2cb4b5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=216&crop=smart&auto=webp&s=9bd7513b5d4ef4498fc952db14004c4fdabda4c8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=320&crop=smart&auto=webp&s=ee9b0be567695ce2ce1818d4bc4b0085090c3b4c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=640&crop=smart&auto=webp&s=59d40ab8a0c03dd1e56340a2fdafcc14df9cb4e7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=960&crop=smart&auto=webp&s=1de36de2829071abffe10ac704bc20364d40ad34', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=1080&crop=smart&auto=webp&s=863ffd76b95bb1ed37cdda26adbbfe19fb2d244e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?auto=webp&s=43d65cb2e66b6da0d27e88133c8473cb8e2b889d', 'width': 1200}, 'variants': {}}]} |
Can Pre-trained Small Language Models Collaborate for Better Results? Feedback Needed for skiffs Project | 1 | [removed] | 2023-12-15T17:16:24 | https://www.reddit.com/r/LocalLLaMA/comments/18j52u0/can_pretrained_small_language_models_collaborate/ | Genaforvena | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j52u0 | false | null | t3_18j52u0 | /r/LocalLLaMA/comments/18j52u0/can_pretrained_small_language_models_collaborate/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vUHOw5q7hzVsDit8dLpglWk39jqKYnIIMiv4DqsXmXE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=108&crop=smart&auto=webp&s=49a49c2b7188c0727f87dda33380418f0d2cb4b5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=216&crop=smart&auto=webp&s=9bd7513b5d4ef4498fc952db14004c4fdabda4c8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=320&crop=smart&auto=webp&s=ee9b0be567695ce2ce1818d4bc4b0085090c3b4c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=640&crop=smart&auto=webp&s=59d40ab8a0c03dd1e56340a2fdafcc14df9cb4e7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=960&crop=smart&auto=webp&s=1de36de2829071abffe10ac704bc20364d40ad34', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?width=1080&crop=smart&auto=webp&s=863ffd76b95bb1ed37cdda26adbbfe19fb2d244e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YViL759A0VocuDFKGk4Q5VfKaqlaqoJ2vKBiJj6Nn8g.jpg?auto=webp&s=43d65cb2e66b6da0d27e88133c8473cb8e2b889d', 'width': 1200}, 'variants': {}}]} |
https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF out now @TheBloke | 1 | [removed] | 2023-12-15T17:06:14 | https://www.reddit.com/r/LocalLLaMA/comments/18j4uqb/httpshuggingfacecotheblokemixtral8x7bmoerpstoryggu/ | DaLexy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j4uqb | false | null | t3_18j4uqb | /r/LocalLLaMA/comments/18j4uqb/httpshuggingfacecotheblokemixtral8x7bmoerpstoryggu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SREYVLL6wlg3rhNfuZDP_POJJKz3ySsczVbVwp4y7EE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VNKTK0ZK6EmN2dF_OA_KDWVp3djRF-53I4nrVH5h1Qc.jpg?width=108&crop=smart&auto=webp&s=b4b63df0f81c7c045c5a7c3c92e782a19f1439fd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VNKTK0ZK6EmN2dF_OA_KDWVp3djRF-53I4nrVH5h1Qc.jpg?width=216&crop=smart&auto=webp&s=138df0d3c7856f20579dc8d81f91e53dae571e75', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VNKTK0ZK6EmN2dF_OA_KDWVp3djRF-53I4nrVH5h1Qc.jpg?width=320&crop=smart&auto=webp&s=dd2e7b6eac52cba95e642f90410d9b0428affdbb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VNKTK0ZK6EmN2dF_OA_KDWVp3djRF-53I4nrVH5h1Qc.jpg?width=640&crop=smart&auto=webp&s=787a484335b5a90267feb3a4afd3e0e1b92201e3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VNKTK0ZK6EmN2dF_OA_KDWVp3djRF-53I4nrVH5h1Qc.jpg?width=960&crop=smart&auto=webp&s=3d579966149f4e1b3f03a542e7d5cba5918f8638', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VNKTK0ZK6EmN2dF_OA_KDWVp3djRF-53I4nrVH5h1Qc.jpg?width=1080&crop=smart&auto=webp&s=aebf5a5b517f80e998ef0b0f60fc324a5a662e02', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VNKTK0ZK6EmN2dF_OA_KDWVp3djRF-53I4nrVH5h1Qc.jpg?auto=webp&s=a556cd1a014af1fef9aba4cd01c8c8e96e89672e', 'width': 1200}, 'variants': {}}]} |
Mixtral support merged in llama.cpp! | 1 | 2023-12-15T17:01:56 | https://github.com/ggerganov/llama.cpp/pull/4406 | phoneixAdi | github.com | 1970-01-01T00:00:00 | 0 | {} | 18j4r26 | false | null | t3_18j4r26 | /r/LocalLLaMA/comments/18j4r26/mixtral_support_merged_in_llamacpp/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Ju6nuBn6qbyDHhs7heBwQl8j_lsG9P6T4lVcnqU_rgg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RTD6ahfr-1mysni_dJiVPk9Uzw7QikEY2EecjFZzLwQ.jpg?width=108&crop=smart&auto=webp&s=35b797c6db36b283524978a61f9f15d68aad921f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RTD6ahfr-1mysni_dJiVPk9Uzw7QikEY2EecjFZzLwQ.jpg?width=216&crop=smart&auto=webp&s=0a297f8d19459b5fd13a99b05d14eebc4ac72ec6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RTD6ahfr-1mysni_dJiVPk9Uzw7QikEY2EecjFZzLwQ.jpg?width=320&crop=smart&auto=webp&s=058fae045a604fe8b8bfa46c67ad06c24d19b310', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RTD6ahfr-1mysni_dJiVPk9Uzw7QikEY2EecjFZzLwQ.jpg?width=640&crop=smart&auto=webp&s=e87b87d872e767c83f3972337051cab3276677a1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RTD6ahfr-1mysni_dJiVPk9Uzw7QikEY2EecjFZzLwQ.jpg?width=960&crop=smart&auto=webp&s=5e840382f521622746008a6b48960bc32c3f6bb6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RTD6ahfr-1mysni_dJiVPk9Uzw7QikEY2EecjFZzLwQ.jpg?width=1080&crop=smart&auto=webp&s=22d7b053b8a46752588836f21cc93198254fb264', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RTD6ahfr-1mysni_dJiVPk9Uzw7QikEY2EecjFZzLwQ.jpg?auto=webp&s=45ad466496375db0d7cf2cc384a3dde9eceb1d71', 'width': 1200}, 'variants': {}}]} | ||
Online cheating on exams | 18 | At this point, for online learning, am I correct in thinking there are exactly zero realistic ways to stop someone from cheating on exams? | 2023-12-15T16:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/18j4gh1/online_cheating_on_exams/ | hmmqzaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j4gh1 | false | null | t3_18j4gh1 | /r/LocalLLaMA/comments/18j4gh1/online_cheating_on_exams/ | false | false | self | 18 | null |
Native LORA finetuning on Apple Devices (New MLX Framework) 😲!! | 18 | 2023-12-15T16:46:23 | https://github.com/ml-explore/mlx-examples/tree/main/lora | phoneixAdi | github.com | 1970-01-01T00:00:00 | 0 | {} | 18j4e80 | false | null | t3_18j4e80 | /r/LocalLLaMA/comments/18j4e80/native_lora_finetuning_on_apple_devices_new_mlx/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'zwxv2CSy1P-92HpJKlv35p8cs74BrUsovsVmDPyLg5o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rsgalBLHIbO6I2sHl_b3aL9e1Cx2OLYIgu5ooTnyfGI.jpg?width=108&crop=smart&auto=webp&s=a18897120148b0fd37c5d521d34e82202f82ed68', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rsgalBLHIbO6I2sHl_b3aL9e1Cx2OLYIgu5ooTnyfGI.jpg?width=216&crop=smart&auto=webp&s=cb1228483c1ae60a725e291bf8132125bb9b346a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rsgalBLHIbO6I2sHl_b3aL9e1Cx2OLYIgu5ooTnyfGI.jpg?width=320&crop=smart&auto=webp&s=54fb6d6750f05fd2abd6e1599a3b806f6f8c0a5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rsgalBLHIbO6I2sHl_b3aL9e1Cx2OLYIgu5ooTnyfGI.jpg?width=640&crop=smart&auto=webp&s=40b54eb6b128ceefa6bf19917880a7c0c527cd4b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rsgalBLHIbO6I2sHl_b3aL9e1Cx2OLYIgu5ooTnyfGI.jpg?width=960&crop=smart&auto=webp&s=c8ef10ef282c89bda83b5c7c7b75c485d8e14837', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rsgalBLHIbO6I2sHl_b3aL9e1Cx2OLYIgu5ooTnyfGI.jpg?width=1080&crop=smart&auto=webp&s=8777432f59932338d378b570ec79b4c917ecc4fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rsgalBLHIbO6I2sHl_b3aL9e1Cx2OLYIgu5ooTnyfGI.jpg?auto=webp&s=33f9b52a5360e553a00a3e353dec5b593c5503f1', 'width': 1200}, 'variants': {}}]} | ||
How to run phi locally | 8 | How can i run phi1.5 model by microsoft locally on my MBP M1. | 2023-12-15T16:42:16 | https://www.reddit.com/r/LocalLLaMA/comments/18j4aok/how_to_run_phi_locally/ | Key-Dragonfly7642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j4aok | false | null | t3_18j4aok | /r/LocalLLaMA/comments/18j4aok/how_to_run_phi_locally/ | false | false | self | 8 | null |
Best model for Spanish text summarization | 5 | Hi all,
I'm looking for recommendations on which model I can use to obtain abstractive summarization in Spanish. I'm also thinking of fine-tuning that model on the samsum-es dataset to obtain better results, but I believe I need a model with a Spanish core first.
Any recommendation or experience will be highly appreciated. | 2023-12-15T16:25:04 | https://www.reddit.com/r/LocalLLaMA/comments/18j3wnp/best_model_for_spanish_text_summarization/ | iamtdb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j3wnp | false | null | t3_18j3wnp | /r/LocalLLaMA/comments/18j3wnp/best_model_for_spanish_text_summarization/ | false | false | self | 5 | null |
I'd be grateful if someone could teach me to fish | 8 | I'd like to run ehartford/dolphin-2.5-mixtral-8x7b locally.
I've only run quantized models (MacBook m3 max).
How do I evaluate how much machine I need to run a model like ehartford/dolphin-2.5-mixtral-8x7b? The model card isn't terribly helpful in that regard on Huggingface.
I run Ooba. Do I down load the entire repository and then stitch together the .bin files somehow? | 2023-12-15T16:11:24 | https://www.reddit.com/r/LocalLLaMA/comments/18j3lvn/id_be_grateful_if_someone_could_teach_me_to_fish/ | knob-0u812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j3lvn | false | null | t3_18j3lvn | /r/LocalLLaMA/comments/18j3lvn/id_be_grateful_if_someone_could_teach_me_to_fish/ | false | false | self | 8 | null |
What Embedding Models Are You Using For RAG? | 28 | Here's the bottom line (BLUF for you DoD folks): I'm interested in hearing what models you are using for high quality embeddings.
I'm interested in RAG retrieval. My application is pre-hospital EMS so I am searching for things like "motor vehicle accident with injuries" and getting back things like "car crash" or "MVA".
I rolled my own RAG probably more than 2 years ago (1000 years in LLM time). I have relied on SentenceTransformer() and my "go to" was all-mpnet-base-v2. And for my retrieval I used to use scipy.spatial.KDTree() and that also worked well. I always got back relevant documents. It worked well enough that I feel like I know what I'm doing.
I added FAISS as my vector store and with mpnet embeddings it still works really well. I'm unclear if faiss.search() is better than KDTree but that is a side issue.
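For anyone who wants to reproduce that baseline, here is a minimal sketch of the mpnet + FAISS retrieval described above (cosine similarity via normalized inner product; swap in your own chunks and queries):

```python
import faiss
from sentence_transformers import SentenceTransformer

docs = ["motor vehicle accident with injuries", "car crash", "MVA", "cardiac arrest"]
model = SentenceTransformer("all-mpnet-base-v2")

emb = model.encode(docs, normalize_embeddings=True)   # (n_docs, 768), unit-length rows
index = faiss.IndexFlatIP(emb.shape[1])               # inner product == cosine on normalized vectors
index.add(emb)

query = model.encode(["MVC with entrapment"], normalize_embeddings=True)
scores, ids = index.search(query, 3)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```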
Next, I've started using llama.cpp server as a front end to play around with models interactively. Then I saw the optional --embedding flag as a server option. Great. Very quickly I was able to chunk my text, send to server, get back embeddings and save them in faiss. I want to confirm two observations that I'm having. The first one is easy and I think obvious, the second one is just a gut feeling:
1. If the server model is a chat model, and I send a chunk of text that isn't in a valid prompt format, then you usually get useless outputs, and consequently the embedding will be equally useless (this is what I am seeing). But if I use a foundational model instead of a chat model, the embeddings are probably relevant.
2. Using a foundational model for embeddings, I *usually* get back relevant embeddings. The dimensionality of mpnet is 768 and the dim of llama-2-7B is 4096. When I embed about 400 records, mpnet seems to outperform llama-2 but my gut tells me this is because the larger llama-2 dimensions are significantly diluted to the point that "near" vectors are not relevant. I'm hoping that when I go from 400 embeddings to 40,000 embeddings that the llama-2 RAG will perform equally well.
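For reference, the llama.cpp server embedding call described above is roughly this (simplified sketch; the server must be launched with the --embedding flag, and the default port is assumed):

```python
import requests

# Ask a llama.cpp server (started with --embedding) for a single embedding vector.
def embed(text):
    r = requests.post("http://localhost:8080/embedding", json={"content": text})
    return r.json()["embedding"]

vec = embed("motor vehicle accident with injuries")
print(len(vec))  # e.g. 4096 for a llama-2-7B based model vs 768 for mpnet
```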
​ | 2023-12-15T15:56:40 | https://www.reddit.com/r/LocalLLaMA/comments/18j39qt/what_embedding_models_are_you_using_for_rag/ | Simusid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j39qt | false | null | t3_18j39qt | /r/LocalLLaMA/comments/18j39qt/what_embedding_models_are_you_using_for_rag/ | false | false | self | 28 | null |
Tokenizer of GGUF with LlamaCPP | 4 | Hey everyone,
I am working with a GGUF Model the Q8_0 of the ALMA-13B to be specific, it could be found here:
https://huggingface.co/TheBloke/ALMA-13B-GGUF/blob/main/alma-13b.Q8_0.gguf
I want to change the word embeddings of its tokenizer (ALMA also has multilingual support, so I'm not sure how its tokenizer is configured), but I am unable to access the tokenizer using LlamaCPP (imported from langchain). I want to access the tokenizer, iterate through it, and if I find a token which is present in my custom Word2Vec embeddings (stored in a .bin format), replace its vectors with the vectors from my custom embeddings.
Can anyone guide me or provide a link which can guide me? Any kind of help will be appreciated
Thanks in advance | 2023-12-15T15:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/18j39l6/tokenizer_of_gguf_with_llamacpp/ | Sheamus-Firehead | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j39l6 | false | null | t3_18j39l6 | /r/LocalLLaMA/comments/18j39l6/tokenizer_of_gguf_with_llamacpp/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'z_xlY2ogs5G_WzoHPJ7I_A9zIFusgl0lbxbU7Wv6w34', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PU4Z9q-Zi9LE4dgOvjZrTD70S7zG62HGV4oRJ9LcaWw.jpg?width=108&crop=smart&auto=webp&s=7e3f7d9d2af6da44de85c6109989cb33c0194230', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PU4Z9q-Zi9LE4dgOvjZrTD70S7zG62HGV4oRJ9LcaWw.jpg?width=216&crop=smart&auto=webp&s=73d53d8eade90fce39c395a44c42ac96469bf027', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PU4Z9q-Zi9LE4dgOvjZrTD70S7zG62HGV4oRJ9LcaWw.jpg?width=320&crop=smart&auto=webp&s=4e226491aab915efdd28b4a09c8a3d041784cd9f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PU4Z9q-Zi9LE4dgOvjZrTD70S7zG62HGV4oRJ9LcaWw.jpg?width=640&crop=smart&auto=webp&s=02d9899b47d6f5c26a680ee53f770c8db11b2a14', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PU4Z9q-Zi9LE4dgOvjZrTD70S7zG62HGV4oRJ9LcaWw.jpg?width=960&crop=smart&auto=webp&s=95bf56febace4ea7818a51ad09c930510a3224a8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PU4Z9q-Zi9LE4dgOvjZrTD70S7zG62HGV4oRJ9LcaWw.jpg?width=1080&crop=smart&auto=webp&s=7c85632149b9349f6c85f7d7da10b52fc67ae5dd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PU4Z9q-Zi9LE4dgOvjZrTD70S7zG62HGV4oRJ9LcaWw.jpg?auto=webp&s=f98563a0f2d7c1bd9ba0d7b28deb6a4cc5b0f4f5', 'width': 1200}, 'variants': {}}]} |
Are the majority of costs with inferencing related to electricity consumption? | 6 | Sorry, a little bit off topic but I think this still relates.
I recently read that OpenAI spends an insane amount of money running inferencing for their ChatGPT models.
Got me curious: at the end of the day, are these costs mostly electricity?
If so, wouldn't relocating these GPU centers to countries with lower electricity costs give companies a huge competitive advantage over the competition?
The GPUs, all things considered, are fixed costs. Setting aside typical business expenses such as salaries, at the end of the day it all boils down to energy, right?
Say I were to set up a company offering generative AI services, perhaps running an open source LLM at scale.
For instance, average cost in U.S for kWh is $0.13.
Average cost for Russia per kWh is $0.06. Probably because of the insane number of nuclear stations + energy commodities they have, and lower population/demand.
So, almost 2x cheaper.
Russia is not a **great** example for obvious reasons.
But the fact remains: why aren't these companies offshoring their GPU centers to countries where electric costs are cheaper? | 2023-12-15T15:44:09 | https://www.reddit.com/r/LocalLLaMA/comments/18j2zob/are_the_majority_of_costs_with_inferencing/ | flyers_nhl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j2zob | false | null | t3_18j2zob | /r/LocalLLaMA/comments/18j2zob/are_the_majority_of_costs_with_inferencing/ | false | false | self | 6 | null |
How create own model ? | 3 | Guys, I am going to take part in an Olympiad...
I'm thinking of making my own model based on Llama or Mistral.
I need one important thing: the model has to speak Azerbaijani.
The only model I know that speaks Azerbaijani fluently is DeepSeek 67B Chat.
Can you give me some advice, guides, or tutorials on how to fine-tune recent models? 🥹
Thanks a lot | 2023-12-15T15:39:52 | https://www.reddit.com/r/LocalLLaMA/comments/18j2w7e/how_create_own_model/ | omar07ibrahim1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j2w7e | false | null | t3_18j2w7e | /r/LocalLLaMA/comments/18j2w7e/how_create_own_model/ | false | false | self | 3 | null |
Mistral LMMs: Multi Image, Speech, Audio, and Video | 48 | Hey all, this is a follow up of my last two posts ([a](https://www.reddit.com/r/LocalLLaMA/comments/17ccp0l/multitoken_embed_arbitrary_modalities_images/), [b](https://www.reddit.com/r/LocalLLaMA/comments/17p5vtk/mistral_visionaudio_loras_a_lossy_260k_token/)) for training large multimodal models using my library [multi\_token](https://github.com/sshh12/multi_token). Between Gemini + the interest in the previous posts, I figured I'd try out training some more of these for more domains. Unlike the previous models, getting good multimodal datasets is fairly non trivial and I ended having to utilize a ton of GPT3.5 based synthetic training data.
Given $1K compute budget + the last 2 weekends, I was able to get some OK prototypes:
# [sshh12/Mistral-7B-LoRA-Multi-VisionCLIPPool-LLAVA](https://huggingface.co/sshh12/Mistral-7B-LoRA-Multi-VisionCLIPPool-LLAVA)
This is a version of LLaVA 1.5 but trained on a partially synthetic dataset of chats that refer to multiple images. To save on per image token costs (576/image in the original method), I use a slightly different projector that compresses them into 10 tokens/image. This should work on up to 6 images prefixed to the prompt.
>**User**: <image of pink flower><image of yellow flower> What is the difference in color between the images?
**System**: The first image has a pink flower, while the second image has yellow flowers.
# [sshh12/Mistral-7B-LoRA-AudioWhisper](https://huggingface.co/sshh12/Mistral-7B-LoRA-AudioWhisper)
Using the same technique, I built a projector for Whisper-small. Only works some of the time and fairly limited given it's the small speech encoder (b/c limited GPU VRAM) but kind of cool.
>**User**: What is being said? <speech "this is a test">
**System**: This is a test.
# [sshh12/Mistral-7B-LoRA-AudioCLAP](https://huggingface.co/sshh12/Mistral-7B-LoRA-AudioCLAP)
Using the CLAP audio embedding model and some random datasets I found on huggingface, I was able to train a model that takes in generic audio. It's funny how oddly specific this model can be about the audio (due to the way I synthesized captions).
>**User**: What is making this sound? <sound "bark">
>
>**System**: The sound is being made by a chihuahua barking.
# [sshh12/Mistral-7B-LoRA-XCLIP](https://huggingface.co/sshh12/Mistral-7B-LoRA-XCLIP)
It was a struggle to get a reliable dataset + encoder for video, but this was the best I was able to get to work with the limited pretrain + finetune dataset. I'm thinking multi-image with interwoven speech tokens is probably the way to go for future video understanding (as opposed to dedicated video encoders).
>**User**: <video https://www.youtube.com/watch?v=3569sBBgVsc> What instrument is shown in the video?
>
>**System**: a man is playing the piano in a room
Hope y'all find these useful! Consider these "prototypes" and set your expectations low -- but know that if you happen to have any good datasets (500k+ quality examples) + compute $$$ lying around it's pretty trivial to get the other modalities to LLAVA 1.5-level (I'm happy to help). Will also say that this isn't limited to Mistral 7B (it was just a good smallish model for these baselines) so if there's strong interest in a different pre-existing chat model, I can add support for that.
There's no reason you couldn't also just combine all of these into one mega cross modality model (the library supports it out of the box) -- the limitation is again just 1) getting a dataset that actually does cross modality and 2) the compute $$$ to train it. | 2023-12-15T15:34:13 | https://www.reddit.com/r/LocalLLaMA/comments/18j2rk8/mistral_lmms_multi_image_speech_audio_and_video/ | sshh12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j2rk8 | false | null | t3_18j2rk8 | /r/LocalLLaMA/comments/18j2rk8/mistral_lmms_multi_image_speech_audio_and_video/ | false | false | self | 48 | {'enabled': False, 'images': [{'id': 'I9ZuYVbT4TofPwTJmxWf4bIgwxxifC-ryFw9Sxw-aKM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x14SRNOzS-chGr8nCri9HmzChoG4JVoBnuR2vniSaWw.jpg?width=108&crop=smart&auto=webp&s=629d96ea6aef3773767bfcc632dccb64a760a8a0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/x14SRNOzS-chGr8nCri9HmzChoG4JVoBnuR2vniSaWw.jpg?width=216&crop=smart&auto=webp&s=5744081e1b812ee69bcc294e73b9bc5e53213aa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/x14SRNOzS-chGr8nCri9HmzChoG4JVoBnuR2vniSaWw.jpg?width=320&crop=smart&auto=webp&s=c71e37595c865b288ffe611970367a204b2d2fe2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/x14SRNOzS-chGr8nCri9HmzChoG4JVoBnuR2vniSaWw.jpg?width=640&crop=smart&auto=webp&s=661b3a40f61c73758daf91949b5bfdc35cf7ab60', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/x14SRNOzS-chGr8nCri9HmzChoG4JVoBnuR2vniSaWw.jpg?width=960&crop=smart&auto=webp&s=804a26c6fc75a8a5278531c0588a682556f33dbd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/x14SRNOzS-chGr8nCri9HmzChoG4JVoBnuR2vniSaWw.jpg?width=1080&crop=smart&auto=webp&s=6297360b65117bd5d850991e704c277351f60111', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/x14SRNOzS-chGr8nCri9HmzChoG4JVoBnuR2vniSaWw.jpg?auto=webp&s=e536082b04f6f3e98343d142626f3cbbe919bb4c', 'width': 1200}, 'variants': {}}]} |
The GPU Poor strike back | 105 | 2023-12-15T15:21:50 | https://thehackerllama.substack.com/p/the-gpu-poor-strike-back | hackerllama | thehackerllama.substack.com | 1970-01-01T00:00:00 | 0 | {} | 18j2hsk | false | null | t3_18j2hsk | /r/LocalLLaMA/comments/18j2hsk/the_gpu_poor_strike_back/ | false | false | 105 | {'enabled': False, 'images': [{'id': 's0ehe1BDAbGINMI4DRwaOuEVhJiST3h3Ye0jAStFLts', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hKU84_ryyTLnrBnJhDNdLDxxood8Fo7PZ2CXf7oCv-A.jpg?width=108&crop=smart&auto=webp&s=4507cd5621f38a1c6c55618d6327208adb090e7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hKU84_ryyTLnrBnJhDNdLDxxood8Fo7PZ2CXf7oCv-A.jpg?width=216&crop=smart&auto=webp&s=3862ceeffe717da66b943da5b48b81b7b24f5562', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hKU84_ryyTLnrBnJhDNdLDxxood8Fo7PZ2CXf7oCv-A.jpg?width=320&crop=smart&auto=webp&s=b48bdd181d4a2898765f74696ed9fc6b3b0eacc2', 'width': 320}], 'source': {'height': 288, 'url': 'https://external-preview.redd.it/hKU84_ryyTLnrBnJhDNdLDxxood8Fo7PZ2CXf7oCv-A.jpg?auto=webp&s=ca6290929585c60e9828ea838b86d3f62afabb53', 'width': 575}, 'variants': {}}]} | ||
Best Local LLMs for drafting/editing business prose w/ 32gb VRAM? | 9 | Pretty much the title...
Long time lurker here. My division (\~20 data scientists) has a server with a largely unused 32GB NVIDIA V100 on it. Through a fair bit of work with IT, I finally got text-generation-webui up and running on that server.
We're allowed to use chatGPT for all but "sensitive" things. So, I can generally use GPT-4 to help me code, but no "help me draft/edit this proposal/e-mail to a client/deliverable" or "Summarize this sensitive document." (I've felt far more limited by the former, hence the title).
It seems like most leaderboards/prior threads focus on coding, roleplaying, logic, or general chat. Anyone have advice/experience on which LLMs are best for drafting/editing generic business prose?
P.S. I wouldn't be surprised if a lot of folks say Mixtral, but I got everything approved/installed before that was supported. So, I'm interested in other answers as well. | 2023-12-15T15:19:17 | https://www.reddit.com/r/LocalLLaMA/comments/18j2fsl/best_local_llms_for_draftingediting_business/ | Pedalnomica | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j2fsl | false | null | t3_18j2fsl | /r/LocalLLaMA/comments/18j2fsl/best_local_llms_for_draftingediting_business/ | false | false | self | 9 | null |
Attempting to switch from GPT4ALL 59 LM Studio. Why is is so much slower? | 1 | [removed] | 2023-12-15T15:08:37 | https://www.reddit.com/r/LocalLLaMA/comments/18j27en/attempting_to_switch_from_gpt4all_59_lm_studio/ | IWantAGI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j27en | false | null | t3_18j27en | /r/LocalLLaMA/comments/18j27en/attempting_to_switch_from_gpt4all_59_lm_studio/ | false | false | self | 1 | null |
Ai for code documentation errors? | 7 | I have a 400-page Word document on a legacy system, developed in 2007. Our C code has changed, but nobody went back to update the original documentation.
Is there a good, local AI program, to compare source code against historical documentation? I can’t go to the cloud, as there are too many security concerns. | 2023-12-15T14:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/18j1uny/ai_for_code_documentation_errors/ | EngineerVsMBA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j1uny | false | null | t3_18j1uny | /r/LocalLLaMA/comments/18j1uny/ai_for_code_documentation_errors/ | false | false | self | 7 | null |
Self Hosted AI Assistant for local company | 5 | Hey Guys, I'm trying to set up a local AI coding assistant for my company. I have done some research and I can get a powerful GPU if needed. I have hosted several models for general usage, but I need some information about:
Which models work best with VSCode extensions?
How do I handle parallel queries? Currently users have to wait for each other to get a reply.
Most important; How do I train a model with our codebase and documentation? | 2023-12-15T14:50:20 | https://www.reddit.com/r/LocalLLaMA/comments/18j1ssx/self_hosted_ai_assistant_for_local_company/ | Sepkov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j1ssx | false | null | t3_18j1ssx | /r/LocalLLaMA/comments/18j1ssx/self_hosted_ai_assistant_for_local_company/ | false | false | self | 5 | null |
Difficulty with Ollama | 1 | Hello!
I'm attempting to run LLaMA-2 locally on Mac using Ollama. When I enter "ollama run llama2" on the terminal, I get this error:
​
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2/manifests/latest": tls: failed to verify certificate: x509: “ollama.ai” certificate is not trusted
​
I don't have much skill yet, so I guess I need some help to get past this point. Sorry to bother but any help you can provide is appreciated. | 2023-12-15T14:21:15 | https://www.reddit.com/r/LocalLLaMA/comments/18j16cu/difficulty_with_ollama/ | underdog_m | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j16cu | false | null | t3_18j16cu | /r/LocalLLaMA/comments/18j16cu/difficulty_with_ollama/ | false | false | self | 1 | null |
GerMerge - German mistral v0.2 merge. | 6 | This is a SLERP merge of Mistral v0.2 and em_german_leo_mistral-leo-mistral to enhance the new Mistral 7B with German capabilities.
Disclaimer: I am a total noob, so don't expect any technical knowledge from me. I just played around with the parameters until I was satisfied with the outcome.
Example output:
Prompt: [INST]Ich habe heute 4 Äpfel. Gestern aß ich 3 Äpfel. Wie viele Äpfel habe ich heute?[/INST]
Output: Heute hast du 4 Äpfel. Die Äpfel, die du gestern gegessen hast, ändern nichts an der Anzahl der Äpfel, die du heute hast. Daher hast du immer noch 4 Äpfel.
No Benchmarks yet. If someone wants to do them feel free. Or just try it out.
Weights:
[https://huggingface.co/genericgod/GerMerge-em-leo-mistral-v0.2-SLERP](https://huggingface.co/genericgod/GerMerge-em-leo-mistral-v0.2-SLERP)
GGUF:
[https://huggingface.co/genericgod/GerMerge-em-leo-mistral-v0.2-SLERP-GGUF](https://huggingface.co/genericgod/GerMerge-em-leo-mistral-v0.2-SLERP-GGUF) | 2023-12-15T14:11:31 | https://www.reddit.com/r/LocalLLaMA/comments/18j0z6u/germerge_german_mistral_v02_merge/ | genericgod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j0z6u | false | null | t3_18j0z6u | /r/LocalLLaMA/comments/18j0z6u/germerge_german_mistral_v02_merge/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'wj0zgwwqZ3Mxt4VdrgxAl2kbcZJFkdUX7szGhead0B4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZzoHUCCArtCz8WWnfmTz973PXGrbaqrAqxECotz1wAw.jpg?width=108&crop=smart&auto=webp&s=0eff1be55a16663f8a87073f0fa26ac3cebf1a06', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZzoHUCCArtCz8WWnfmTz973PXGrbaqrAqxECotz1wAw.jpg?width=216&crop=smart&auto=webp&s=887f0f1f7e58ec0959a4941044250a869ed460d8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZzoHUCCArtCz8WWnfmTz973PXGrbaqrAqxECotz1wAw.jpg?width=320&crop=smart&auto=webp&s=0d4b59f9265dab179cc314e235a3940bdb750460', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZzoHUCCArtCz8WWnfmTz973PXGrbaqrAqxECotz1wAw.jpg?width=640&crop=smart&auto=webp&s=b4a81f07ff51f64c06407dcbfa20ebea342ee512', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZzoHUCCArtCz8WWnfmTz973PXGrbaqrAqxECotz1wAw.jpg?width=960&crop=smart&auto=webp&s=acdc06ef8ce42351871572607973d8dfc37a27dd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZzoHUCCArtCz8WWnfmTz973PXGrbaqrAqxECotz1wAw.jpg?width=1080&crop=smart&auto=webp&s=1ff2450de996d51b9566574feef7fd741dd63dc9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZzoHUCCArtCz8WWnfmTz973PXGrbaqrAqxECotz1wAw.jpg?auto=webp&s=f8f8867a90a5186bf413c239ca254644b75744a9', 'width': 1200}, 'variants': {}}]} |
LLaMA just got sarcastic with me🤣 | 1 | 2023-12-15T14:09:35 | ChildOf7Sins | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18j0xq0 | false | null | t3_18j0xq0 | /r/LocalLLaMA/comments/18j0xq0/llama_just_got_sarcastic_with_me/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'yQNWJyM7srIPdniGKWCtLBJ2ikbGiuc2FQSuiVQKmyM', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/goani3ffug6c1.png?width=108&crop=smart&auto=webp&s=2b045948cc68b1e5b83bbe2d6c7d413d89ab0776', 'width': 108}, {'height': 63, 'url': 'https://preview.redd.it/goani3ffug6c1.png?width=216&crop=smart&auto=webp&s=c3d81f9400274cc47ec70ea8b3d92a1b590e0a36', 'width': 216}, {'height': 94, 'url': 'https://preview.redd.it/goani3ffug6c1.png?width=320&crop=smart&auto=webp&s=1588bd54ac76d0eb19a13968c237c9cbe1c9506b', 'width': 320}, {'height': 188, 'url': 'https://preview.redd.it/goani3ffug6c1.png?width=640&crop=smart&auto=webp&s=e4f53159a937f6f6bdae1620c2a4a73d8e3fd512', 'width': 640}], 'source': {'height': 270, 'url': 'https://preview.redd.it/goani3ffug6c1.png?auto=webp&s=e05d2d882c836492001636d6a8ae44614b52ecef', 'width': 916}, 'variants': {}}]} | |||
What is the benefit of using A100 4090 instead of two rtx 4090? | 1 | [removed] | 2023-12-15T14:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/18j0ry3/what_is_the_benefit_of_using_a100_4090_instead_of/ | ipedpedi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j0ry3 | false | null | t3_18j0ry3 | /r/LocalLLaMA/comments/18j0ry3/what_is_the_benefit_of_using_a100_4090_instead_of/ | false | false | self | 1 | null |
recommendations for uncensored models for writing | 1 | [removed] | 2023-12-15T13:31:13 | https://www.reddit.com/r/LocalLLaMA/comments/18j05h1/recommendations_for_uncensored_models_for_writing/ | vinay737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j05h1 | false | null | t3_18j05h1 | /r/LocalLLaMA/comments/18j05h1/recommendations_for_uncensored_models_for_writing/ | false | false | self | 1 | null |
recommendations for uncensored models | 1 | [removed] | 2023-12-15T13:30:15 | https://www.reddit.com/r/LocalLLaMA/comments/18j04op/recommendations_for_uncensored_models/ | vinay737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j04op | false | null | t3_18j04op | /r/LocalLLaMA/comments/18j04op/recommendations_for_uncensored_models/ | false | false | self | 1 | null |
recommendation for uncensored models | 1 | [removed] | 2023-12-15T13:28:43 | https://www.reddit.com/r/LocalLLaMA/comments/18j03i5/recommendation_for_uncensored_models/ | vinay737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j03i5 | false | null | t3_18j03i5 | /r/LocalLLaMA/comments/18j03i5/recommendation_for_uncensored_models/ | false | false | self | 1 | null |
"create_tensor: tensor 'blk.0.ffn_gate.weight' not found" error when loading Mixtral on LMStudio | 2 | I have this error when loading this model to lmstudio. I have tried different mixtral versions, reloading, restarting lmstudio multiple times, etc. Other models works correctly. I saw a thread on github from few days ago in which someone told that llama.cpp wasn't merged and did not support mixtral, but it is merged now. ( [error loading model: create\_tensor: tensor 'blk.0.ffn\_gate.weight' not found · Issue #4881 · oobabooga/text-generation-webui (github.com)](https://github.com/oobabooga/text-generation-webui/issues/4881) )
Anyone can help me with this? | 2023-12-15T13:27:37 | https://www.reddit.com/r/LocalLLaMA/comments/18j02pr/create_tensor_tensor_blk0ffn_gateweight_not_found/ | Zealousideal-Cry7806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18j02pr | false | null | t3_18j02pr | /r/LocalLLaMA/comments/18j02pr/create_tensor_tensor_blk0ffn_gateweight_not_found/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'EqGnwNtndPL6niBC0lmdLVP5WiWIze4ZZQGewkCqPQE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FXrvgUSwRSoJG0dOlbiUkvfrnlWtCiOfnZJemSguXXg.jpg?width=108&crop=smart&auto=webp&s=865c4406b25b5c9a1cfdff0ae13257a6441b4bb0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FXrvgUSwRSoJG0dOlbiUkvfrnlWtCiOfnZJemSguXXg.jpg?width=216&crop=smart&auto=webp&s=049507405f2150662563999b88f54d2163a5f443', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FXrvgUSwRSoJG0dOlbiUkvfrnlWtCiOfnZJemSguXXg.jpg?width=320&crop=smart&auto=webp&s=0acbabf935df33b38a797dbc6fbb3044681a5c92', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FXrvgUSwRSoJG0dOlbiUkvfrnlWtCiOfnZJemSguXXg.jpg?width=640&crop=smart&auto=webp&s=290f936e2925aedf86fc1527fa741efe4aa4da99', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FXrvgUSwRSoJG0dOlbiUkvfrnlWtCiOfnZJemSguXXg.jpg?width=960&crop=smart&auto=webp&s=57dcfbac82a8564c4fc569c0ae831ef339043df5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FXrvgUSwRSoJG0dOlbiUkvfrnlWtCiOfnZJemSguXXg.jpg?width=1080&crop=smart&auto=webp&s=c65c78ece7fd3c44b5d74a0714b09cacd6f9d9b1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FXrvgUSwRSoJG0dOlbiUkvfrnlWtCiOfnZJemSguXXg.jpg?auto=webp&s=a582ca6ebaf4c874db5326a6ca231e382e8801e2', 'width': 1200}, 'variants': {}}]} |
Is there a way to get the alpaca electron/7b 4 bit quantized model of llama.cpp to hallucinate less? Yes it’s hilarious to watch it wander off into a life it has created of its own, but just curious. | 3 | I have just recently dug into the rabbit hole of hdl and made my own assembler, but im not sure where to start for this | 2023-12-15T13:22:24 | https://www.reddit.com/r/LocalLLaMA/comments/18izz1i/is_there_a_way_to_get_the_alpaca_electron7b_4_bit/ | Wild-Librarian4511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18izz1i | false | null | t3_18izz1i | /r/LocalLLaMA/comments/18izz1i/is_there_a_way_to_get_the_alpaca_electron7b_4_bit/ | false | false | self | 3 | null |
Serious RAG question | 16 | If you have built a serious RAG application, what are your best strategies for managing context?
And why does no one talk about it? This seems to me the most important thing in the user experience and engineering of any RAG.
I loved the cognitive agent approach for RAG - https://arxiv.org/pdf/2309.02427.pdf | 2023-12-15T13:17:08 | https://www.reddit.com/r/LocalLLaMA/comments/18izvbh/serious_rag_question/ | ashutrv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18izvbh | false | null | t3_18izvbh | /r/LocalLLaMA/comments/18izvbh/serious_rag_question/ | false | false | self | 16 | null |
Models recommendation for 16gb ram devices | 1 | [removed] | 2023-12-15T13:05:22 | https://www.reddit.com/r/LocalLLaMA/comments/18izna2/models_recommendation_for_16gb_ram_devices/ | vinay737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18izna2 | false | null | t3_18izna2 | /r/LocalLLaMA/comments/18izna2/models_recommendation_for_16gb_ram_devices/ | false | false | self | 1 | null |
How to run Phi 2 locally using CPU only? | 2 | Is there a way to run the Phi-2 2.7B model on a CPU without utilizing a GPU? I have a laptop with an integrated Intel Xe graphics card and do not have CUDA installed. However, I couldn't find a solution online for running the model exclusively on CPU. | 2023-12-15T13:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/18izm38/how_to_run_phi_2_locally_using_cpu_only/ | ZealousidealBadger47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18izm38 | false | null | t3_18izm38 | /r/LocalLLaMA/comments/18izm38/how_to_run_phi_2_locally_using_cpu_only/ | false | false | self | 2 | null |
LLM options and recommedations | 1 | [removed] | 2023-12-15T13:03:13 | https://www.reddit.com/r/LocalLLaMA/comments/18izlvb/llm_options_and_recommedations/ | vinay737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18izlvb | false | null | t3_18izlvb | /r/LocalLLaMA/comments/18izlvb/llm_options_and_recommedations/ | false | false | self | 1 | null |
Looking for recommendations | 1 | [removed] | 2023-12-15T13:01:15 | https://www.reddit.com/r/LocalLLaMA/comments/18izkgy/looking_for_recommendations/ | vinay737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18izkgy | false | null | t3_18izkgy | /r/LocalLLaMA/comments/18izkgy/looking_for_recommendations/ | false | false | self | 1 | null |
What are the experts in Mixtral? | 2 | Hey! Trying to understand how the Mixtral is made.
From my understanding, MoE is multiple small LLMs that are trained on specific topics (programming, health, writing).
So, is there any information about exact topics of Mixtral experts, or my understanding of MoE is wrong? | 2023-12-15T12:44:46 | https://www.reddit.com/r/LocalLLaMA/comments/18iz9h9/what_are_the_experts_in_mixtral/ | Andrey_best_2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iz9h9 | false | null | t3_18iz9h9 | /r/LocalLLaMA/comments/18iz9h9/what_are_the_experts_in_mixtral/ | false | false | self | 2 | null |
A little poll about usage types | 1 | What do you use your local model for?
[View Poll](https://www.reddit.com/poll/18iyx1e) | 2023-12-15T12:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/18iyx1e/a_little_poll_about_usage_types/ | DryArmPits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iyx1e | false | null | t3_18iyx1e | /r/LocalLLaMA/comments/18iyx1e/a_little_poll_about_usage_types/ | false | false | self | 1 | null |
Looking for recommendations & options | 3 | Hello everyone, I'll be very straightforward.
Which models would you recommend and do you know any resources on how to set them up. I need a model, preferably uncensored, to run locally. I have a friend who's into writing stories. We want to compare what an LLM could write, compared to his own work.
I have some basic experience working with python, so I'm not a complete newbie, I'm just new to the AI/ML stuff.
My hardware:
- CPU - Ryzen 9 5900x
- RAM - 32GB (2X16) @ 3200mhz
- GPU - RX 6950XT 16GB
As for the OS, it would be preferred if it could be done on Windows, but if not, I could dual boot a debian distro along side my Windows installation. | 2023-12-15T12:16:20 | https://www.reddit.com/r/LocalLLaMA/comments/18iys4h/looking_for_recommendations_options/ | iMonstaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iys4h | false | null | t3_18iys4h | /r/LocalLLaMA/comments/18iys4h/looking_for_recommendations_options/ | false | false | self | 3 | null |
GPT4-V is a terrible designer | 19 | Hey!
I've been recently wondering a lot: is GPT4-V capable of generating a good design?
Short answer: **GPT4-V is a terrible designer**. For now. But my bet is that it's not gonna last long and large transformer models will change the design industry forever.
But let's start. My general goal was to generate code or JSON (the format doesn't matter) that represents some design that looks good. The deeper I went, the more I had a feeling that it's impossible to achieve because large models in their current state are clueless about design. I thought it'd be nice to have a more structured experiment to prove my point.
I created a simple dataset of 12 similar experiments:
1. I made a screenshot of a section from a well-designed website ([openai.com](https://openai.com/) or [nike.com](https://nike.com/))
2. I created 3 additional variants of this section, each one having obvious design flaws.
3. I asked GPT4-V which one is correct.
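Roughly, each trial boils down to a single vision call like the sketch below. This is a simplified sketch, not the exact harness from the repo linked at the end; the file names are just example placeholders:

```python
import base64
from openai import OpenAI

client = OpenAI()

def ask_which_is_correct(image_paths):
    # One user message: the question plus the four labeled section screenshots.
    content = [{"type": "text",
                "text": "Which section (A, B, C or D) is correctly designed? Explain your reasoning."}]
    for path in image_paths:
        b64 = base64.b64encode(open(path, "rb").read()).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"}})
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{"role": "user", "content": content}],
        max_tokens=500,
    )
    return resp.choices[0].message.content

print(ask_which_is_correct(["section_a.png", "section_b.png", "section_c.png", "section_d.png"]))
```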
# Results
**2/12 correct, 10/12 errors.**
The success rate is only **16%** even though the examples are absolutely trivial. The reasoning shows zero understanding of design harmony. Even the reasoning for correct answers is not very convincing.
# Examples
Here's a banner section from Nike website. The correct section is obviously **A**:
https://preview.redd.it/022dujy57g6c1.png?width=1440&format=png&auto=webp&s=50fb6ed9d618b5cdc303257a9ca0422f6968c55a
https://preview.redd.it/cihuajy57g6c1.png?width=1440&format=png&auto=webp&s=22a4faa1da752c36da5d655e767936aeb361c496
https://preview.redd.it/lzjlily57g6c1.png?width=1440&format=png&auto=webp&s=1fcaec6fb7e18e22329940abaa9d2e75ec1151e2
https://preview.redd.it/q29z1ky57g6c1.png?width=1440&format=png&auto=webp&s=6848a3e50578b0d9658e20063a344a6fc595a3f6
GPT4-V answer: C ❌
Reasoning:
>Section C appears to be the correctly designed one based on the images provided. Here's the reasoning:
>
>
In Section A, the text "LOREM IPSUM DOLOR SIT" overlays the image of the individuals, making it hard to read and cluttering the visual focus of the section.
\- Section B exhibits the same issue, with the text overlaying the individuals, and it's also centralized, which doesn't seem to fit well with the overall layout.
\- Section C positions the title and the description text neatly below the image, which creates a cleaner division between text and visual content, improving readability and overall aesthetic.
\- Section D also keeps the text away from the image, but the text alignment to the left in conjunction with the image composition looks slightly unbalanced compared to Section C.
>
>Section C provides a clean and organized design by separating the image and text effectively, which is typically a good practice in web design for both aesthetics and functionality.
The reasoning is very bad and shows a lack of basic understanding of design harmony.
Here's another one, a grid section from OpenAI website. Section **A** is correct, the rest has obvious design errors:
https://preview.redd.it/25dkcgfk7g6c1.png?width=1560&format=png&auto=webp&s=afb07e62480fbefe2a1169b86ec5774ef2d65240
https://preview.redd.it/z4t8zffk7g6c1.png?width=1560&format=png&auto=webp&s=96c8150dddf80b3defe31f565fdec2af7bc0178c
https://preview.redd.it/6s9j0ffk7g6c1.png?width=1561&format=png&auto=webp&s=fb4e69b2f2a9e739cbf2af892ba39eca82dc966b
https://preview.redd.it/m5drlgfk7g6c1.png?width=1560&format=png&auto=webp&s=9205f35ae80c34cdf70ba9b6aec4bd45236f1557
GPT4-V answer: C ❌
Reasoning again shows lack of fundamental understanding of design:
>Section C appears to be the most correctly and cleanly designed. The other sections have the following issues:
Section A: The text "Language models are few-shot learners" is obscured by the pattern in the background, making it difficult to read.
Section B: The text "few-shot learners" within the statement "Language models are few-shot learners" wraps onto a second line unnecessarily and is also obscured by the pattern.
Section D: The end of the sentence "We've trained GPT-3, an autoregressive language model with 175 billion parameters" is obscured by the background pattern, similar to Section A.Only Section C does not seem to have any text readability issues caused by overlapping with the background pattern, making it the most cleanly designed section in terms of legibility and layout.
# All the examples and code
You can find the full code of the experiment and all results here: [https://github.com/r00dY/ai-design-benchmark](https://github.com/r00dY/ai-design-benchmark)
# Thoughts
Honestly, I was surprised by this level of incompetence, but at the same time I can't stop thinking that it's just temporary and that design is a perfect medium for transformers.
Let's start with a basic question: **why do things look good or bad?**
It's hard to answer this one with words. At the end of the day it all comes down to this feeling that design "clicks" or it doesn't. It's not about words, it's about a perceived feeling of visual harmony.
However, we can distill a lot of basic design rules:
* text blocks shouldn't overlap
* title should be larger than body text
* stack items should be aligned etc.
* the characters in a font must be visually consistent with each other.
We could find hundreds of these. People don't like fonts in which characters are not consistent with each other. But what does "consistent" even mean? Font designers know a lot of logical "heuristics" to provide this consistency, the stroke widths, the angles, the curves, the spacing, etc. **I'm pretty sure that if we tried hard enough we'd find some maths behind it.**
But designers are not mathematicians. They work in a different way. They watch billions of designs and they develop a feeling of what looks good and what doesn't. They don't think about it, they just feel it. Design process is basically a "trial and error" until something clicks.
So there are 2 facts:
1. Design has logical rules.
2. The rules are subconciously understood by people, they're not formal in any way.
**I think it's a perfect use case for large transformer models.** If a large model was pre-trained with a huge amount of good designs (even without any labeling) it would understand the rules about those designs in the same way designers do by watching hundreds of inspirations a day. And it's gonna be a game changer.
The experiments in this repo show that GPT4-V is not there yet. My feeling is that it wasn't pre-trained on enough designs yet?
Very curious about your thoughts! | 2023-12-15T12:08:38 | https://www.reddit.com/r/LocalLLaMA/comments/18iynn2/gpt4v_is_a_terrible_designer/ | rudzienki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iynn2 | false | null | t3_18iynn2 | /r/LocalLLaMA/comments/18iynn2/gpt4v_is_a_terrible_designer/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'GjYW-4MEOauuAby3KlSK8RMYuUQmxatADtxQ8SNUpz8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M8oyzRSst_21qqEL0krjPF4eYz6zBxk0nfhE6MOcDgk.jpg?width=108&crop=smart&auto=webp&s=6330bfe882a086d085037eae65ebddf58d985fb5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M8oyzRSst_21qqEL0krjPF4eYz6zBxk0nfhE6MOcDgk.jpg?width=216&crop=smart&auto=webp&s=2723989673cf48bdcde93f669deddd84178ca7e7', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/M8oyzRSst_21qqEL0krjPF4eYz6zBxk0nfhE6MOcDgk.jpg?width=320&crop=smart&auto=webp&s=443e30f454855f2574ed610f0552e40e9a338406', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/M8oyzRSst_21qqEL0krjPF4eYz6zBxk0nfhE6MOcDgk.jpg?width=640&crop=smart&auto=webp&s=7ea1389b6fbb0d4893cd6e242415cd727531b853', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/M8oyzRSst_21qqEL0krjPF4eYz6zBxk0nfhE6MOcDgk.jpg?width=960&crop=smart&auto=webp&s=00bdb72502aead762c5fa71c4e56420f7bcf5009', 'width': 960}], 'source': {'height': 562, 'url': 'https://external-preview.redd.it/M8oyzRSst_21qqEL0krjPF4eYz6zBxk0nfhE6MOcDgk.jpg?auto=webp&s=b3c75cf68942e80505259dc3403e46c07f10bbc5', 'width': 1000}, 'variants': {}}]} | |
Error loading "instruction-templates" | 2 | Why am I getting this error? When I try to load any preset I get this. Reinstalling the interface to an older version did not help.
https://preview.redd.it/jyv3if3t6g6c1.jpg?width=2500&format=pjpg&auto=webp&s=927d790f63e39bf8539a5bd5c70fa61109881608 | 2023-12-15T11:56:15 | https://www.reddit.com/r/LocalLLaMA/comments/18iygbf/error_loading_instructiontemplates/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iygbf | false | null | t3_18iygbf | /r/LocalLLaMA/comments/18iygbf/error_loading_instructiontemplates/ | false | false | 2 | null | |
How to utilize AWS and VLLM, to make an Api available to a running llm on an AWS sage maker gpu. | 2 | I'm using the code below to do exactly this:
python -m vllm.entrypoints.openai.api_server --host 127.0.0.1 --port 8888 --model mistralai/Mistral-7B-Instruct-v0.1 & npx localtunnel --port 8888
in a SageMaker notebook terminal.
What happens when I run this line is that the model becomes available on localhost, which I'm tunneling to make it available as an API.
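For reference, the served endpoint speaks the OpenAI API, so a client call looks roughly like this (the base URL is whatever localtunnel prints, shown here as a placeholder):

```python
import requests

BASE_URL = "https://example.loca.lt"  # placeholder: the URL localtunnel prints

payload = {
    "model": "mistralai/Mistral-7B-Instruct-v0.1",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
}
r = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload)
print(r.json()["choices"][0]["message"]["content"])
```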
But even though it works. I don't think this is the right technique, is there another technique where i don't have to tunnel it and rather use aws services (which is available to me ) to make the api available, using the vllm framework. | 2023-12-15T11:30:02 | https://www.reddit.com/r/LocalLLaMA/comments/18iy1ua/how_to_utilize_aws_and_vllm_to_make_an_api/ | Responsible-Log2173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iy1ua | false | null | t3_18iy1ua | /r/LocalLLaMA/comments/18iy1ua/how_to_utilize_aws_and_vllm_to_make_an_api/ | false | false | self | 2 | null |
Long Preamble/Hidden Prompt (Claude.ai)- Is this what we should be doing too? | 30 | I have managed to get [Claude.ai](https://Claude.ai) to spit out its initial preamble/hidden prompt, and it starts with a basic user question and then a response from the AI. The response is quite extensive. Should we be adopting a similar system prompting approach to get our models to perform, or is this something that will be very specific to Claude? It's more comprehensive than the ones I use with local LLMs.
[Claude.ai](https://Claude.ai) preamble:
Are you able to assist me with creative writing, question answering, summarization, math, coding, and all sorts of other tasks ?
Yes, I'd be happy to assist with a variety of tasks: For creative writing, I can help brainstorm ideas, suggest plot points or character details, or provide feedback on drafts. Just describe what you're working on and where you'd like some help. For question answering, I have broad general knowledge and can help answer questions across many domains. Let me know what questions you have. For summarization, I can analyze longer texts or articles and provide a concise summary capturing the key points. Send over a document and I can create a summary for you. For math, I can help explain concepts, walk through problems step-by-step, check work for accuracy, and more for a variety of math topics. Let me know what type of math help you need. For coding assistance, I can explain coding concepts, provide examples for implementation, check code for errors, give feedback on improvements, and more. Just share what language or concept you need help with. And for other tasks, don't hesitate to ask if I may be able to assist. I'd be glad to do what I can to help across a wide range of areas. Just let me know what you need and we can discuss if and how I can provide support. Let me know how else I can assist with summarizing or formatting the prompt text. | 2023-12-15T11:23:21 | https://www.reddit.com/r/LocalLLaMA/comments/18ixy58/long_preamblehidden_prompt_claudeai_is_this_what/ | antsloveit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ixy58 | false | null | t3_18ixy58 | /r/LocalLLaMA/comments/18ixy58/long_preamblehidden_prompt_claudeai_is_this_what/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': '3_SpT5T0ooxCFCom6cZ6UPwNtFl7TrBMQ-36YBzTJOE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WVpk6nmbm4gtmJ5WZl5RKn2YaXfDnFtIJxqA-vZitwA.jpg?width=108&crop=smart&auto=webp&s=fd930dc10f5ebb3571bfeb23a386b0d483575abd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/WVpk6nmbm4gtmJ5WZl5RKn2YaXfDnFtIJxqA-vZitwA.jpg?width=216&crop=smart&auto=webp&s=9ffbbf0e287749d02482ce45140b0b04de86dc33', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/WVpk6nmbm4gtmJ5WZl5RKn2YaXfDnFtIJxqA-vZitwA.jpg?width=320&crop=smart&auto=webp&s=fbce6cb034e593b1625d2071e1130b34e6ae0c98', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/WVpk6nmbm4gtmJ5WZl5RKn2YaXfDnFtIJxqA-vZitwA.jpg?width=640&crop=smart&auto=webp&s=5513ba0922a40d8b4f51e0cf964a9834bcb8fabd', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/WVpk6nmbm4gtmJ5WZl5RKn2YaXfDnFtIJxqA-vZitwA.jpg?width=960&crop=smart&auto=webp&s=f90abf23504a47b7b44de0aca8243ed90ae6724c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/WVpk6nmbm4gtmJ5WZl5RKn2YaXfDnFtIJxqA-vZitwA.jpg?width=1080&crop=smart&auto=webp&s=43e31652a7dafda0488526baf2f1652e6ce23c4f', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/WVpk6nmbm4gtmJ5WZl5RKn2YaXfDnFtIJxqA-vZitwA.jpg?auto=webp&s=7c98aa6d49166211d01a104c751d08e0ca43be7d', 'width': 2400}, 'variants': {}}]} |
Koboldcpp now has Linux binaries / Micromamba (conda) based runtime | 110 | Hey Everyone,
I am excited to announce that Koboldcpp now has official Linux binaries that should be compatible with most distros out there (for the official version, not the forks). The binaries are similar to the Windows release in that they are plug and play, with no installation required. When I tested on Manjaro, an empty distro installation with just the proprietary driver option was enough to use the CuBLAS backend.
# Binaries
You can find the portable binary on the [release page](https://koboldai.org/cpp) of 1.51.1 and newer, and the latest one is directly available at [https://koboldai.org/cpplinux](https://koboldai.org/cpplinux). Simply give it execute permissions (chmod +x koboldcpp-linux-x64) and launch it from the terminal. While the launcher GUI does function, Koboldcpp is designed to run in the background and display its results in the terminal as it is running as a server. So if you wish to be able to close out of Koboldcpp easily or view its statistics, running it from the terminal is currently a must. Of course, headless running is possible when launched through the command line options (see --help for more info), so this can also be used for unattended AI servers.
For CUDA the relevant runtime libraries are bundled, so the presence of the Nvidia driver should be enough. For OpenCL we expect you to have ocl-icd on your system along with the relevant OpenCL driver.
The current release is limited to AVX2; we are planning to expand this to non-AVX2 systems, similar to the Windows release. Of course, CPU-only systems also run out of the box, with a GPU being optional.
# Koboldcpp.sh
(This option may conflict with existing conda installations if those installations automatically init / hijack our script.)
For those who do not have luck with the binaries, or who wish to run from source, we also have a new option: koboldcpp.sh. This file is very similar to our play.sh from the regular KoboldAI Client; it will automatically download micromamba along with all the libraries and compilers you need for a successful Koboldcpp compilation. After compiling, it will automatically launch Koboldcpp inside its own Python environment.
Normal usage of this file is identical to our .py script: you can either launch it in the terminal for a GUI, or use --help for the command line options of Koboldcpp to launch it headless from the command line. In addition to those options there are 2 extra commands:
`rebuild - This command will rebuild the conda environment and recompile the Koboldcpp binaries`
`dist - This is the command we use to generate the binary mentioned above. This should allow you to get a binary specific to your distribution. For wider compatibility we advise you to compile this inside an older Ubuntu docker or CI if you wish to distribute your build to other users.`
# Recap of Koboldcpp
**Koboldcpp is not just a wrapper**
Some mistake us for just an API wrapper / UI; this is not the case. Koboldcpp is its own fork of Llamacpp with its own unique features, most notably its own context shifting implementation that allows context shifting to work over the APIs. Because of this, as long as the text of your context remains the same, we can intelligently detect which context has been shifted and avoid reprocessing. Our implementation differs from the upstream one and is better suited to more advanced API usage where parts of the text are rewritten (we do begin to reprocess the context from the first part that contains something new, which can be triggered by things such as World Info or randomized values).
We also still support the older GGML formats, all the way back to the original implementation, and have a changeable sampler order, optional MMQ, etc.
**Not just for story writing**
Our initial releases were fully optimized for story writing, mimicking the behavior of the full KoboldAI client and defaulting to a story mode. The current KoboldAI Lite UI defaults to instruct mode, and both our UI and API no longer block the EOS token by default. Its powerful UI and APIs, (opt-in) multi-user queuing, and AGPLv3 license make Koboldcpp an interesting choice for a local or remote AI server.
Of course, if you do want to use it for fictional purposes, we have a powerful UI for writing, adventure games and chat, with different UI modes suited to each use case, including character card support.
**OpenAI Emulation out of the box**
While we prefer our own /api, we understand there is a large ecosystem of OpenAI-compatible integrations and applications out there. With Koboldcpp you do not have to choose, as we run everything side by side. If some of your applications are programmed for the KoboldAI API and others for the OpenAI API, you can run them without having to relaunch Koboldcpp or change any settings (a minimal client sketch for both APIs follows after this post).
**Go beyond the home**
We optimized the binary to run on a wide range of cloud platforms ([Colab](https://koboldai.org/colabcpp) has its own specialized notebook); this includes cheaper VastAI instances, which may run an older version of CUDA (we support 11.5 and up), allowing you to run larger models at reduced cost.
Because Koboldcpp is a server, the location of your client does not matter; the UI can be loaded in the web browser on your own PC or phone. Firewalled host? No problem: with --remote you can automatically generate a cloudflare tunnel with a working link.
I hope this helps more Linux users enjoy Koboldcpp. If you have any questions, feel free to share them in the comments.
​ | 2023-12-15T11:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/18ixudn/koboldcpp_now_has_linux_binaries_micromamba_conda/ | henk717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ixudn | false | null | t3_18ixudn | /r/LocalLLaMA/comments/18ixudn/koboldcpp_now_has_linux_binaries_micromamba_conda/ | false | false | self | 110 | {'enabled': False, 'images': [{'id': 'ZzgnLNF2dmm8s77W3lOyl5rDXGo2pVPXAsQfOcQhkNg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SCtM3mt9F_F0AXcV-HgZr3Pyimd_3htheY9Tl1ef3yo.jpg?width=108&crop=smart&auto=webp&s=861246fae858535f85f3bae949f14eeef3e8e03d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SCtM3mt9F_F0AXcV-HgZr3Pyimd_3htheY9Tl1ef3yo.jpg?width=216&crop=smart&auto=webp&s=13c339f391259ee1214e86df793d3abf3aa01df9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SCtM3mt9F_F0AXcV-HgZr3Pyimd_3htheY9Tl1ef3yo.jpg?width=320&crop=smart&auto=webp&s=69f746c6a5c630b61379a2f5f108951e467e902a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SCtM3mt9F_F0AXcV-HgZr3Pyimd_3htheY9Tl1ef3yo.jpg?width=640&crop=smart&auto=webp&s=9f5f9165aad8588c90d52cbcfd156a644c5313c6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SCtM3mt9F_F0AXcV-HgZr3Pyimd_3htheY9Tl1ef3yo.jpg?width=960&crop=smart&auto=webp&s=a39a16e5f1d677ab09ed1f65b6915a6e81c21d62', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SCtM3mt9F_F0AXcV-HgZr3Pyimd_3htheY9Tl1ef3yo.jpg?width=1080&crop=smart&auto=webp&s=db79be6fe8073264fac237d56875f942c4731db7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SCtM3mt9F_F0AXcV-HgZr3Pyimd_3htheY9Tl1ef3yo.jpg?auto=webp&s=245f95a6f1c61be4fe100fbe02bab2ff9747e959', 'width': 1200}, 'variants': {}}]} |
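To make the "everything side by side" point concrete, here is a minimal Python client sketch against a running Koboldcpp server; the port (5001) and endpoint paths are assumptions based on Koboldcpp's defaults and may need adjusting for your setup.

```python
# Minimal sketch: talking to a running Koboldcpp server from Python.
# Assumes the server listens on the default port 5001; adjust as needed.
import requests

BASE = "http://localhost:5001"

# Native KoboldAI API
kobold = requests.post(
    f"{BASE}/api/v1/generate",
    json={"prompt": "Once upon a time", "max_length": 80},
    timeout=300,
)
print(kobold.json()["results"][0]["text"])

# OpenAI-compatible emulation served by the same process
openai_style = requests.post(
    f"{BASE}/v1/completions",
    json={"prompt": "Once upon a time", "max_tokens": 80},
    timeout=300,
)
print(openai_style.json()["choices"][0]["text"])
```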
Best prompt format for fine-tuning | 1 | [removed] | 2023-12-15T11:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/18ixtr4/best_prompt_format_for_finetuning/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ixtr4 | false | null | t3_18ixtr4 | /r/LocalLLaMA/comments/18ixtr4/best_prompt_format_for_finetuning/ | false | false | self | 1 | null |
TigerBot: An Open Multilingual Multitask LLM | 5 | 2023-12-15T11:12:43 | https://arxiv.org/abs/2312.08688 | Elven77AI | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 18ixs7g | false | null | t3_18ixs7g | /r/LocalLLaMA/comments/18ixs7g/tigerbot_an_open_multilingual_multitask_llm/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
Automatic iterative code revision | 1 | 2023-12-15T10:54:36 | https://github.com/dmsweetser/CodeReviser | Upset_Acanthaceae_18 | github.com | 1970-01-01T00:00:00 | 0 | {} | 18ixi0l | false | null | t3_18ixi0l | /r/LocalLLaMA/comments/18ixi0l/automatic_iterative_code_revision/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'LnvVyLOnvcW3RaTLD04SLnVuNDO2oFX3o3HzY4D16KA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nxa4wtkxQhWimcJyV1LJ_lCBCQbWJL6EVm3Jrn9RLNA.jpg?width=108&crop=smart&auto=webp&s=ba6a49573b9ee15f2431d67ac3cd3edb83260b99', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nxa4wtkxQhWimcJyV1LJ_lCBCQbWJL6EVm3Jrn9RLNA.jpg?width=216&crop=smart&auto=webp&s=aa8c4d81d00639c96ad32a864c0e1d09d53e79ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nxa4wtkxQhWimcJyV1LJ_lCBCQbWJL6EVm3Jrn9RLNA.jpg?width=320&crop=smart&auto=webp&s=c7483c6c96a873e3d395fe58559a804473fd647f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nxa4wtkxQhWimcJyV1LJ_lCBCQbWJL6EVm3Jrn9RLNA.jpg?width=640&crop=smart&auto=webp&s=65d3fdd59ec571114fab93b860a86add2700a6a1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nxa4wtkxQhWimcJyV1LJ_lCBCQbWJL6EVm3Jrn9RLNA.jpg?width=960&crop=smart&auto=webp&s=12010be5e6eb475ce2fb88067506cd9337157ae1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nxa4wtkxQhWimcJyV1LJ_lCBCQbWJL6EVm3Jrn9RLNA.jpg?width=1080&crop=smart&auto=webp&s=79ed42cda3c23dcd1cb73bdd1dd2535bb891f1de', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nxa4wtkxQhWimcJyV1LJ_lCBCQbWJL6EVm3Jrn9RLNA.jpg?auto=webp&s=ad2c12dece00bc5a5491f8fd42fb43904d398de5', 'width': 1200}, 'variants': {}}]} | ||
I'm building r/MistralAI - feel free to post relevant questions there as well! | 1 | 2023-12-15T10:52:01 | https://www.reddit.com/r/MistralAI/ | anonboxis | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18ixgqk | false | null | t3_18ixgqk | /r/LocalLLaMA/comments/18ixgqk/im_building_rmistralai_feel_free_to_post_relevant/ | false | false | default | 1 | null | |
Mixstral 8x7B jailbreak | 1 | [removed] | 2023-12-15T10:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/18ixf2l/mixstral_8x7b_jailbreak/ | Internet--Traveller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ixf2l | false | null | t3_18ixf2l | /r/LocalLLaMA/comments/18ixf2l/mixstral_8x7b_jailbreak/ | false | false | nsfw | 1 | null |
Connect localGPT with Confluence API | 2 | I am a complete newbie and wanted to ask you guys if it's possible to connect localGPT with the Confluence API/Confluence loader. If so, can you provide steps or a tutorial? This should happen in an enterprise environment, so there will be a large amount of data in the database. Furthermore, can you give recommendations about the vector db, and whether I will need a document db for this use case?
The goal is to be able to chat with your LLM, which then retrieves information from Confluence (with sources). I planned to use Llama-2-13b as the LLM, and I am still unsure which embedding model to use. (A rough indexing sketch follows below.)
Thank you in advance! | 2023-12-15T09:26:43 | https://www.reddit.com/r/LocalLLaMA/comments/18iw9gl/connect_localgpt_with_confluence_api/ | mynxsol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iw9gl | false | null | t3_18iw9gl | /r/LocalLLaMA/comments/18iw9gl/connect_localgpt_with_confluence_api/ | false | false | self | 2 | null |
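This is not localGPT-specific, but one common route is LangChain's ConfluenceLoader feeding a local vector store, which your LLM then queries at answer time. A rough sketch follows; the URL, credentials, space key, and model names are placeholders.

```python
# One possible sketch (not localGPT-specific): pull Confluence pages with
# LangChain's ConfluenceLoader and index them into a local Chroma store.
# URL, credentials, space key, and model names below are placeholders.
from langchain.document_loaders import ConfluenceLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

loader = ConfluenceLoader(
    url="https://your-company.atlassian.net/wiki",
    username="you@your-company.com",
    api_key="YOUR_ATLASSIAN_API_TOKEN",
)
docs = loader.load(space_key="ENG", limit=50)

# Split pages into chunks and embed them locally
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma.from_documents(chunks, embeddings, persist_directory="./confluence_db")

# At query time: retrieve the most relevant chunks (with their source metadata)
# and feed them to your local LLM as context.
hits = db.similarity_search("How do we rotate the staging TLS certificates?", k=4)
for hit in hits:
    print(hit.metadata.get("source"), hit.page_content[:120])
```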
Does Mixtral also use OpenAI data? | 1 | I tried it on HuggingChat and it has the same knowledge cutoff, and it says it was developed by OpenAI.
https://preview.redd.it/emtjf9i2af6c1.png?width=755&format=png&auto=webp&s=5a211d3fb87f6168943859ea8611db47af1341b5 | 2023-12-15T08:53:52 | https://www.reddit.com/r/LocalLLaMA/comments/18ivtr0/does_mixtral_also_use_openai_data/ | nggakmakasih | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ivtr0 | false | null | t3_18ivtr0 | /r/LocalLLaMA/comments/18ivtr0/does_mixtral_also_use_openai_data/ | false | false | 1 | null | |
Just a gallery of some generations I found interesting from the new Mixtral model. it pulls stuff off that only GPT4 was capable of and some that even it cant. | 35 | 2023-12-15T08:37:13 | https://imgur.com/a/YvekXt8 | Different_Fix_2217 | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 18ivlpf | false | {'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 1257, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FYvekXt8%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D900&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FYvekXt8&image=https%3A%2F%2Fi.imgur.com%2F6CSM7Ai.jpg%3Ffb&key=2aa3c4d5f3de4f5b9120b660ad850dc9&type=text%2Fhtml&schema=imgur" width="600" height="1257" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 315, 'thumbnail_url': 'https://i.imgur.com/6CSM7Ai.jpg?fb', 'thumbnail_width': 600, 'title': 'Mixtral', 'type': 'rich', 'url': 'https://imgur.com/a/YvekXt8', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'} | t3_18ivlpf | /r/LocalLLaMA/comments/18ivlpf/just_a_gallery_of_some_generations_i_found/ | false | false | 35 | {'enabled': False, 'images': [{'id': 'kNc9gUKP-srmlU_wydYGJvnPbkePfHHD1WSgslRsKaM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/Vc-buDutq8KYLC4lJ3wpLQ1uyzryMmddqPRirpHei2c.jpg?width=108&crop=smart&auto=webp&s=6c3e4879f2fc2968a67e8fc26de83757024f89db', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/Vc-buDutq8KYLC4lJ3wpLQ1uyzryMmddqPRirpHei2c.jpg?width=216&crop=smart&auto=webp&s=989858fea321fb6efd64749f6526fe2774b0229b', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/Vc-buDutq8KYLC4lJ3wpLQ1uyzryMmddqPRirpHei2c.jpg?width=320&crop=smart&auto=webp&s=64dff5b223d68988c65e4191bf090a528bcc9708', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/Vc-buDutq8KYLC4lJ3wpLQ1uyzryMmddqPRirpHei2c.jpg?width=640&crop=smart&auto=webp&s=fca4c32fb9c9d26b996bb82f6bccb4cb1b944dfe', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/Vc-buDutq8KYLC4lJ3wpLQ1uyzryMmddqPRirpHei2c.jpg?width=960&crop=smart&auto=webp&s=532b3ae8d4790589498ae98b7c5a32eeede5fb6a', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/Vc-buDutq8KYLC4lJ3wpLQ1uyzryMmddqPRirpHei2c.jpg?width=1080&crop=smart&auto=webp&s=9ef611ae6dc4a58dc7911d82be4a2dff43f5c01b', 'width': 1080}], 'source': {'height': 3690, 'url': 'https://external-preview.redd.it/Vc-buDutq8KYLC4lJ3wpLQ1uyzryMmddqPRirpHei2c.jpg?auto=webp&s=b9cb61e40da503888c7670c7f9c96bf1bc86928f', 'width': 1820}, 'variants': {}}]} | ||
OOM with llama.cpp server, while working fine in CLI mode | 1 | Hi all.
I've made a systemd service running the llama.cpp server with Mixtral 8x7B at Q4 quantisation. It worked okay for a day or two, but then started OOM'ing for some reason. If I launch the same model with the same context size and other parameters in CLI mode (i.e. just remove the --host flag and change ./server to ./main), it works as expected, generating text quite freely without OOM'ing.
I have an Intel Xeon E5-2666 v3 CPU, 32 GB of RAM, and an RTX 4060 8 GB with 3 layers offloaded to the GPU. I've tried both mlock enabled and disabled; it makes no difference: it still works in CLI mode and OOMs as a server.
What am I doing wrong? | 2023-12-15T08:34:55 | https://www.reddit.com/r/LocalLLaMA/comments/18ivkkw/oom_with_llamacpp_server_while_working_fine_in/ | netikas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ivkkw | false | null | t3_18ivkkw | /r/LocalLLaMA/comments/18ivkkw/oom_with_llamacpp_server_while_working_fine_in/ | false | false | self | 1 | null |
Mac M3 96GB vs 128GB? | 3 | I’m about to upgrade from an old machine to a new MBP, and before I commit to a specific amount of soldered-on RAM, I’m trying to figure out whether there’s anything important I could do with 128GB unified memory that I couldn’t do with 96, in the LLM space. Like run a 70B f16 model? Unquantized 8x7B?
Any other thoughts?
Thanks! | 2023-12-15T08:07:10 | https://www.reddit.com/r/LocalLLaMA/comments/18iv6y0/mac_m3_96gb_vs_128gb/ | SuperMonkeyCollider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iv6y0 | false | null | t3_18iv6y0 | /r/LocalLLaMA/comments/18iv6y0/mac_m3_96gb_vs_128gb/ | false | false | self | 3 | null |
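A rough back-of-the-envelope check, counting weights only (KV cache, macOS overhead, and the default cap on GPU-visible unified memory are ignored): f16 is 2 bytes per parameter, so a 70B model is about 140 GB and will not fit unquantized in either configuration, while Mixtral 8x7B (about 46.7B total parameters) is about 93 GB at f16, which is out of reach on 96 GB once the system takes its share but workable on 128 GB. A tiny sketch of the arithmetic:

```python
# Back-of-the-envelope weight sizes (weights only; KV cache and OS overhead
# are not counted, and macOS reserves part of unified memory for itself).
def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    # billions of parameters * bytes per parameter = gigabytes (decimal GB)
    return params_billions * bytes_per_param

print(f"70B  @ f16 (2.0 B/param):      ~{weight_gb(70.0, 2.0):.0f} GB")   # ~140 GB: too big for both
print(f"8x7B @ f16 (2.0 B/param):      ~{weight_gb(46.7, 2.0):.0f} GB")   # ~93 GB: 128 GB only
print(f"70B  @ Q4_K_M (~0.56 B/param): ~{weight_gb(70.0, 0.56):.0f} GB")  # ~39 GB: fits either way
```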
Any way to update LoRAs in real-time or near real-time? | 1 | [removed] | 2023-12-15T07:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/18iuuar/any_way_to_update_loras_in_realtime_or_near/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iuuar | false | null | t3_18iuuar | /r/LocalLLaMA/comments/18iuuar/any_way_to_update_loras_in_realtime_or_near/ | false | false | self | 1 | null |
How to View Logits from LLMs Before Token Selection? | 4 | Hi everyone! I'm delving into the workings of large language models (LLMs) and have a specific question about accessing and viewing logits. I'm curious if there's a way to see the list of tokens and their associated logits before the model selects one and discards the others. This information could be crucial for certain applications, like ensuring correct JSON syntax or preventing the generation of stop tokens. Has anyone here worked on or come across a method or tool that allows for such detailed viewing and manipulation of LLM outputs? Any insights or pointers to relevant resources would be greatly appreciated!
I have already looked into logit-bias, but what I really need is the ability to see the probabilities for each token, similar to OpenAI's playground. | 2023-12-15T07:25:30 | https://www.reddit.com/r/LocalLLaMA/comments/18ium1o/how_to_view_logits_from_llms_before_token/ | IAmBackForMore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ium1o | false | null | t3_18ium1o | /r/LocalLLaMA/comments/18ium1o/how_to_view_logits_from_llms_before_token/ | false | false | self | 4 | null |
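One straightforward way is to run the model through Hugging Face transformers and inspect the raw logits for the last position before any sampling happens; a minimal sketch follows (the model name is just an example). GGUF-based backends expose something similar (e.g. the llama.cpp server's top-probability option), though the exact flag names are worth double-checking per backend.

```python
# Minimal sketch with Hugging Face transformers: inspect the full next-token
# distribution before any sampling happens. Model name is just an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

inputs = tok('{"name": "', return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]      # raw scores for every vocab token
probs = torch.softmax(logits, dim=-1)

# Show the 10 most likely next tokens with their logits and probabilities
top = torch.topk(probs, k=10)
for p, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tok.decode([idx])!r:>15}  logit={logits[idx].item():.2f}  prob={p:.4f}")
```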
Running mistral + qlora adapter through llama.cpp server: f16 model error | 4 | I ran a qlora finetuning of OpenHermes 2.5 (mistral) using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), converted the adapter file to ggml using `convert-lora-to-ggml.py`, and then tried loading the model through llama.cpp server like so:
llama.cpp$ ./build/bin/server -m ./models/7B/openhermes-2.5-mistral-7b.Q4_K_M.gguf --lora ./models/7B/qlora-out/ggml-adapter-model.bin -t 10 -a "openhermes-2.5" -c 4352 -ngl 99 -n 512 -np 2 --host 0.0.0.0 --port 8585
Got the error:
llama_apply_lora_from_file_internal: applying lora adapter from './models/7B/qlora-out/ggml-adapter-model.bin' - please wait ...
llama_apply_lora_from_file_internal: r = 8, alpha = 16, scaling = 2.00
llama_model_apply_lora_from_file: failed to apply lora adapter: llama_apply_lora_from_file_internal: error: the simultaneous use of LoRAs and GPU acceleration is only supported for f16 models. dest_t->type: 12
llama_init_from_gpt_params: error: failed to apply lora adapter
Do I need to change the `convert-lora-to-ggml.py` to write out in f16, or is there some config I need to use for loading with server? I am trying to avoid merging the models in order to having the flexibility of switching out one adapter with another etc. Thanks for any pointer! | 2023-12-15T07:06:07 | https://www.reddit.com/r/LocalLLaMA/comments/18iubos/running_mistral_qlora_adapter_through_llamacpp/ | samikr_2020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iubos | false | null | t3_18iubos | /r/LocalLLaMA/comments/18iubos/running_mistral_qlora_adapter_through_llamacpp/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'suoiIV1qrzwyUWjtJWIL1ro-QtjZ8Z-xsXvzSf6FJ64', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KTesrp6w-7akNP9SsEO_SqO01GKpa4TfDLBD5w4mQe4.jpg?width=108&crop=smart&auto=webp&s=a64a70a8e062ff5763ee9b3af935c26cea27928a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KTesrp6w-7akNP9SsEO_SqO01GKpa4TfDLBD5w4mQe4.jpg?width=216&crop=smart&auto=webp&s=ef314f2b711779059050e4ba9f9aa9bee87c0679', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KTesrp6w-7akNP9SsEO_SqO01GKpa4TfDLBD5w4mQe4.jpg?width=320&crop=smart&auto=webp&s=75907c8056de25eda69f3d8dbcd43fbaeb06bf69', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KTesrp6w-7akNP9SsEO_SqO01GKpa4TfDLBD5w4mQe4.jpg?width=640&crop=smart&auto=webp&s=eab213fb004a5199fafd9b1b627aaeef032da22e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KTesrp6w-7akNP9SsEO_SqO01GKpa4TfDLBD5w4mQe4.jpg?width=960&crop=smart&auto=webp&s=4a9918c24eee6b9c86c1874e75c1dfd6db830b70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KTesrp6w-7akNP9SsEO_SqO01GKpa4TfDLBD5w4mQe4.jpg?width=1080&crop=smart&auto=webp&s=1ebb81cce776a6778a162342e8dad0666e326f24', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KTesrp6w-7akNP9SsEO_SqO01GKpa4TfDLBD5w4mQe4.jpg?auto=webp&s=724036247cb01659f9018c00201a1384721df78c', 'width': 1200}, 'variants': {}}]} |
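One workaround people use (at the cost of the adapter hot-swapping the post wants to keep) is merging the QLoRA adapter into the fp16 base model with peft and then converting the merged model to GGUF as usual; it may also be worth checking whether llama.cpp's --lora-base option, pointed at an f16 copy of the base model, avoids the error without merging. A minimal merge sketch, with paths as placeholders:

```python
# Minimal sketch (paths are placeholders): merge the QLoRA adapter into the
# fp16 base model with peft, then convert/quantize the merged model as usual.
# This trades away adapter hot-swapping but sidesteps the f16 LoRA restriction.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "teknium/OpenHermes-2.5-Mistral-7B", torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, "./qlora-out").merge_and_unload()

merged.save_pretrained("./openhermes-qlora-merged")
AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B").save_pretrained(
    "./openhermes-qlora-merged"
)
# Afterwards, convert ./openhermes-qlora-merged to GGUF and quantize as usual.
```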
How to better architect generative agents | 3 | Hello everyone, I am a beginner. After seeing projects and papers related to generative agents, I became fascinated by this technology.
I hope I can ask you some questions and look forward to your guidance!
1. If I want to try to build a generative agent myself, what mature framework technologies are available on the market today?
2. In the memory retrieval module, what should the algorithm do to best adapt to a variety of different perceptual environments and models? | 2023-12-15T06:47:07 | https://www.reddit.com/r/LocalLLaMA/comments/18iu1gl/how_to_better_architect_generative_agents/ | elenafeng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18iu1gl | false | null | t3_18iu1gl | /r/LocalLLaMA/comments/18iu1gl/how_to_better_architect_generative_agents/ | false | false | self | 3 | null |
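On the second question, one concrete reference point is the retrieval function from the "Generative Agents" paper (Park et al., 2023), which ranks memories by a combination of recency, importance, and relevance. A minimal sketch follows; the weights and decay constant are illustrative, not the paper's exact values.

```python
# Minimal sketch of generative-agent memory retrieval: rank memories by a
# weighted sum of recency (exponential decay), importance (1-10, usually
# rated by the LLM when the memory is stored), and relevance (embedding
# similarity to the current query). Constants below are illustrative.
import math
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float      # 1..10, e.g. rated by the LLM at write time
    last_access: float     # timestamp in hours
    embedding: list        # vector from any embedding model

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-9)

def retrieve(memories, query_embedding, now, k=3, decay=0.995):
    def score(m):
        recency = decay ** (now - m.last_access)           # 0..1
        importance = m.importance / 10.0                    # 0..1
        relevance = cosine(m.embedding, query_embedding)    # -1..1
        return recency + importance + relevance             # equal weights here
    return sorted(memories, key=score, reverse=True)[:k]
```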