| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Building a simple RAG, some questions. | 1 | [removed] | 2024-01-14T09:38:49 | https://www.reddit.com/r/LocalLLaMA/comments/196c98t/building_a_simple_rag_some_questions/ | Redbeardybeard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196c98t | false | null | t3_196c98t | /r/LocalLLaMA/comments/196c98t/building_a_simple_rag_some_questions/ | false | false | self | 1 | null |
Is it necessary for adapters to match the precision of their base models? | 2 | I’m curious about the potential error rates when using different quantization formats in adapters. Specifically, my question revolves around the outcomes of training an adapter on an NF4 base model using QLoRA, and then integrating this adapter with a GPTQ model. How would this compare to the results obtained from training an adapter directly on a GPTQ model and then combining it with the same GPTQ model?
It is important to keep the adapters separate from the models because I am using the following inference server (https://github.com/predibase/lorax). The server allows me to run multiple adapters concurrently on the same base model, which is why I cannot merge the adapters with the base models and quantize down to GPTQ. | 2024-01-14T09:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/196c1fm/is_it_necessary_for_adapters_to_match_the/ | FrostyContribution35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196c1fm | false | null | t3_196c1fm | /r/LocalLLaMA/comments/196c1fm/is_it_necessary_for_adapters_to_match_the/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'tLIoT0aildUP4dxzahdHEru6gFvzdV03ymwaIZvsdKA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MJOias903TcWW98C_GeEACdkUUT6W_YU3PmW68RuXXA.jpg?width=108&crop=smart&auto=webp&s=3386ec54fbfdbdfef6eca457f83420e433fb3f9b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MJOias903TcWW98C_GeEACdkUUT6W_YU3PmW68RuXXA.jpg?width=216&crop=smart&auto=webp&s=5d77b6f84f502610f256087300f01f32db895b58', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MJOias903TcWW98C_GeEACdkUUT6W_YU3PmW68RuXXA.jpg?width=320&crop=smart&auto=webp&s=ab34c3283d152b44f625f2971a328ede5c4c77e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MJOias903TcWW98C_GeEACdkUUT6W_YU3PmW68RuXXA.jpg?width=640&crop=smart&auto=webp&s=b131408c14adef059b775629dbc8c591368f30e6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MJOias903TcWW98C_GeEACdkUUT6W_YU3PmW68RuXXA.jpg?width=960&crop=smart&auto=webp&s=e303c14b99334d6bbf018a6ede2c44d6180bcfa3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MJOias903TcWW98C_GeEACdkUUT6W_YU3PmW68RuXXA.jpg?width=1080&crop=smart&auto=webp&s=be8dd76fa45a9ee63a7cb2f549dc1719d1e22484', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MJOias903TcWW98C_GeEACdkUUT6W_YU3PmW68RuXXA.jpg?auto=webp&s=f89144a321e7cb7625eb10fcc127bc7e02584330', 'width': 1200}, 'variants': {}}]} |
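For reference, per-request adapter routing against a LoRAX server looks roughly like the sketch below. The endpoint shape and field names follow my reading of the LoRAX docs and may differ by version; the adapter repo names are hypothetical.

```python
# Minimal sketch of querying a LoRAX server with per-request adapters.
# Endpoint and field names are from my reading of the LoRAX docs and may
# differ by version -- treat this as illustrative, not authoritative.
import requests

LORAX_URL = "http://localhost:8080/generate"  # assumed default port

def generate(prompt: str, adapter_id: str | None = None) -> str:
    """Send one request; adapter_id picks the LoRA applied on top of the base."""
    params = {"max_new_tokens": 128}
    if adapter_id is not None:
        params["adapter_id"] = adapter_id  # e.g. a HF repo containing the adapter
    resp = requests.post(LORAX_URL, json={"inputs": prompt, "parameters": params})
    resp.raise_for_status()
    return resp.json()["generated_text"]

# Two calls, two different adapters, one base model kept in VRAM:
print(generate("Explain NF4 quantization.", adapter_id="my-org/qlora-nf4-adapter"))
print(generate("Explain GPTQ.", adapter_id="my-org/gptq-tuned-adapter"))
```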
Simple dockerized solution to run, serve and generate llamafile locally | 3 | I just finished this for my personal local trials. Let me know if you like it (or not) :).
[https://github.com/dominikj111/LLM/tree/main/Llamafile](https://github.com/dominikj111/LLM/tree/main/Llamafile) | 2024-01-14T09:04:32 | https://www.reddit.com/r/LocalLLaMA/comments/196brok/simple_dockerized_solution_to_run_serve_and/ | domino_master | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196brok | false | null | t3_196brok | /r/LocalLLaMA/comments/196brok/simple_dockerized_solution_to_run_serve_and/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '0YlSf-tj5c1i3ZbjJUqYuta7ObsGd7Udq5ZlLkPLcYM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PaO3n_WB2kRPDnVsFCqIfMg3y1ETSQsL3zAP2qfjsw0.jpg?width=108&crop=smart&auto=webp&s=ca50633d5fdb26b7aa61b7eae73095666b9ba6e8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PaO3n_WB2kRPDnVsFCqIfMg3y1ETSQsL3zAP2qfjsw0.jpg?width=216&crop=smart&auto=webp&s=60a97254124fd9316701c03367260e3c2d59d5f0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PaO3n_WB2kRPDnVsFCqIfMg3y1ETSQsL3zAP2qfjsw0.jpg?width=320&crop=smart&auto=webp&s=ba183fe3ee44d67ec09b8bfa956540ae349c4a25', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PaO3n_WB2kRPDnVsFCqIfMg3y1ETSQsL3zAP2qfjsw0.jpg?width=640&crop=smart&auto=webp&s=919ddb9a6cb049c33394debc60d6cc8ba47b139f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PaO3n_WB2kRPDnVsFCqIfMg3y1ETSQsL3zAP2qfjsw0.jpg?width=960&crop=smart&auto=webp&s=b3ec3dc130905409ef80f01b118c582d62a08c3f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PaO3n_WB2kRPDnVsFCqIfMg3y1ETSQsL3zAP2qfjsw0.jpg?width=1080&crop=smart&auto=webp&s=852d6c2c4e6cb6607c59878cac55284bbf97ae78', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PaO3n_WB2kRPDnVsFCqIfMg3y1ETSQsL3zAP2qfjsw0.jpg?auto=webp&s=bcf422945c839efeeec55fbf4673fe1dcc42512c', 'width': 1200}, 'variants': {}}]} |
Question about used GFX cards | 1 | [removed] | 2024-01-14T08:15:11 | https://www.reddit.com/r/LocalLLaMA/comments/196b2aq/question_about_used_gfx_cards/ | PiComprehensive314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196b2aq | false | null | t3_196b2aq | /r/LocalLLaMA/comments/196b2aq/question_about_used_gfx_cards/ | false | false | self | 1 | null |
Local LLM movement feels like early days of PCs vs Mainframes | 204 | I wasn't alive when PCs were first being invented, but reading this reddit makes me think of what I've read about those pioneering days. Just as personal computers began to challenge the dominance of large, centralized mainframes, we're now witnessing the starting point of a similar evolution with LLMs.
In the early days of computing, mainframes were massive, expensive, and exclusively controlled by large corporations and institutions. They were the gatekeepers of computing power, much like how, until recently, LLMs have been predominantly cloud-based, managed by a few powerful entities. Access to these models required significant resources, both in terms of computational power and cost, making them less accessible to the average user or small developer.
I'm excited to see where it all leads. One thing I wonder about: what product/solution is going to be the "Microsoft" (i.e., the company that really builds out the platform for average consumers) of the local LLM space?
​ | 2024-01-14T06:47:33 | https://www.reddit.com/r/LocalLLaMA/comments/1969pir/local_llm_movement_feels_like_early_days_of_pcs/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1969pir | false | null | t3_1969pir | /r/LocalLLaMA/comments/1969pir/local_llm_movement_feels_like_early_days_of_pcs/ | false | false | self | 204 | null |
Benchmarking various models on 24GB VRAM | 63 | Here is my benchmark of various models on the following setup:
- i7 13700KF
- 128GB RAM (@4800)
- single 3090 with 24GB VRAM
​
I will be using koboldcpp on Windows 10. I wanted to do this benchmark before configuring Arch Linux. I think results may be very different on different software and operating systems.
Each time I will instruct the model the following way: "tell me how large language models work".
​
I hope you will find it useful. You can see what is possible with a home PC and where the limits are.
​
Please note that my commands may be suboptimal: on Windows, some VRAM may be used by apps other than the AI, so I should try to fit the LLM below 24GB. You can limit VRAM usage by decreasing contextsize.
​
​
**mistral-7b-instruct-v0.2.Q5_K_M.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 33 verified\mistral-7b-instruct-v0.2.Q5_K_M.gguf
we can see the following stats:
*llm_load_tensors: system memory used = 86.05 MiB*
*llm_load_tensors: VRAM used = 4807.05 MiB*
*llm_load_tensors: offloaded 33/33 layers to GPU*
*llama_new_context_with_model: total VRAM used: 6398.21 MiB (model: 4807.05 MiB, context: 1591.16 MiB)*
(this means that all layers are in VRAM, and RAM is barely used)
​
and the result is:
*ContextLimit: 480/2048, Processing:0.35s (16.7ms/T), Generation:7.30s (15.9ms/T), Total:7.65s (**59.98T/s**)*
​
**solar-10.7b-instruct-v1.0.Q5_K_M.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 49 verified\solar-10.7b-instruct-v1.0.Q5_K_M.gguf
stats:
*llm_load_tensors: offloaded 49/49 layers to GPU*
*llama_new_context_with_model: total VRAM used: 9267.46 MiB (model: 7159.30 MiB, context: 2108.16 MiB)*
result:
*ContextLimit: 379/2048, Processing:0.34s (16.2ms/T), Generation:7.97s (22.3ms/T), Total:8.31s (**43.08T/s**)*
​
**orcamaid-v3-13b-32k.Q5_K_M.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 41 verified\orcamaid-v3-13b-32k.Q5_K_M.gguf
stats:
*llm_load_tensors: offloaded 41/41 layers to GPU*
*llama_new_context_with_model: total VRAM used: 15849.12 MiB (model: 8694.21 MiB, context: 7154.91 MiB)*
result:
*ContextLimit: 514/2048, Processing:0.28s (12.3ms/T), Generation:10.61s (21.6ms/T), Total:10.89s (**45.07T/s**)*
​
**mixtral_11bx2_moe_19b.Q5_K_M.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 49 verified\mixtral_11bx2_moe_19b.Q5_K_M.gguf
stats:
*llm_load_tensors: offloaded 49/49 layers to GPU*
*llama_new_context_with_model: total VRAM used: 14661.72 MiB (model: 12525.55 MiB, context: 2136.17 MiB)*
result:
*ContextLimit: 408/2048, Processing:0.52s (24.6ms/T), Generation:21.38s (55.3ms/T), Total:21.90s (**17.67T/s**)*
​
**beyonder-4x7b-v2.Q5_K_M.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 33 verified\beyonder-4x7b-v2.Q5_K_M.gguf
stats:
*llm_load_tensors: offloaded 33/33 layers to GPU*
*llama_new_context_with_model: total VRAM used: 17396.23 MiB (model: 15777.05 MiB, context: 1619.18 MiB)*
result:
*ContextLimit: 530/2048, Processing:0.71s (33.7ms/T), Generation:17.04s (33.5ms/T), Total:17.75s (**28.67T/s**)*
​
Now is the moment when we need to limit the number of gpulayers, because fully loading the following models exceeds 24GB of VRAM.
​
**deepseek-coder-33b-instruct.Q5_K_M.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 58 verified\deepseek-coder-33b-instruct.Q5_K_M.gguf
stats:
*llm_load_tensors: offloaded 58/63 layers to GPU*
*llama_new_context_with_model: total VRAM used: 23484.38 MiB (model: 20647.35 MiB, context: 2837.04 MiB)*
result:
*ContextLimit: 623/2048, Processing:0.83s (43.7ms/T), Generation:56.40s (93.4ms/T), Total:57.23s (**10.55T/s**)*
​
**yi-34b-v3.Q5_K_M.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 53 verified\yi-34b-v3.Q5_K_M.gguf
stats:
*llm_load_tensors: offloaded 53/61 layers to GPU*
*llama_new_context_with_model: total VRAM used: 22530.75 MiB (model: 19855.28 MiB, context: 2675.47 MiB)*
result:
*ContextLimit: 551/2048, Processing:0.87s (43.5ms/T), Generation:77.09s (145.2ms/T), Total:77.96s (**6.81T/s**)*
​
**mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 22 verified\mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf
stats:
*llm_load_tensors: offloaded 22/33 layers to GPU*
*llama_new_context_with_model: total VRAM used: 22297.13 MiB (model: 21001.06 MiB, context: 1296.07 MiB)*
result:
*ContextLimit: 447/2048, Processing:3.21s (153.0ms/T), Generation:48.42s (113.7ms/T), Total:51.63s (**8.25T/s**)*
​
With larger models, to keep these numbers sane, we need to stop using the q5 quant and go lower.
​
**wizardlm-70b-v1.0.Q4_K_M.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 42 verified\wizardlm-70b-v1.0.Q4_K_M.gguf
stats:
*llm_load_tensors: system memory used = 18945.83 MiB*
*llm_load_tensors: offloaded 42/81 layers to GPU*
*llama_new_context_with_model: total VRAM used: 23012.97 MiB (model: 20557.69 MiB, context: 2455.29 MiB)*
result:
*ContextLimit: 560/2048, Processing:4.20s (182.6ms/T), Generation:336.31s (626.3ms/T), Total:340.51s (**1.58T/s**)*
​
**goliath-120b.Q2_K.gguf**
command line: bin\koboldcpp.exe --launch --contextsize 8192 --usecublas --gpulayers 58 verified\goliath-120b.Q2_K.gguf
stats:
*llm_load_tensors: system memory used = 27414.24 MiB*
*llm_load_tensors: offloaded 58/138 layers to GPU*
*llama_new_context_with_model: total VRAM used: 22888.04 MiB (model: 19915.75 MiB, context: 2972.29 MiB)*
result:
*ContextLimit: 869/2048, Processing:11.49s (499.6ms/T), Generation:794.33s (938.9ms/T), Total:805.82s (**1.05T/s**)*
​
What are your results on your setup?
Do you have any tips on how to improve my command lines? | 2024-01-14T05:56:13 | https://www.reddit.com/r/LocalLLaMA/comments/1968uvj/benchmarking_various_models_on_24gb_vram/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1968uvj | false | null | t3_1968uvj | /r/LocalLLaMA/comments/1968uvj/benchmarking_various_models_on_24gb_vram/ | false | false | self | 63 | null |
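For readers trying to reproduce this: a back-of-the-envelope heuristic for picking a starting --gpulayers value before trial and error. All figures are rough; the ~40 GiB weight size for the 70B Q4_K_M is inferred from the per-layer VRAM stats above.

```python
# Heuristic: assume the model's weights are spread evenly across its layers,
# and reserve some VRAM for the KV cache and other apps. Illustrative only.

def estimate_gpulayers(model_gib: float, n_layers: int,
                       vram_gib: float = 24.0, reserve_gib: float = 3.0) -> int:
    """How many layers fit if each layer costs model_gib / n_layers GiB."""
    per_layer = model_gib / n_layers
    return min(n_layers, int((vram_gib - reserve_gib) // per_layer))

# ~40 GiB of Q4_K_M 70B weights spread over 81 layers, on a 24 GiB card:
print(estimate_gpulayers(40.0, 81))  # -> 42, matching the value used above
```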
I am a Strange Dataset: Metalinguistic Tests for Language Models | 19 | **Paper**: [https://arxiv.org/abs/2401.05300](https://arxiv.org/abs/2401.05300)
**Code and dataset**: [https://github.com/TristanThrush/i-am-a-strange-dataset](https://github.com/TristanThrush/i-am-a-strange-dataset)
**Abstract**:
>Statements involving metalinguistic self-reference ("This paper has six sections.") are prevalent in many domains. Can large language models (LLMs) handle such language? In this paper, we present "**I am a Strange Dataset**", a new dataset for addressing this question. There are two subtasks: generation and verification. In generation, models continue statements like "The penultimate word in this sentence is" (where a correct continuation is "is"). In verification, models judge the truth of statements like "The penultimate word in this sentence is sentence." (false). We also provide minimally different metalinguistic non-self-reference examples to complement the main dataset by probing for whether models can handle metalinguistic language at all. The dataset is hand-crafted by experts and validated by non-expert annotators. We test a variety of open-source LLMs (7B to 70B parameters) as well as closed-source LLMs through APIs. All models perform close to chance across both subtasks and even on the non-self-referential metalinguistic control data, though we find some steady improvement with model scale. GPT 4 is the only model to consistently do significantly better than chance, and it is still only in the 60% range, while our untrained human annotators score well in the 89-93% range. The dataset and evaluation toolkit are available at [this https URL](https://github.com/TristanThrush/i-am-a-strange-dataset). | 2024-01-14T05:51:02 | https://www.reddit.com/r/LocalLLaMA/comments/1968rqx/i_am_a_strange_dataset_metalinguistic_tests_for/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1968rqx | false | null | t3_1968rqx | /r/LocalLLaMA/comments/1968rqx/i_am_a_strange_dataset_metalinguistic_tests_for/ | false | false | self | 19 | null |
REBUS: A Robust Evaluation Benchmark of Understanding Symbols | 6 | **Paper**: [https://arxiv.org/abs/2401.05604](https://arxiv.org/abs/2401.05604)
**Code**: [https://github.com/cvndsh/rebus](https://github.com/cvndsh/rebus)
**Dataset**: [https://huggingface.co/datasets/cavendishlabs/rebus](https://huggingface.co/datasets/cavendishlabs/rebus)
**Project page**: [https://cavendishlabs.org/rebus/](https://cavendishlabs.org/rebus/)
**Abstract**:
>We propose a new benchmark evaluating the performance of multimodal large language models on rebus puzzles. The dataset covers 333 original examples of image-based wordplay, cluing 13 categories such as movies, composers, major cities, and food. To achieve good performance on the benchmark of identifying the clued word or phrase, models must combine image recognition and string manipulation with hypothesis testing, multi-step reasoning, and an understanding of human cognition, making for a complex, multimodal evaluation of capabilities. We find that proprietary models such as GPT-4V and Gemini Pro significantly outperform all other tested models. However, even the best model has a final accuracy of just 24%, highlighting the need for substantial improvements in reasoning. Further, models rarely understand all parts of a puzzle, and are almost always incapable of retroactively explaining the correct answer. Our benchmark can therefore be used to identify major shortcomings in the knowledge and reasoning of multimodal large language models. | 2024-01-14T05:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/1968ln2/rebus_a_robust_evaluation_benchmark_of/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1968ln2 | false | null | t3_1968ln2 | /r/LocalLLaMA/comments/1968ln2/rebus_a_robust_evaluation_benchmark_of/ | false | false | self | 6 | null |
How to accurately answer from a large corpus of text (3000000 words)? 🤷🤷 | 1 | [removed] | 2024-01-14T05:37:25 | https://www.reddit.com/r/LocalLLaMA/comments/1968jfq/how_to_accurately_answer_from_a_large_corpus_of/ | suddenlife2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1968jfq | false | null | t3_1968jfq | /r/LocalLLaMA/comments/1968jfq/how_to_accurately_answer_from_a_large_corpus_of/ | false | false | self | 1 | null |
Training a LoRA on Unstructured Data | 15 | Hey all. I have accumulated a large set of books and magazines going back to the 1800s on a particular topic. I am looking to train a LoRA against LLaMA to be able to pull useful data from this pile of sometimes inaccurate information. Do I need to just pull everything out into text files and manually label them, or ??? What would be the best way to turn this into trainable data? Is there a workflow everyone is using or recommends?
My plan was to run OCR on them, get them into TXT format, then fire up Ooga and see how it goes, but if someone has worked this out already it would save me a lot of tinkering.
2nd, can I then take these two and make a MoE out of the LLaMA and the LoRA, or do I just have one model to deal with? I have been following the Airoboros MoE implementation as a framework.
Also, I have 2x 3090 24GB to work with and 96GB of RAM on my local PC, but have access to as much cloud compute as needed. I have been working on ML forecasting and SLAM mostly, but am directing some efforts in this direction for a while. | 2024-01-14T04:49:29 | https://www.reddit.com/r/LocalLLaMA/comments/1967p9z/training_a_lora_on_unstructured_data/ | DorianGre | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1967p9z | false | null | t3_1967p9z | /r/LocalLLaMA/comments/1967p9z/training_a_lora_on_unstructured_data/ | false | false | self | 15 | null |
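A minimal sketch of the OCR-to-raw-text step described above, assuming the scans are already page images and using pytesseract as the OCR engine (an assumption; any engine that emits plain text would do):

```python
# Walk a folder of page images, OCR each one, and concatenate into a single
# raw-text file suitable for a raw-text LoRA trainer (e.g. ooba's).
from pathlib import Path
from PIL import Image
import pytesseract

SRC = Path("scans")        # one folder of page images per book/magazine
OUT = Path("corpus.txt")   # single raw-text file for training

with OUT.open("w", encoding="utf-8") as out:
    for page in sorted(SRC.rglob("*.png")):
        text = pytesseract.image_to_string(Image.open(page))
        out.write(text.strip() + "\n\n")  # blank line between pages
```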
V8 and Mixed Precision | 1 | When you do mixed-precision calculations, you can adopt an intermediate calculation result for a 2- (or 3-) time mixed-precision pass at the same time.
If it has a logic graph like this, can you adopt JavaScript V8-style data management for multiple intermediate mixed-precision calculations, hence getting much more computing power than before?
There are direct connect, turnaround reconnect and turnaround connect at the same time. Does this logic fulfill the requirements of making a V8, just like how JavaScript's V8 did, for getting more computing power? | 2024-01-14T04:37:06 | https://www.reddit.com/r/LocalLLaMA/comments/1967hl0/v8_and_mixed_precision/ | Hot-Highlight8842 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1967hl0 | false | null | t3_1967hl0 | /r/LocalLLaMA/comments/1967hl0/v8_and_mixed_precision/ | false | false | self | 1 | null |
Any good Chinese models for writing dialogue? | 7 | I like writing fiction, and have tried in the past to write fiction stories in the style of Chinese fantasy fiction web novels, but I really suck at writing natural dialogue. I'm wondering if there's any good Chinese LLMs that can write dialogue for me based on instructions about what the dialogue should contain.
I have a good CPU (7950X3D) and lots of RAM (64 GB), but only 12 GB of VRAM (4070 Ti). From my understanding, this means I can only run larger models if they are GGUF? Or can other formats also be split between RAM and VRAM? | 2024-01-14T04:13:46 | https://www.reddit.com/r/LocalLLaMA/comments/19672lv/any_good_chinese_models_for_writing_dialogue/ | Daniel_H212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19672lv | false | null | t3_19672lv | /r/LocalLLaMA/comments/19672lv/any_good_chinese_models_for_writing_dialogue/ | false | false | self | 7 | null |
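On the RAM/VRAM question: with GGUF via llama.cpp you choose how many layers go to the GPU and the rest stay in system RAM. A minimal llama-cpp-python sketch; the model file and layer count are placeholders, not recommendations:

```python
# Split a GGUF model between 12 GB VRAM and system RAM by capping the
# number of GPU-offloaded layers; the remainder runs on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="models/yi-34b-chat.Q4_K_M.gguf",  # hypothetical file
    n_gpu_layers=30,   # whatever fits in 12 GB VRAM; the rest stays in RAM
    n_ctx=4096,
)
out = llm("写一段两个修士之间的对话：", max_tokens=256)
print(out["choices"][0]["text"])
```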
How's my home server setup looking? Seeking advice and tips! | 1 | [removed] | 2024-01-14T04:09:02 | https://www.reddit.com/r/LocalLLaMA/comments/1966zkm/hows_my_home_server_setup_looking_seeking_advice/ | MicBeckie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1966zkm | false | null | t3_1966zkm | /r/LocalLLaMA/comments/1966zkm/hows_my_home_server_setup_looking_seeking_advice/ | false | false | self | 1 | null |
Is it possible to upload an llm to an website and then make the requests using Silly Tavern? | 1 | I once found a post here saying that the best method and lowest resource heavy would be upload to a certain website and then use the API in silly tavern. Where can I do this? I can't find that post anymore | 2024-01-14T04:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/1966wxh/is_it_possible_to_upload_an_llm_to_an_website_and/ | Cyberzos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1966wxh | false | null | t3_1966wxh | /r/LocalLLaMA/comments/1966wxh/is_it_possible_to_upload_an_llm_to_an_website_and/ | false | false | self | 1 | null |
Banned from OpenAI API without explanation, any alternatives? | 1 | [removed] | 2024-01-14T03:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/1966dth/banned_from_openai_api_without_explanation_any/ | TheDuhhh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1966dth | false | null | t3_1966dth | /r/LocalLLaMA/comments/1966dth/banned_from_openai_api_without_explanation_any/ | false | false | self | 1 | null |
What are the datasets used for mistral-instruct? | 7 | The mistral blog post on their website says "to show the generalization capabilities of Mistral 7B, we fine-tuned it on instruction datasets publicly available on HuggingFace. No tricks, no proprietary data." That's pretty good obviously, but I can't find any details about what these datasets are. Is there any insight into this?
It is also never quite clear to me if there is much difference between fine-tunes with "instruct" in the name vs "chat". For example, llama2-chat models and the mistral-instruct model seem to handle both instruction and chat scenarios quite well. Has anyone had any challenges using the mistral-instruct model for general chat usage? | 2024-01-14T03:19:28 | https://www.reddit.com/r/LocalLLaMA/comments/19662o7/what_are_the_datasets_used_for_mistralinstruct/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19662o7 | false | null | t3_19662o7 | /r/LocalLLaMA/comments/19662o7/what_are_the_datasets_used_for_mistralinstruct/ | false | false | self | 7 | null |
I just made a table of Awesome-LLM-Papers-Toward-AGI | 1 | 2024-01-14T02:14:10 | https://github.com/shure-dev/Awesome-LLM-Papers-Toward-AGI | Common-Ad-1772 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1964t8e | false | null | t3_1964t8e | /r/LocalLLaMA/comments/1964t8e/i_just_made_a_table_of_awesomellmpaperstowardagi/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'pqQCoG0rb3FkCTikSaYJeYWu0PnaO79JkkahCrg1DkE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d9nk-zm6vJwdVcKpZCHE1h4jWdk5J67LW0vZcmLskdQ.jpg?width=108&crop=smart&auto=webp&s=361ed69b6a8471612e1ccd58a714d4cf4c5344a5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d9nk-zm6vJwdVcKpZCHE1h4jWdk5J67LW0vZcmLskdQ.jpg?width=216&crop=smart&auto=webp&s=7c9273b102db0d53d00aea3a6fff4e5e27f1af96', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d9nk-zm6vJwdVcKpZCHE1h4jWdk5J67LW0vZcmLskdQ.jpg?width=320&crop=smart&auto=webp&s=4b65ba71b394e78fe1b692f1da7debda008fa201', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d9nk-zm6vJwdVcKpZCHE1h4jWdk5J67LW0vZcmLskdQ.jpg?width=640&crop=smart&auto=webp&s=1b3d3b8308b990546d5ab40ca80a7a29848f99ce', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d9nk-zm6vJwdVcKpZCHE1h4jWdk5J67LW0vZcmLskdQ.jpg?width=960&crop=smart&auto=webp&s=0095d40ce53a23c26ce40436945449dc767755d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d9nk-zm6vJwdVcKpZCHE1h4jWdk5J67LW0vZcmLskdQ.jpg?width=1080&crop=smart&auto=webp&s=c4b0f0df1c0b245920a7f3df80d0b0092b34ac46', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d9nk-zm6vJwdVcKpZCHE1h4jWdk5J67LW0vZcmLskdQ.jpg?auto=webp&s=8d27c89e1294e15f9f43007c5d707c5484599ddd', 'width': 1200}, 'variants': {}}]} | ||
Is a V100 32GB worth it? | 1 | I'm getting a V100 32GB for around $500 and wanted to ask: is it really worth it? I haven't seen any informative/helpful threads about it so far.
Another thing: will it work on an AM4 motherboard, with a secondary GPU for ports?
What would be the pros and cons over a 3090? | 2024-01-14T01:56:44 | https://www.reddit.com/r/LocalLLaMA/comments/1964gqa/is_v100_32gb_worth_it/ | yetanotherbeardedone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1964gqa | false | null | t3_1964gqa | /r/LocalLLaMA/comments/1964gqa/is_v100_32gb_worth_it/ | false | false | self | 1 | null |
Best DB structure for my use-case... | 1 | Let's say I have a Postgres database which consists of the tables: classes, teachers, orders, reviews.
Is it better to create a vector column in each table and store the embedding in the actual table for each? Or should I create a single table called 'vectors' that stores a 'type' (classes, teachers), an 'id', and the embedding, i.e. one long vertical table of vectors?
Sorry if this is a noob question. Wondering which is best for performance and long term maintainability.
​
Thanks! | 2024-01-14T01:19:24 | https://www.reddit.com/r/LocalLLaMA/comments/1963p5n/best_db_structure_for_my_usecase/ | trojans10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1963p5n | false | null | t3_1963p5n | /r/LocalLLaMA/comments/1963p5n/best_db_structure_for_my_usecase/ | false | false | self | 1 | null |
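One way to sketch the second option (a single vectors table) with the pgvector extension; table and column names here are illustrative assumptions, not a recommendation, and the alternative is simply `ALTER TABLE classes ADD COLUMN embedding vector(384)` per table:

```python
# Single shared vectors table keyed by (source_type, source_id), queried
# with pgvector's nearest-neighbour operator.
import psycopg2

DDL = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS vectors (
    id          bigserial PRIMARY KEY,
    source_type text   NOT NULL,   -- 'classes', 'teachers', ...
    source_id   bigint NOT NULL,
    embedding   vector(384)        -- match your embedding model's dimension
);
"""

conn = psycopg2.connect("dbname=app")  # assumed connection string
with conn, conn.cursor() as cur:
    cur.execute(DDL)
    # nearest neighbours for a query embedding, restricted to one type:
    cur.execute(
        "SELECT source_id FROM vectors "
        "WHERE source_type = %s ORDER BY embedding <-> %s::vector LIMIT 5",
        ("classes", "[0.1, 0.2, 0.3]"),  # real 384-dim query vector goes here
    )
    print(cur.fetchall())
```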
USB-AI | 1 | Has anyone explored or heard of deploying an AI model on a USB-sized computing device, akin to portable computers on flash drives? Given the strides in miniaturizing technology, such as Raspberry Pi's ability to handle lightweight AI tasks, it seems feasible. I'm intrigued by the potential of plug-and-play AI solutions in such a compact form. Does anyone have insights or experiences in this area? The possibilities for portable and accessible AI applications seem promising. | 2024-01-14T00:58:09 | https://www.reddit.com/r/LocalLLaMA/comments/196396u/usbai/ | AI_Heresy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196396u | false | null | t3_196396u | /r/LocalLLaMA/comments/196396u/usbai/ | false | false | self | 1 | null |
How do people merge models to form an MoE? | 4 | This guy says he formed a Mixture of Experts (MoE) with [**mergekit**](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
* [**openchat/openchat-3.5-1210**](https://huggingface.co/openchat/openchat-3.5-1210)
* [**beowolx/CodeNinja-1.0-OpenChat-7B**](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [**maywell/PiVoT-0.1-Starling-LM-RP**](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [**WizardLM/WizardMath-7B-V1.1**](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
How does he do that?
[https://huggingface.co/mlabonne/Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2)
​ | 2024-01-14T00:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/1962vvz/how_does_people_merge_models_to_form_moe/ | Dense-Smf-6032 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1962vvz | false | null | t3_1962vvz | /r/LocalLLaMA/comments/1962vvz/how_does_people_merge_models_to_form_moe/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'pMGFIbiK07kT202sIqJjsHYK9YuE5MxeJTXIE_PYBFc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LPXvR8Dhokq610o2LI5H6U2mXgGYPa4n18BNrCNBu8U.jpg?width=108&crop=smart&auto=webp&s=8bdfdaca73bbc1d5c6ad1b1e7c9772692b071f8e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LPXvR8Dhokq610o2LI5H6U2mXgGYPa4n18BNrCNBu8U.jpg?width=216&crop=smart&auto=webp&s=6edf5088803eb427e691f03c79dfefe753c0ec3a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LPXvR8Dhokq610o2LI5H6U2mXgGYPa4n18BNrCNBu8U.jpg?width=320&crop=smart&auto=webp&s=5b374e20edebecf1cf29e1b6ae92c5b413b5fc01', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LPXvR8Dhokq610o2LI5H6U2mXgGYPa4n18BNrCNBu8U.jpg?width=640&crop=smart&auto=webp&s=7ee305b0d58e89d365d987dedd4847d21fa5b025', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LPXvR8Dhokq610o2LI5H6U2mXgGYPa4n18BNrCNBu8U.jpg?width=960&crop=smart&auto=webp&s=aef43e71bad070efe1e61ed44da5862bdbec6c04', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LPXvR8Dhokq610o2LI5H6U2mXgGYPa4n18BNrCNBu8U.jpg?width=1080&crop=smart&auto=webp&s=2d17d9808ac206e4d4dbd6b316ba827bd0f97b1b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LPXvR8Dhokq610o2LI5H6U2mXgGYPa4n18BNrCNBu8U.jpg?auto=webp&s=e33a9d6e2759e23856ba3d4cbec759011b28458e', 'width': 1200}, 'variants': {}}]} |
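As far as I understand the mixtral branch of mergekit, these "frankenMoE" merges are driven by a YAML config listing a base model plus one expert per source model, each with positive prompts used to initialize the router gates. A hedged sketch; the config keys, prompt lists, and CLI name are from memory of the docs, so verify them:

```python
# Write a mergekit-moe style config for the four experts named above, then
# run the CLI (mixtral branch). All gate prompts here are illustrative.
config = """\
base_model: openchat/openchat-3.5-1210
gate_mode: hidden            # router init from hidden states of the prompts
dtype: bfloat16
experts:
  - source_model: openchat/openchat-3.5-1210
    positive_prompts: ["chat", "tell me", "explain"]
  - source_model: beowolx/CodeNinja-1.0-OpenChat-7B
    positive_prompts: ["code", "python", "function"]
  - source_model: maywell/PiVoT-0.1-Starling-LM-RP
    positive_prompts: ["roleplay", "story", "character"]
  - source_model: WizardLM/WizardMath-7B-V1.1
    positive_prompts: ["math", "solve", "reason"]
"""
with open("moe-config.yaml", "w") as f:
    f.write(config)
# then, on the mixtral branch:  mergekit-moe moe-config.yaml ./my-4x7b-moe
```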
Best MoE for <16GB CPU inference? | 1 | [removed] | 2024-01-14T00:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/1962g8x/best_moe_for_16gb_cpu_inference/ | Future_Might_8194 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1962g8x | false | null | t3_1962g8x | /r/LocalLLaMA/comments/1962g8x/best_moe_for_16gb_cpu_inference/ | false | false | self | 1 | null |
Local file access | 1 | What interface do you use to allow your LLM of choice to access text/Word/PDF documents on your local computer? | 2024-01-14T00:05:08 | https://www.reddit.com/r/LocalLLaMA/comments/196258f/local_file_access/ | hmmqzaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196258f | false | null | t3_196258f | /r/LocalLLaMA/comments/196258f/local_file_access/ | false | false | self | 1 | null |
Bagel-Hermes-2x34b, possibly the best RP model I've used | 71 | https://huggingface.co/Weyaxi/Bagel-Hermes-2x34b
Been playing with this for a couple of days now and it is blowing me away. Long context (it's based on Yi so up to 200k), top notch output, wonderfully coherent responses. I think 2x34b might be the future for mid-size models.
As an example, I asked for a sci fi RP character (literally, that was the prompt). It gave me this:
[Name]: Epsilon [Species/Race]: Humanoid Cyborg (Human + Robotic Enhancements)
[Appearance]: Modified body parts including cybernetic eyes & ears, metallic arms & legs, holographic tattoos covering exposed metal portions, synthetic skin pigment changed from pale to deep purple, platinum hair with blue highlights, black bodysuit made of flexible armor plating, knee-high boots with retractable blades housed inside, age appears mid-twenties but could potentially be centuries old.
Epsilon hails from a post-apocalyptic world where humans have merged with technology to survive harsh conditions after Earth's destruction. Her memories are fragmented, but she knows that before becoming a cyborg, she was once part of a thriving artistic community. Now, she exists in the wastelands, scavenging for parts to sustain herself while painting murals on derelict structures. She yearns for connection but fears others might reject her due to her mechanical nature.
Her journey involves discovering what remains of humanity and finding meaning beyond survival. Along the way, she may encounter sentient AI, rogue robots, factions battling over resources, hidden enclaves housing survivors, etc. The possibilities are endless!
In this scenario, the main conflict arises from reconciling human emotionality with mechanically enhanced bodies. How does one remain human despite being half machine? Can true artistry exist among machinery? Such questions define Epsilon's narrative arc. Hope this fits the bill! | 2024-01-13T23:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/1961uic/bagelhermes2x34b_possibly_the_best_rp_model_ive/ | Amajiro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1961uic | false | null | t3_1961uic | /r/LocalLLaMA/comments/1961uic/bagelhermes2x34b_possibly_the_best_rp_model_ive/ | false | false | self | 71 | {'enabled': False, 'images': [{'id': '1G9CZ_mc0-dj60Zer6G2IoFXs_f9seE-MmXP-LJ0ZDA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qCIteiPbbMYn6XrwRLxkzj7YV6aTukttBcU-pIndsQg.jpg?width=108&crop=smart&auto=webp&s=000c74f7eee04cb8ba2e38d2f5058bbb23d2a103', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qCIteiPbbMYn6XrwRLxkzj7YV6aTukttBcU-pIndsQg.jpg?width=216&crop=smart&auto=webp&s=1eb7ea80e3f204fdc312910a0d74d701319df635', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qCIteiPbbMYn6XrwRLxkzj7YV6aTukttBcU-pIndsQg.jpg?width=320&crop=smart&auto=webp&s=57c8e69eac6e367a911dcac54df2931b3b1424c4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qCIteiPbbMYn6XrwRLxkzj7YV6aTukttBcU-pIndsQg.jpg?width=640&crop=smart&auto=webp&s=2315d82617071fc7d48c324a10ebca493bc8767d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qCIteiPbbMYn6XrwRLxkzj7YV6aTukttBcU-pIndsQg.jpg?width=960&crop=smart&auto=webp&s=23e39a0562ef110ac4803b18542a909aa0ef785c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qCIteiPbbMYn6XrwRLxkzj7YV6aTukttBcU-pIndsQg.jpg?width=1080&crop=smart&auto=webp&s=f1a946f594bcd7355e31fbced449edc0a970747f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qCIteiPbbMYn6XrwRLxkzj7YV6aTukttBcU-pIndsQg.jpg?auto=webp&s=a911113798b3097db54ac0b491d1b76236a24ac1', 'width': 1200}, 'variants': {}}]} |
Is there a way to overwrite ROCm on **Windows** to make it support the 6700 XT? | 1 | On Linux you can just overwrite ROCm from gfx1031 to gfx1030, and that makes the 6700 XT work fine without much issue. ROCm Windows support is very new, so I can't find any resources on it. Does anyone have a way of doing the same to make the 6700 XT work in such an environment? | 2024-01-13T23:10:53 | https://www.reddit.com/r/LocalLLaMA/comments/1960ze3/is_there_a_way_to_overwrite_rocm_on_windows_to/ | RealAbd121 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1960ze3 | false | null | t3_1960ze3 | /r/LocalLLaMA/comments/1960ze3/is_there_a_way_to_overwrite_rocm_on_windows_to/ | false | false | self | 1 | null |
Long-term memory with Vector Databases implementation | 8 | I have a few questions about implementing a vector database in an AI chatbot/assistant LLM to extend its memory.
1. Would we save each message/conversation turn as a vector, or would we compile multiple turns and use that as a vector (sometimes convos go on for multiple turns, and that might be better for context)?
2. Would I still use a normal history (aka the 5 most recent messages) in addition to the vector database?
3. How would it play into an instruction/response prompt; aka, are there any prompt templates for vector databases?
This is for a chatbot/AI assistant. | 2024-01-13T23:09:00 | https://www.reddit.com/r/LocalLLaMA/comments/1960xt5/longterm_memory_with_vector_databases/ | ll_Teto_ll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1960xt5 | false | null | t3_1960xt5 | /r/LocalLLaMA/comments/1960xt5/longterm_memory_with_vector_databases/ | false | false | self | 8 | null |
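A minimal sketch touching all three questions, assuming Chroma as the vector store: store each completed exchange as one document (a few turns per vector often retrieves better than single messages), keep the normal sliding window of recent turns, and splice the retrieved memories into the prompt above the recent history. The prompt layout is one possible template, not a standard:

```python
# Long-term memory via a vector store plus a normal recent-history window.
import chromadb

client = chromadb.Client()
memory = client.get_or_create_collection("memory")

def remember(turn_id: str, exchange: str) -> None:
    memory.add(ids=[turn_id], documents=[exchange])  # Chroma embeds for us

def build_prompt(user_msg: str, recent: list[str], k: int = 3) -> str:
    hits = memory.query(query_texts=[user_msg], n_results=k)
    recalled = "\n".join(hits["documents"][0])
    return (
        f"Relevant past conversation:\n{recalled}\n\n"   # vector recall
        "Recent messages:\n" + "\n".join(recent) +       # normal history
        f"\nUser: {user_msg}\nAssistant:"
    )
```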
Ollama cost-effectiveness versus OpenAI | 5 | see [https://gist.github.com/jerzydziewierz/3b4a169c8d7cba89e18f613b32c3f52b](https://gist.github.com/jerzydziewierz/3b4a169c8d7cba89e18f613b32c3f52b)
TLDR: if you assume that the performance of `ollama run dolphin-mixtral` is on par with `gpt-4-1106-preview`, and you have enough content to run through, then mixtral is ~11x cheaper -- and you get the privacy on top.
Of course, some will claim that `gpt-4-1106-preview` is somehow better than `dolphin-mixtral` and hence such a comparison is moot. I can't argue with that.
​ | 2024-01-13T22:54:27 | https://www.reddit.com/r/LocalLLaMA/comments/1960m41/ollama_costeffectiveness_versus_openai/ | Treeeant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1960m41 | false | null | t3_1960m41 | /r/LocalLLaMA/comments/1960m41/ollama_costeffectiveness_versus_openai/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]} |
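The gist's comparison reduces to simple arithmetic. Every number below is an illustrative assumption (hardware running cost, local throughput, API output pricing at the time), so plug in your own:

```python
# Rough cost-per-token comparison, local vs API. All inputs are assumptions.
gpu_usd_per_hour = 0.30          # assumed power/amortization of a local box
local_tokens_per_sec = 35        # assumed dolphin-mixtral throughput
api_usd_per_1k_output = 0.03     # assumed gpt-4-1106-preview output rate

local_usd_per_1k = gpu_usd_per_hour / (local_tokens_per_sec * 3600) * 1000
print(f"local: ${local_usd_per_1k:.4f} / 1k tokens")   # ~$0.0024
print(f"api:   ${api_usd_per_1k_output:.4f} / 1k tokens")
print(f"ratio: {api_usd_per_1k_output / local_usd_per_1k:.1f}x")  # ~12.6x
```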
Anybody tried DPO training for fixing (non-English) language to produce correct sentences? | 1 | Hi, has anybody tried it? I wondered if DPO can only be used to choose between the 2 results (one bad and one good) from the model.
I was thinking: when the model outputs a bad result, can we just create our own version of a good result? Or can we only use good results from the model?
​
Thanks | 2024-01-13T22:15:34 | https://www.reddit.com/r/LocalLLaMA/comments/195zr9e/anybody_tried_dpo_training_for_fixing_nonenglish/ | x4080 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195zr9e | false | null | t3_195zr9e | /r/LocalLLaMA/comments/195zr9e/anybody_tried_dpo_training_for_fixing_nonenglish/ | false | false | self | 1 | null |
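As I understand DPO, the chosen/rejected pair does not have to be two model samples; a hand-written correction can serve as "chosen" against the model's flawed output as "rejected". A sketch of the triple format that trl's DPOTrainer consumes (field names per the trl docs; the example sentences are made up):

```python
# Build a preference dataset where "rejected" is the model's bad output and
# "chosen" is our own corrected version.
from datasets import Dataset

pairs = [
    {
        "prompt":   "Translate to Indonesian: 'Good morning, everyone.'",
        "rejected": "Selamat pagi semua orang-orang.",  # model's awkward output
        "chosen":   "Selamat pagi, semuanya.",          # our hand-written fix
    },
    # ... many more pairs mined from the model's bad generations
]

train_dataset = Dataset.from_list(pairs)  # feed this to trl's DPOTrainer
```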
WhiteRabbitNeo-33B-v1-GGUF | 23 | Since TheBloke's are broken right now, I made and uploaded my own for anyone interested.
I'm sure TheBloke will fix them soon, but this is here if you wanna try the model in the meantime.
I only did Q4_K_M and Q8 because of that.
https://huggingface.co/mantafloppy/WhiteRabbitNeo-33B-v1-GGUF | 2024-01-13T21:58:53 | https://www.reddit.com/r/LocalLLaMA/comments/195zdel/whiterabbitneo33bv1gguf/ | mantafloppy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195zdel | false | null | t3_195zdel | /r/LocalLLaMA/comments/195zdel/whiterabbitneo33bv1gguf/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'R3MVT-ODMBoREaaiaRdDZDKuiU4lHrkHtUIQwFoUX-k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0g5H1pmzap7lfks1rJMmySl3vY2JUTovyoWnYtqSKwM.jpg?width=108&crop=smart&auto=webp&s=ddea45ad1051d2916f8684da90533502766c26bc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0g5H1pmzap7lfks1rJMmySl3vY2JUTovyoWnYtqSKwM.jpg?width=216&crop=smart&auto=webp&s=2f83169bbb815fcca7880cf14c8e39a7ba0d16db', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0g5H1pmzap7lfks1rJMmySl3vY2JUTovyoWnYtqSKwM.jpg?width=320&crop=smart&auto=webp&s=61326ca31f86a13243ae68a8c2b291ab5c7f8bdb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0g5H1pmzap7lfks1rJMmySl3vY2JUTovyoWnYtqSKwM.jpg?width=640&crop=smart&auto=webp&s=7a61e0e07e1ed7d35f404dc51c0c0be93b8548fa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0g5H1pmzap7lfks1rJMmySl3vY2JUTovyoWnYtqSKwM.jpg?width=960&crop=smart&auto=webp&s=c408add4bc8acea2926013bed26527684bf523b1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0g5H1pmzap7lfks1rJMmySl3vY2JUTovyoWnYtqSKwM.jpg?width=1080&crop=smart&auto=webp&s=946533274f06dd90ed5c1d102f798a54ae5ec936', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0g5H1pmzap7lfks1rJMmySl3vY2JUTovyoWnYtqSKwM.jpg?auto=webp&s=cc7ab230a9305bb302ff5ceef660fc404b562681', 'width': 1200}, 'variants': {}}]} |
Claude thinks Mixtral was made by Anthropic AI and is not widely available to the public yet! | 1 | 2024-01-13T21:53:56 | nderstand2grow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 195z9de | false | null | t3_195z9de | /r/LocalLLaMA/comments/195z9de/claude_thinks_mixtral_was_made_by_anthropic_ai/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'yVrFwLf2k4DG5F_luN7O9axmb7A3hJaeDHcXIfuXUOU', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/5gx8rjzr3acc1.png?width=108&crop=smart&auto=webp&s=36157be1877d4c9e89edbee4061e10cddffb0d02', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/5gx8rjzr3acc1.png?width=216&crop=smart&auto=webp&s=2c229708682b514ad77866455e2ca88af3e576a0', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/5gx8rjzr3acc1.png?width=320&crop=smart&auto=webp&s=d86ef1719d0f2e1967a5fdff9c73e324bb059b7c', 'width': 320}, {'height': 570, 'url': 'https://preview.redd.it/5gx8rjzr3acc1.png?width=640&crop=smart&auto=webp&s=db1a4f3b1ad2ed04007e04cb4105b7df01f9809b', 'width': 640}, {'height': 855, 'url': 'https://preview.redd.it/5gx8rjzr3acc1.png?width=960&crop=smart&auto=webp&s=f070ebaa39911ca61f58382705335ffb38811b31', 'width': 960}, {'height': 961, 'url': 'https://preview.redd.it/5gx8rjzr3acc1.png?width=1080&crop=smart&auto=webp&s=4d5520a5d34d070e9a77dff25cead7fe8e618a30', 'width': 1080}], 'source': {'height': 1402, 'url': 'https://preview.redd.it/5gx8rjzr3acc1.png?auto=webp&s=bc26c9ca5755c87f436e77c7e900b37fe8f263cb', 'width': 1574}, 'variants': {}}]} | ||
GPU or API hosting based in EU | 1 | [removed] | 2024-01-13T21:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/195yrfm/gpu_or_api_hosting_based_in_eu/ | raeudigerhund | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195yrfm | false | null | t3_195yrfm | /r/LocalLLaMA/comments/195yrfm/gpu_or_api_hosting_based_in_eu/ | false | false | self | 1 | null |
Free multipurpose AI app! | 1 | [removed] | 2024-01-13T21:30:44 | https://www.reddit.com/r/LocalLLaMA/comments/195yqmg/free_multipurpose_ai_app/ | Any_Imagination_2026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195yqmg | false | null | t3_195yqmg | /r/LocalLLaMA/comments/195yqmg/free_multipurpose_ai_app/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6GCl6lNefD6PqSjKuZeaU493H-w3w4odM2Kk5Ah8eoQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]} |
Ryzen 8000 Might Let You Run 70B Models Faster than a *4090 | 1 | As I'm sure everyone knows by now, one of the largest limiting factors in this hobby is VRAM. AMD recently announced their new Ryzen 8000 lineup featuring Ryzen AI. This differs from XDNA in that it utilizes system memory rather than any sort of VRAM. As the highest-end consumer cards only have 24GB of VRAM, this means that you could very well get better performance out of Ryzen AI than a dedicated graphics card with larger models that can be held in your system's RAM.
If you're not familiar with the technology at play here's a short explanation. If you want to run an LLM that's 48GB in size and your GPU has 24GB of VRAM, for every token your GPU computes, your GPU needs to read 24GB twice from either your RAM or SSD/HDD (depending on your cache settings). This is a long process and, on fast GPUs with lower memory (like high-end consumer GPUs), is the most time-intensive process of running large models. Your GPU might very well spend more time loading the model's parts than actually computing anything. With Ryzen AI though, we can skip the whole process of loading the model into VRAM and just keep it in RAM while getting the acceleration of a low-mid ranged GPU. The actual computational power of the NPU compared to a GPU is lower, but since you don't need to load the model the end result might be faster. This is why you often see very large models running at similar speeds to CPUs with lots of memory when compared to high-end GPUs.
The best part, however, is that system RAM is upgradable, meaning you could get a 4x48GB DDR5 kit and run Llama 70B all in RAM. It's important to note that it'll still be dreadfully slow. I'm not suggesting it'll be otherwise, but we're on the right path! If this trend keeps up, and we continue to get more advanced TPUs/NPUs in CPUs, we won't be limited by VRAM anymore while also avoiding the high costs of dedicated GPUs!
As of right now, Ryzen AI is only available on Desktop Ryzen 8000 APUs. It's important to note that mobile/embedded variants of Ryzen 8000 have XDNA rather than Ryzen AI. XDNA is a discrete enclosed system and while it can take advantage of system memory, it's mapped IO rather than direct.
Again, a huge disclaimer. An integrated TPU's computation speed is going to be much slower compared to a dedicated GPU, but if VRAM is your limiting factor, it might very well be worth a try. It'll never replace GPUs in the datacenter, but it might allow consumers to run larger models. I admit its ability to outperform a 4090 is heavily dependent on the model you're running, but it's very possible. Thanks for coming to my TED talk. | 2024-01-13T21:25:59 | https://www.reddit.com/r/LocalLLaMA/comments/195ymqp/ryzen_8000_might_let_you_run_70b_models_faster/ | greysourcecode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195ymqp | false | null | t3_195ymqp | /r/LocalLLaMA/comments/195ymqp/ryzen_8000_might_let_you_run_70b_models_faster/ | false | false | self | 1 | null |
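The post's bandwidth argument in numbers, as a rough upper bound: token generation is memory-bandwidth-bound, needing roughly one full pass over the active weights per token. The bandwidth and model-size figures below are ballpark, not measurements:

```python
# Upper-bound tokens/s if one token requires streaming all weights once.
def tokens_per_sec(bandwidth_gbs: float, model_gb: float) -> float:
    return bandwidth_gbs / model_gb

print(tokens_per_sec(83.2, 40))    # dual-channel DDR5-5200, ~40GB 70B Q4 -> ~2 t/s
print(tokens_per_sec(1008.0, 40))  # RTX 4090 GDDR6X, same model -> ~25 t/s
```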
Have you felt that GPT-4 has suddenly become very resistant to whatever you ask it to do? After the turbo model launch. | 138 | Have you all felt that, after the turbo model launch, GPT-4 has become very resistant to providing starting code or general code for the random implementation ideas you ask it about? Even for a sample, it starts saying things like "it is a complex task and would require a lot of developers to build a fully fledged thing". Earlier, it provided whatever sort of method it felt was justifiable for the question as well as the user. Have you felt it? | 2024-01-13T21:24:01 | https://www.reddit.com/r/LocalLLaMA/comments/195yl70/have_you_felt_that_gpt_4_has_suddenly_become_very/ | Mohit_Singh_Pawar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195yl70 | false | null | t3_195yl70 | /r/LocalLLaMA/comments/195yl70/have_you_felt_that_gpt_4_has_suddenly_become_very/ | false | false | self | 138 | null |
Is there anything like this that uses text generation webui as backend? | 1 | Ran across this https://github.com/apeatling/ollama-voice-mac
Would like to mess around with something like that while I wait for alltalk extension to support ST.
Has anyone come across something similar that uses text gen webui as the back end?
Windows would also be needed with my current setup if possible. Cheers. | 2024-01-13T21:23:47 | https://www.reddit.com/r/LocalLLaMA/comments/195yl0s/is_there_anything_like_this_that_uses_text/ | hAReverv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195yl0s | false | null | t3_195yl0s | /r/LocalLLaMA/comments/195yl0s/is_there_anything_like_this_that_uses_text/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'p-Mdlu6d32LMwzwgXLpfHK-NgtnQ4pXJHx0ujgrRePY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bJ0s4zSXesK4pdihrQ9soxv9Sxyf_sVdv736mLpTTb0.jpg?width=108&crop=smart&auto=webp&s=e7a3600993d22f4783831f66383f5114a7368c7b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bJ0s4zSXesK4pdihrQ9soxv9Sxyf_sVdv736mLpTTb0.jpg?width=216&crop=smart&auto=webp&s=dac941179c70d49e8652fc880938db45218b2d8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bJ0s4zSXesK4pdihrQ9soxv9Sxyf_sVdv736mLpTTb0.jpg?width=320&crop=smart&auto=webp&s=ecc7faa309e3b5d5b56997835c479ee7ad2c2bb9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bJ0s4zSXesK4pdihrQ9soxv9Sxyf_sVdv736mLpTTb0.jpg?width=640&crop=smart&auto=webp&s=060a66addaac73a577b0260a641581b561b348c2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bJ0s4zSXesK4pdihrQ9soxv9Sxyf_sVdv736mLpTTb0.jpg?width=960&crop=smart&auto=webp&s=8d0db03820349cf51f2e7677e09ec8dc1fbccc30', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bJ0s4zSXesK4pdihrQ9soxv9Sxyf_sVdv736mLpTTb0.jpg?width=1080&crop=smart&auto=webp&s=60255c8a42260a57f4c067ea6d14dbc6659e8a19', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bJ0s4zSXesK4pdihrQ9soxv9Sxyf_sVdv736mLpTTb0.jpg?auto=webp&s=a60c861eb4880209173deada7af0bbb2170b2f37', 'width': 1200}, 'variants': {}}]} |
What is Rabbit R1? Everything you need to know | 1 | 2024-01-13T20:29:00 | https://www.techrytr.in/2024/01/what-is-rabbit-r1-everything-you-need-to-know.html | Gnaneshwar_Gaddam | techrytr.in | 1970-01-01T00:00:00 | 0 | {} | 195xcbf | false | null | t3_195xcbf | /r/LocalLLaMA/comments/195xcbf/what_is_rabbit_r1_everything_you_need_to_know/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NMaNO5wvFBPMrS9_PUJOXsRdFVWHCZphbEI_u4SbaTM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lsv7Fh_tk2ZJzcqwycsGKS8e5TFQhW0vskd-mBrIFr0.jpg?width=108&crop=smart&auto=webp&s=246ebe17a500ebe908a8befa8600813a545df4f1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/lsv7Fh_tk2ZJzcqwycsGKS8e5TFQhW0vskd-mBrIFr0.jpg?width=216&crop=smart&auto=webp&s=ed74533fd28535ffda48bad0643515be414876c3', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/lsv7Fh_tk2ZJzcqwycsGKS8e5TFQhW0vskd-mBrIFr0.jpg?width=320&crop=smart&auto=webp&s=c79d5869958ac594fdcc910f2c7ae2dded7dbd72', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/lsv7Fh_tk2ZJzcqwycsGKS8e5TFQhW0vskd-mBrIFr0.jpg?width=640&crop=smart&auto=webp&s=0c827a4567bf3af545b4cdb7f9d5acae998f5dd9', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/lsv7Fh_tk2ZJzcqwycsGKS8e5TFQhW0vskd-mBrIFr0.jpg?width=960&crop=smart&auto=webp&s=9b6119e38a298259961fa278079fdd02b38499a3', 'width': 960}], 'source': {'height': 574, 'url': 'https://external-preview.redd.it/lsv7Fh_tk2ZJzcqwycsGKS8e5TFQhW0vskd-mBrIFr0.jpg?auto=webp&s=088497eada704282a3e54982e669b755f86447b8', 'width': 1024}, 'variants': {}}]} | ||
Converting language models to multimodal? | 2 | Would it be possible to take a model like LLaMA and make it multimodal? | 2024-01-13T20:19:27 | https://www.reddit.com/r/LocalLLaMA/comments/195x4ic/converting_language_models_to_multimodal/ | Ok-Buy268 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195x4ic | false | null | t3_195x4ic | /r/LocalLLaMA/comments/195x4ic/converting_language_models_to_multimodal/ | false | false | self | 2 | null |
LLLM - Local Large Language Model | 1 | Well maybe the Large is just dropped and replaced with Local so the term remains LLM. | 2024-01-13T20:19:05 | https://www.reddit.com/r/LocalLLaMA/comments/195x475/lllm_local_large_language_model/ | Full_Operation_9865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195x475 | false | null | t3_195x475 | /r/LocalLLaMA/comments/195x475/lllm_local_large_language_model/ | false | false | self | 1 | null |
Free multipurpose AI app! | 1 | [removed] | 2024-01-13T20:15:22 | https://www.reddit.com/r/LocalLLaMA/comments/195x15v/free_multipurpose_ai_app/ | Any_Imagination_2026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195x15v | false | null | t3_195x15v | /r/LocalLLaMA/comments/195x15v/free_multipurpose_ai_app/ | false | false | self | 1 | null |
Free multipurpose AI app! | 1 | [removed] | 2024-01-13T20:14:16 | https://www.reddit.com/r/LocalLLaMA/comments/195x06v/free_multipurpose_ai_app/ | Any_Imagination_2026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195x06v | false | null | t3_195x06v | /r/LocalLLaMA/comments/195x06v/free_multipurpose_ai_app/ | false | false | self | 1 | null |
The Llama Hitchiking Guide to Local LLMs | 122 | 2024-01-13T20:12:34 | https://osanseviero.github.io/hackerllama/blog/posts/hitchhiker_guide/ | hackerllama | osanseviero.github.io | 1970-01-01T00:00:00 | 0 | {} | 195wyru | false | null | t3_195wyru | /r/LocalLLaMA/comments/195wyru/the_llama_hitchiking_guide_to_local_llms/ | false | false | 122 | {'enabled': False, 'images': [{'id': 'Pkw91FgCRMIgcT-Xd0y_dVrW6OttyQXQXnozPgvYdV0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=108&crop=smart&auto=webp&s=0ee63ee16da7127b408536e0613f7dd629666b5e', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=216&crop=smart&auto=webp&s=3da6f3b597fc80c56fed07f391109e985e5be70a', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=320&crop=smart&auto=webp&s=c011d5857d767af5cf8bb74b3974d8a786596d8e', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=640&crop=smart&auto=webp&s=aa13295d526b6874673d064c8f833bac8c7fb99e', 'width': 640}, {'height': 498, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=960&crop=smart&auto=webp&s=a91478a5dfe7e1d2ea951764deebbd4e4def67b8', 'width': 960}, {'height': 560, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=1080&crop=smart&auto=webp&s=64c6268e3e48594364ee2b29be5526fc39ca594a', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?auto=webp&s=2791b5f0a711c9dd3189335434b6967dd43bb2f6', 'width': 1170}, 'variants': {}}]} | ||
The Llama Hitchiking Guide to Local LLMs (by @osanseviero at HuggingFace) | 2 | 2024-01-13T20:07:36 | https://osanseviero.github.io/hackerllama/blog/posts/hitchhiker_guide/ | galambalazs | osanseviero.github.io | 1970-01-01T00:00:00 | 0 | {} | 195wumr | false | null | t3_195wumr | /r/LocalLLaMA/comments/195wumr/the_llama_hitchiking_guide_to_local_llms_by/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'Pkw91FgCRMIgcT-Xd0y_dVrW6OttyQXQXnozPgvYdV0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=108&crop=smart&auto=webp&s=0ee63ee16da7127b408536e0613f7dd629666b5e', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=216&crop=smart&auto=webp&s=3da6f3b597fc80c56fed07f391109e985e5be70a', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=320&crop=smart&auto=webp&s=c011d5857d767af5cf8bb74b3974d8a786596d8e', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=640&crop=smart&auto=webp&s=aa13295d526b6874673d064c8f833bac8c7fb99e', 'width': 640}, {'height': 498, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=960&crop=smart&auto=webp&s=a91478a5dfe7e1d2ea951764deebbd4e4def67b8', 'width': 960}, {'height': 560, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?width=1080&crop=smart&auto=webp&s=64c6268e3e48594364ee2b29be5526fc39ca594a', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/ojY05b4A4PDNQQy-fUjMrkZUThnDtGIDmyLgGKXxM1g.jpg?auto=webp&s=2791b5f0a711c9dd3189335434b6967dd43bb2f6', 'width': 1170}, 'variants': {}}]} | ||
finetuning on a finetuned vs base model | 1 | [removed] | 2024-01-13T19:37:36 | https://www.reddit.com/r/LocalLLaMA/comments/195w5c0/finetuning_on_a_finetuned_vs_base_model/ | imgonnacuhhhm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195w5c0 | false | null | t3_195w5c0 | /r/LocalLLaMA/comments/195w5c0/finetuning_on_a_finetuned_vs_base_model/ | false | false | self | 1 | null |
Jailbreak promts Ideas/ Alternatives for censorship or ban | 1 | [removed] | 2024-01-13T19:36:21 | https://www.reddit.com/r/LocalLLaMA/comments/195w49n/jailbreak_promts_ideas_alternatives_for/ | Horror_Echo6243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195w49n | false | null | t3_195w49n | /r/LocalLLaMA/comments/195w49n/jailbreak_promts_ideas_alternatives_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'W-pCh47ZuoTaTpwtRXWs755SzkkFQzHW6A1NS1MJI_A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=108&crop=smart&auto=webp&s=aa7b8a73d9f4825dcec8d2a7d8805a9c50369d0b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=216&crop=smart&auto=webp&s=96a387e3b3e91f000fc25d53f1c6557cfd455bcb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=320&crop=smart&auto=webp&s=c5ce1a81747b40b0e8234eb8a3c80296d7d99fb3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=640&crop=smart&auto=webp&s=1bd8d44f5f6385004a3c4eee1032f01d3df456f9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=960&crop=smart&auto=webp&s=7f1a5b2670f910b6635784a1f867157aaf4f9f70', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=1080&crop=smart&auto=webp&s=59bf35f672c4b4e9f72a5f2da763f919b49c40fe', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?auto=webp&s=6edb04ad94226e898c90b505438281c5c6ba7cf7', 'width': 1080}, 'variants': {}}]} |
Anarchist Language Models(?) | 1 | Hey peeps. I have been looking for a model that is full-blown conspiracy theorist. I keep asking different model/character combos about the CIA and they all love that evil agency. Do any models exist that aren't trained to trust the CIA (amongst other evil entities)? How hard would it be to train an AI Alex Jones (for lack of a more well-known disestablishmentarian), assuming one doesn't exist? | 2024-01-13T18:38:10 | https://www.reddit.com/r/LocalLLaMA/comments/195urjc/anarchist_language_models/ | science_gangsta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195urjc | false | null | t3_195urjc | /r/LocalLLaMA/comments/195urjc/anarchist_language_models/ | false | false | self | 1 | null |
Internal structure design of mistral? | 3 | Hey, I'm reading the paper on Mistral 7B, but it doesn't cover the internal structure design in detail (e.g., which modules there are and how they are connected and interact). Are there any good materials explaining this? Very curious about its internal structure and why such a small LLM works so well. | 2024-01-13T18:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/195upp1/internal_structure_design_of_mistral/ | OneConfusion3313 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195upp1 | false | null | t3_195upp1 | /r/LocalLLaMA/comments/195upp1/internal_structure_design_of_mistral/ | false | false | default | 3 | null |
Working on getting rid of alexa :) | 1 | A simple python script combining VAD, whisper and mixtral (through ollama). Next step is getting json responses and hooking up home assistant to the responses and TTS. | 2024-01-13T18:33:39 | https://v.redd.it/wlj0z9ta49cc1 | MarksGG | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 195unsr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wlj0z9ta49cc1/DASHPlaylist.mpd?a=1707762839%2COGMxMWY0NzE3MDVlOTg3YjAxM2E1MzhjYjBmZjlkMGZlOTJkZThlZWI2OWQ4MDkzNjI5NzFhYWI2NDNkNGRjZA%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/wlj0z9ta49cc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wlj0z9ta49cc1/HLSPlaylist.m3u8?a=1707762839%2CMzQ0MmRmZTFmNDE0NWZkNGI1OGJhMzhjYjFkNTk1M2Y3YTMyYTQzMjFmNmFjNDcyNmM2YzdmZTM2MzU5YjZhNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wlj0z9ta49cc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_195unsr | /r/LocalLLaMA/comments/195unsr/working_on_getting_rid_of_alexa/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aHJkeXZ3bmE0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aHJkeXZ3bmE0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=108&crop=smart&format=pjpg&auto=webp&s=c4628a2bfe26240d4869b5a4d770c24931198542', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aHJkeXZ3bmE0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=216&crop=smart&format=pjpg&auto=webp&s=8078c9b0f5f3d597c1f282e321075b745ca97a1b', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/aHJkeXZ3bmE0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=320&crop=smart&format=pjpg&auto=webp&s=697f33b178feec527ba590eff5674c3548c2d165', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/aHJkeXZ3bmE0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=640&crop=smart&format=pjpg&auto=webp&s=fa32a1d5c95d79ad09c57927a7917beaffe025f4', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/aHJkeXZ3bmE0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=960&crop=smart&format=pjpg&auto=webp&s=35274432612b1cc663441a39643ca4df82ac63ae', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aHJkeXZ3bmE0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=acfccd97a8b43a95bcc00284f9afb0f31dc8ef56', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/aHJkeXZ3bmE0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?format=pjpg&auto=webp&s=832a9e1cff88386cc147827d44cbf99b3f70265f', 'width': 1080}, 'variants': {}}]} |
Working on getting rid of alexa :) | 74 | A simple python script combining VAD, whisper and mixtral (through ollama). Next step is getting json responses and hooking up home assistant to the responses and TTS. | 2024-01-13T18:33:20 | https://v.redd.it/tfnadtl849cc1 | MarksGG | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 195unjh | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tfnadtl849cc1/DASHPlaylist.mpd?a=1707762822%2CMTM3YzJmNjdkNGIwOTg0YzNhNzQ0MmIxMDY5ODVjNWIxYTFmYTQzNTZiMGYzMmJmZmZhY2Q5N2Q4NjU2MDAxZg%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/tfnadtl849cc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/tfnadtl849cc1/HLSPlaylist.m3u8?a=1707762822%2CZTdjMDdiYzlmZDNmM2VmNjQwM2Y3ZjM2ODhjOTgxZGM0MjZkZTc2ZDliMmY4NTNhMWNlOGQ5YTA4NjFmMTJjZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tfnadtl849cc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_195unjh | /r/LocalLLaMA/comments/195unjh/working_on_getting_rid_of_alexa/ | false | false | 74 | {'enabled': False, 'images': [{'id': 'ZXR5YWE2aDc0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZXR5YWE2aDc0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=108&crop=smart&format=pjpg&auto=webp&s=65a0160a349ca9232cdf1a24958623662aa813d5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZXR5YWE2aDc0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=216&crop=smart&format=pjpg&auto=webp&s=98b63ab3b11526e4b212641194308567103656e3', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/ZXR5YWE2aDc0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=320&crop=smart&format=pjpg&auto=webp&s=fd367cc83e301f083273d3bd842e3de18460049a', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/ZXR5YWE2aDc0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=640&crop=smart&format=pjpg&auto=webp&s=426e0ff1b32e64b4419f3bbd5c808942215349c4', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/ZXR5YWE2aDc0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=960&crop=smart&format=pjpg&auto=webp&s=3415ec06cb1d1162ed5cc803139d66ae3e549c7d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZXR5YWE2aDc0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=212c3729f55a8e28bb028295c6bcc1a0faadef8c', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/ZXR5YWE2aDc0OWNjMfysyI2s-0mkCvi8GDBvcX1RFuVy-9754z8prlUSj3GA.png?format=pjpg&auto=webp&s=6646ba6fb921a344e2cf1d0480e7331f51d15289', 'width': 1080}, 'variants': {}}]} |
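The post above outlines a record-transcribe-respond loop. As a rough illustration of how those pieces can fit together, here is a minimal sketch that swaps the VAD stage for a fixed-length recording; the model names, port, and endpoint shape follow Ollama's and openai-whisper's documented defaults, while the durations and file names are placeholders rather than the author's actual script.

```python
# Minimal sketch of the described pipeline: record -> Whisper -> Mixtral via
# Ollama's REST API. The VAD stage is replaced by a fixed-length recording.
# Needs: openai-whisper, sounddevice, soundfile, requests.
import requests
import sounddevice as sd
import soundfile as sf
import whisper

SAMPLE_RATE = 16_000

def record(seconds: float = 5.0, path: str = "utterance.wav") -> str:
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
    sd.wait()  # block until the recording finishes
    sf.write(path, audio, SAMPLE_RATE)
    return path

def transcribe(path: str) -> str:
    model = whisper.load_model("base")
    return model.transcribe(path)["text"].strip()

def ask_mixtral(prompt: str) -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mixtral", "prompt": prompt, "stream": False},
        timeout=300,
    )
    r.raise_for_status()
    return r.json()["response"]

if __name__ == "__main__":
    heard = transcribe(record())
    print("You said:", heard)
    print("Assistant:", ask_mixtral(heard))
```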
Parallel Function calling to get the date and time, the current weather and perform a calculation at once, with nous-hermes-2-solar-10.7b.Q6_K.gguf | 36 | 2024-01-13T18:19:52 | https://www.reddit.com/gallery/195uc6t | FlowerPotTeaTime | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 195uc6t | false | null | t3_195uc6t | /r/LocalLLaMA/comments/195uc6t/parallel_function_calling_to_get_the_date_and/ | false | false | 36 | null | ||
Home Assistant -> LocalAI -> Home Assistant | 1 | [removed] | 2024-01-13T17:59:57 | https://www.reddit.com/r/LocalLLaMA/comments/195tvdj/home_assistant_localai_home_assistant/ | spoolin__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195tvdj | false | null | t3_195tvdj | /r/LocalLLaMA/comments/195tvdj/home_assistant_localai_home_assistant/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'A7PJ9miYs_IXDZXC-sfZ4g04QN6KK3luJbOlq3JHsYA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IHwmuZjCws8AsWgg3MdhpZtnbHoUbLkbxs_UQlEkHdU.jpg?width=108&crop=smart&auto=webp&s=68832c7f2a557dab9a684857b4d40f9a2159e9b9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/IHwmuZjCws8AsWgg3MdhpZtnbHoUbLkbxs_UQlEkHdU.jpg?width=216&crop=smart&auto=webp&s=c348060d48eed03eb1d3f82e2a99dcd31423cb2e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/IHwmuZjCws8AsWgg3MdhpZtnbHoUbLkbxs_UQlEkHdU.jpg?width=320&crop=smart&auto=webp&s=e31edb28609b107c24b814f6c806678f423e6d9b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/IHwmuZjCws8AsWgg3MdhpZtnbHoUbLkbxs_UQlEkHdU.jpg?auto=webp&s=b17cece5a50885b27d6eff7cb9e9ccf8b6785bdc', 'width': 480}, 'variants': {}}]} |
OpenDalle v1.1, VCoder, LongAnimateDiff & More! | 1 | [removed] | 2024-01-13T17:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/195tue7/opendalle_v11_vcoder_longanimatediff_more/ | dev-spot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195tue7 | false | null | t3_195tue7 | /r/LocalLLaMA/comments/195tue7/opendalle_v11_vcoder_longanimatediff_more/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WkFByXiAJsiSUgn0labIiYLldry5jVZkO3MqYaYNdO4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/PPTLsAEgarXJ4gqGpaVA5PmBQ8-tDjIturNml7N-S0I.jpg?width=108&crop=smart&auto=webp&s=d32744c22913d95be8dd191b5630b4bdb7b94e94', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/PPTLsAEgarXJ4gqGpaVA5PmBQ8-tDjIturNml7N-S0I.jpg?width=216&crop=smart&auto=webp&s=d29de28469491c0976cc2e6ec7bd87619eb3b947', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/PPTLsAEgarXJ4gqGpaVA5PmBQ8-tDjIturNml7N-S0I.jpg?width=320&crop=smart&auto=webp&s=52d96c0b5ecf99847e9089c83690485675299aba', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/PPTLsAEgarXJ4gqGpaVA5PmBQ8-tDjIturNml7N-S0I.jpg?auto=webp&s=208a488a3451c877e30386a3c1f3ec323ea59c22', 'width': 480}, 'variants': {}}]} |
Best model for writing that will run on 24GB vram? | 17 | Looking for a model that can write in different styles and formats / lengths (without any fine-tuning - or just a little fine-tuning) - and that will run decently on a 3090.
Any ideas? | 2024-01-13T17:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/195tsl8/best_model_for_writing_that_will_run_on_24gb_vram/ | yupignome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195tsl8 | false | null | t3_195tsl8 | /r/LocalLLaMA/comments/195tsl8/best_model_for_writing_that_will_run_on_24gb_vram/ | false | false | self | 17 | null |
Hey, I want to build a PC for running LLMs | 1 | [removed] | 2024-01-13T17:43:47 | https://www.reddit.com/r/LocalLLaMA/comments/195ti9t/hey_i_want_to_build_a_pc_for_running_llms/ | Mohamer01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195ti9t | false | null | t3_195ti9t | /r/LocalLLaMA/comments/195ti9t/hey_i_want_to_build_a_pc_for_running_llms/ | false | false | self | 1 | null |
LLM Assistant with function calling - Just a small test project I made | 79 | [https://github.com/Rivridis/LLM-Assistant](https://github.com/Rivridis/LLM-Assistant)
This is an attempt to make an LLM application that can call functions and search the internet, all without langchain or any complex setups. This is based purely on llama-cpp-python and can be run with any GGUF model that uses the ChatML prompting format.

I used RAG for searching the internet, using the DuckDuckGo search engine and appending the search results to the system prompt. All other functions are called based on the example given in the system prompt.

I even added a Gradio chat UI, and I shall find a way to add stuff like uploading documents and searching through them (maybe by using some vector database).
If anyone has any feature ideas or improvement to the code, feel free to let me know, as I plan to make this project even more feature rich, once I find a way to make the LLM choose multiple functions instead of the current one function at a time. | 2024-01-13T17:33:39 | https://www.reddit.com/r/LocalLLaMA/comments/195t9qi/llm_assistant_with_function_calling_just_a_small/ | Rivridis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195t9qi | false | null | t3_195t9qi | /r/LocalLLaMA/comments/195t9qi/llm_assistant_with_function_calling_just_a_small/ | false | false | self | 79 | {'enabled': False, 'images': [{'id': 'zZwmijuUPBXWDty_EOKxz0-FNzpomckZLoQZruCBxbs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hvhxZpYwe7D1gCLqiE7ZhzlY1NtVpujYUtsDLy7V4w4.jpg?width=108&crop=smart&auto=webp&s=f8fd9ba93446154cb9ad2222ec7bdee15c17180b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hvhxZpYwe7D1gCLqiE7ZhzlY1NtVpujYUtsDLy7V4w4.jpg?width=216&crop=smart&auto=webp&s=ea8a62501c9179be20eda7a891fca1c68e233a8c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hvhxZpYwe7D1gCLqiE7ZhzlY1NtVpujYUtsDLy7V4w4.jpg?width=320&crop=smart&auto=webp&s=d9f242849804dc898235a1876d8d4cc3564953ba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hvhxZpYwe7D1gCLqiE7ZhzlY1NtVpujYUtsDLy7V4w4.jpg?width=640&crop=smart&auto=webp&s=92f85a52608921cdc88c911d726450df057b3e93', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hvhxZpYwe7D1gCLqiE7ZhzlY1NtVpujYUtsDLy7V4w4.jpg?width=960&crop=smart&auto=webp&s=07b11820d59ddb6378493aa190d47e2f7cf1efe2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hvhxZpYwe7D1gCLqiE7ZhzlY1NtVpujYUtsDLy7V4w4.jpg?width=1080&crop=smart&auto=webp&s=99cadf798ddf8c53fe4094d0182fd91d3f2e131d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hvhxZpYwe7D1gCLqiE7ZhzlY1NtVpujYUtsDLy7V4w4.jpg?auto=webp&s=448ad214188a7ac917271c0fb916c1d4961f29b3', 'width': 1200}, 'variants': {}}]} |
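Since the post above describes appending DuckDuckGo results to the system prompt of a ChatML model, here is a hedged, minimal sketch of that pattern using llama-cpp-python and the duckduckgo-search package; the GGUF path, context size, and example question are placeholders, not the project's actual configuration.

```python
# Rough sketch of the search-to-system-prompt pattern the post describes,
# using llama-cpp-python's ChatML chat format and duckduckgo-search.
from duckduckgo_search import DDGS
from llama_cpp import Llama

def search_snippets(query: str, k: int = 3) -> str:
    # Fetch top-k web results and format them as bullet points.
    with DDGS() as ddgs:
        hits = ddgs.text(query, max_results=k)
    return "\n".join(f"- {h['title']}: {h['body']}" for h in hits)

llm = Llama(model_path="model.Q4_K_M.gguf", chat_format="chatml", n_ctx=4096)

question = "What did the James Webb telescope observe in 2023?"
system_prompt = (
    "You are a helpful assistant. Use these search results when answering:\n"
    + search_snippets(question)
)
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": question},
])
print(out["choices"][0]["message"]["content"])
```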
Best 7b-13b LLM for flashcard creation? | 3 | My specific use case is providing a short section of my study notes or a section of my textbook and asking it to create questions and answers that cover the entire information.
I’ve tinkered with some of the more popular 7b-13b models, and they either give too little information in the answers or they hallucinate extra information.
Is there a model better suited for this task that I might have missed? | 2024-01-13T17:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/195t6ii/best_7b13b_llm_for_flashcard_creation/ | RenoHadreas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195t6ii | false | null | t3_195t6ii | /r/LocalLLaMA/comments/195t6ii/best_7b13b_llm_for_flashcard_creation/ | false | false | self | 3 | null |
Integrating HF TGI with TEI | 3 | I've searched around and cannot find any existing repos, so I'm interested whether anyone has managed to integrate HF TGI with TEI.

I was using PrivateGPT for querying against various text documents; however, it has many limitations for production, mainly being single-tenant.

It seems HF TGI is a good fit, especially with continuous batching of incoming requests for increased total throughput. The next step is to integrate RAG, and their TEI seems perfect.
Has anyone integrated them together yet? | 2024-01-13T17:26:17 | https://www.reddit.com/r/LocalLLaMA/comments/195t3r7/integrating_hf_tgi_with_tei/ | Smartico | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195t3r7 | false | null | t3_195t3r7 | /r/LocalLLaMA/comments/195t3r7/integrating_hf_tgi_with_tei/ | false | false | self | 3 | null |
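For anyone sketching this integration, the two servers can be glued together with plain HTTP: TEI's `/embed` route returns vectors and TGI's `/generate` route returns text. The following is a minimal sketch assuming local instances on placeholder ports; the request/response shapes follow the projects' documented REST APIs, but verify them against your server versions.

```python
# Bare-bones RAG glue between TEI (embeddings) and TGI (generation).
import numpy as np
import requests

TEI = "http://localhost:8080"  # text-embeddings-inference (placeholder port)
TGI = "http://localhost:3000"  # text-generation-inference (placeholder port)

DOCS = [
    "TGI serves LLMs with continuous batching of incoming requests.",
    "TEI serves embedding models behind a small HTTP API.",
]

def embed(texts: list[str]) -> np.ndarray:
    r = requests.post(f"{TEI}/embed", json={"inputs": texts})
    r.raise_for_status()
    return np.array(r.json())

def generate(prompt: str) -> str:
    r = requests.post(f"{TGI}/generate",
                      json={"inputs": prompt,
                            "parameters": {"max_new_tokens": 200}})
    r.raise_for_status()
    return r.json()["generated_text"]

question = "What does TGI do for throughput?"
doc_vecs, q_vec = embed(DOCS), embed([question])[0]
# Cosine similarity between the question and each document.
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = DOCS[int(sims.argmax())]
print(generate(f"Context: {context}\n\nQuestion: {question}\nAnswer:"))
```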
Why did gpu AIB partners stop making extra vram models ? | 33 | I remember when I got into PC building around 2012 that Sapphire had their Toxic series, which had double the VRAM of the normal cards. Or even low-end cards like the 7750 being sold in 2 GB and 4 GB variants.

Looking at the growing demand for VRAM in games and AI tasks, I wonder why no AIBs like Sapphire, Gigabyte, or MSI are producing high-VRAM variants. A 32 GB 6800 XT, for example?

Some of this seems achievable by hobbyist modders, like the 2080 Ti 22 GB or 3070 16 GB mods.

Do you think AMD and Nvidia are just not allowing it? Or is it more that the market is too small, or at least perceived as too small?

I can't imagine it being a technical problem when professional cards of those models exist that have double the VRAM.
Just curious if any of you know :D ?
And how much more would you pay for a 32 vs 16gb card ? | 2024-01-13T17:06:07 | https://www.reddit.com/r/LocalLLaMA/comments/195snhf/why_did_gpu_aib_partners_stop_making_extra_vram/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195snhf | false | null | t3_195snhf | /r/LocalLLaMA/comments/195snhf/why_did_gpu_aib_partners_stop_making_extra_vram/ | false | false | self | 33 | null |
How to implement RAG? | 1 | I am making my own frontend; is there an easy way to add RAG functionality? I'm using Ooba for the backend, and it's relatively easy to use the API, but I can't seem to find anything similar for a RAG system. Preferably no langchain or docker. | 2024-01-13T17:05:56 | https://www.reddit.com/r/LocalLLaMA/comments/195snd0/how_to_implement_rag/ | Silvestron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195snd0 | false | null | t3_195snd0 | /r/LocalLLaMA/comments/195snd0/how_to_implement_rag/ | false | false | self | 1 | null |
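A langchain-free RAG loop against Ooba's API can be quite small: embed the documents once, retrieve the closest one per question, and prepend it to the prompt. Here is a rough sketch assuming text-generation-webui's OpenAI-compatible endpoint on its documented default port (5000) and the sentence-transformers package for embeddings; the documents and model name are placeholders.

```python
# Minimal RAG without langchain: local embeddings + Ooba's OpenAI-style API.
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

docs = [
    "The warranty on the device lasts two years.",
    "Returns are accepted within 30 days of purchase.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question: str) -> str:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    return docs[int(np.argmax(doc_vecs @ q))]  # cosine sim on unit vectors

client = OpenAI(base_url="http://localhost:5000/v1", api_key="not-needed")
question = "How long is the warranty?"
resp = client.chat.completions.create(
    model="local",  # Ooba answers with whatever model is loaded
    messages=[
        {"role": "system",
         "content": f"Answer using only this context: {retrieve(question)}"},
        {"role": "user", "content": question},
    ],
)
print(resp.choices[0].message.content)
```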
Dual RTX 3090 - 2x slower inference than single | 1 | [removed] | 2024-01-13T16:29:30 | https://www.reddit.com/r/LocalLLaMA/comments/195rt68/dual_rtx_3090_2x_slower_inference_than_single/ | val_in_tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195rt68 | false | null | t3_195rt68 | /r/LocalLLaMA/comments/195rt68/dual_rtx_3090_2x_slower_inference_than_single/ | false | false | self | 1 | null |
Dual RTX 3090 - 2x slower inference than single | 1 | [removed] | 2024-01-13T16:26:21 | https://www.reddit.com/r/LocalLLaMA/comments/195rqpl/dual_rtx_3090_2x_slower_inference_than_single/ | Low-Discussion-8656 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195rqpl | false | null | t3_195rqpl | /r/LocalLLaMA/comments/195rqpl/dual_rtx_3090_2x_slower_inference_than_single/ | false | false | self | 1 | null |
Run Mixtral 8x7B on MacBook M1 Pro 16GB using llama.cpp + 2bit QuIP quantisation | 1 | [removed] | 2024-01-13T15:58:01 | https://www.reddit.com/r/LocalLLaMA/comments/195r33r/run_mixtral_8x7b_on_macbook_m1_pro_16gb_using/ | kin1vi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195r33r | false | null | t3_195r33r | /r/LocalLLaMA/comments/195r33r/run_mixtral_8x7b_on_macbook_m1_pro_16gb_using/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'qjpVDWVsALmyOjs7hjWwqSwjK5NpKtxeev7IleHzIgY', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/Pz9n1np89_UbM7odIrvwdMOmwY_hag7jYQuSdhwzYRs.jpg?width=108&crop=smart&auto=webp&s=33b9ebe4f3d7a522bfb1634b1f121e3e535609ae', 'width': 108}, {'height': 72, 'url': 'https://external-preview.redd.it/Pz9n1np89_UbM7odIrvwdMOmwY_hag7jYQuSdhwzYRs.jpg?width=216&crop=smart&auto=webp&s=0e34d760bdc66b27da37b21746166080b3f5d34c', 'width': 216}, {'height': 106, 'url': 'https://external-preview.redd.it/Pz9n1np89_UbM7odIrvwdMOmwY_hag7jYQuSdhwzYRs.jpg?width=320&crop=smart&auto=webp&s=fec12ee5be8c244eb3d61ba206059142e84b5c90', 'width': 320}, {'height': 213, 'url': 'https://external-preview.redd.it/Pz9n1np89_UbM7odIrvwdMOmwY_hag7jYQuSdhwzYRs.jpg?width=640&crop=smart&auto=webp&s=0d0ad7daeaa1875210bdf3cf599836d49f0cb991', 'width': 640}, {'height': 320, 'url': 'https://external-preview.redd.it/Pz9n1np89_UbM7odIrvwdMOmwY_hag7jYQuSdhwzYRs.jpg?width=960&crop=smart&auto=webp&s=7b7c976f2ccc53d0409f48004db3224731afd1e9', 'width': 960}, {'height': 360, 'url': 'https://external-preview.redd.it/Pz9n1np89_UbM7odIrvwdMOmwY_hag7jYQuSdhwzYRs.jpg?width=1080&crop=smart&auto=webp&s=4116430a7e76e40ebd0f33eeb150d4170c0cf16c', 'width': 1080}], 'source': {'height': 364, 'url': 'https://external-preview.redd.it/Pz9n1np89_UbM7odIrvwdMOmwY_hag7jYQuSdhwzYRs.jpg?auto=webp&s=a32e2bba04505e490e57364b19af6c65d61b31c9', 'width': 1092}, 'variants': {}}]} | |
Questions About Fine Tuning A Foundation Model | 4 | I have been very successful using various models "out of the box" (unchanged), and using RAG with some more specialized documents. Next would be supervised fine tuning, and I know I could easily follow [this example](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing#scrollTo=ib_We3NLtj2E) if I made the effort to label a dataset [like this one.](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k)
I guess my office is weird in that we will buy super high-end hardware (DGX-A100) but not dedicate humans to curating a text dataset. So I've been tasked with fine-tuning a foundation model using our **unlabeled** pile of documents. I found some older guides that fine-tune BERT, which I have actually done in the past, but I want to target and understand a "modern" foundation model.

Right now I want to fine-tune a base model like Llama-2 (not chat or instruct) using unsupervised learning on my unlabeled data. Does anyone have code for this or a guide?
I do realize that if I successfully fine tune the foundational model I will still have to do the supervised fine tuning to build a custom instruct/chat model but right now I really want to understand how to fine tune a base model. Thanks for any pointers.
​ | 2024-01-13T15:20:56 | https://www.reddit.com/r/LocalLLaMA/comments/195q9tu/questions_about_fine_tuning_a_foundation_model/ | Simusid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195q9tu | false | null | t3_195q9tu | /r/LocalLLaMA/comments/195q9tu/questions_about_fine_tuning_a_foundation_model/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
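For the unsupervised step the poster is after, the usual recipe is continued pre-training with a plain causal-LM objective: tokenize the unlabeled pile, concatenate and chunk it into fixed-length blocks, and train on next-token prediction. Below is a minimal sketch of that recipe (essentially Hugging Face's run_clm flow); the model name, file path, block size, and hyperparameters are placeholders, and a 7B model realistically also needs PEFT/LoRA, quantization, or multi-GPU tooling on top of this.

```python
# Continued pre-training (unsupervised fine-tuning) of a base causal LM.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, default_data_collator)

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("text", data_files={"train": "my_documents.txt"})
block = 2048  # tokens per training example

def tokenize(batch):
    return tok(batch["text"])

def group(examples):
    # Concatenate all token ids, then cut them into fixed-length blocks.
    ids = sum(examples["input_ids"], [])
    total = (len(ids) // block) * block
    chunks = [ids[i:i + block] for i in range(0, total, block)]
    # For causal LM training, the labels are the inputs themselves.
    return {"input_ids": chunks, "labels": [list(c) for c in chunks]}

ds = raw["train"].map(tokenize, batched=True, remove_columns=["text"])
ds = ds.map(group, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-continued",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1, bf16=True, logging_steps=10),
    train_dataset=ds,
    data_collator=default_data_collator,  # blocks are already equal length
)
trainer.train()
```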
Best current model to chat with larger documents (7-13 range)? And… | 2 | Also, will quality of results depend on:
- size of documentation
- type of documentation
- type of query
… in your experience? | 2024-01-13T14:54:43 | https://www.reddit.com/r/LocalLLaMA/comments/195pp86/best_current_model_to_chat_with_larger_documents/ | laterral | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195pp86 | false | null | t3_195pp86 | /r/LocalLLaMA/comments/195pp86/best_current_model_to_chat_with_larger_documents/ | false | false | self | 2 | null |
Local LLM server build | 1 | [removed] | 2024-01-13T13:57:39 | https://www.reddit.com/r/LocalLLaMA/comments/195okfv/local_llm_server_build/ | MasterTonberry427 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195okfv | false | null | t3_195okfv | /r/LocalLLaMA/comments/195okfv/local_llm_server_build/ | false | false | self | 1 | null |
Cloud GPU for 34B Yi Model? | 1 | [removed] | 2024-01-13T13:21:17 | https://www.reddit.com/r/LocalLLaMA/comments/195nvm5/cloud_gpu_for_34b_yi_model/ | broodysupertramp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195nvm5 | false | null | t3_195nvm5 | /r/LocalLLaMA/comments/195nvm5/cloud_gpu_for_34b_yi_model/ | false | false | self | 1 | null |
Cost-aware local chat UI | 1 | I have several models running locally (llama.cpp OAI-compatible server instances with 7B mistral-instruct and heavily quantized 8x7B instruct) and subscriptions to Mistral Medium and GPT-4.
I am currently using a local instance of hf/chat-ui to interact with all of them. I start with the smallest local model; if I don't like the answer, I open a new chat tab, select a stronger model, and paste the question there. And so on.

Is there a UI that can be used to go up the hierarchy of models, from least to most powerful, in a single chat? E.g. start with the 7B and, if in doubt, bump the chat to the 8x7B starting from the selected point in the conversation?

Did you find yourself in need of such a feature, and how did you solve it? | 2024-01-13T13:07:32 | https://www.reddit.com/r/LocalLLaMA/comments/195nmoa/costaware_local_chat_ui/ | bytedonor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195nmoa | false | null | t3_195nmoa | /r/LocalLLaMA/comments/195nmoa/costaware_local_chat_ui/ | false | false | self | 1 | null |
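One lightweight way to approximate this without a dedicated UI is a thin client that keeps a single message history and replays it against the next endpoint in a least- to most-powerful hierarchy. A rough sketch, assuming two local llama.cpp server instances exposing OpenAI-compatible APIs on placeholder ports:

```python
# Single-history "escalation" client: same messages, stronger model on demand.
from openai import OpenAI

TIERS = [
    ("mistral-7b-instruct",   OpenAI(base_url="http://localhost:8081/v1", api_key="sk-local")),
    ("mixtral-8x7b-instruct", OpenAI(base_url="http://localhost:8082/v1", api_key="sk-local")),
]

history = [{"role": "user", "content": "Explain KV-cache quantization."}]

def answer(tier: int) -> str:
    name, client = TIERS[tier]
    resp = client.chat.completions.create(model=name, messages=history)
    return resp.choices[0].message.content

reply = answer(0)
print(f"[{TIERS[0][0]}]\n{reply}")
# Not convinced? Escalate from the same point in the conversation:
history.append({"role": "assistant", "content": reply})
history.append({"role": "user", "content": "That was shallow - go deeper."})
print(f"[{TIERS[1][0]}]\n{answer(1)}")
```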
Cost comparisons between OpenAi, Mistral, Claude and Gemini | 102 | I compiled a breakdown of cost/performance in a google sheet and there were a couple of things that struck me:
1. Mistral-medium is really impressive and sits perfectly sandwiched between GPT-3.5 and GPT-4. In my (limited) experience it's a great choice for anyone that isn't able to get consistency or quality out of GPT-3.5.
2. Why would anyone choose Claude? No, but seriously, what is their competitive advantage? Safety?

3. GPT-3.5 vs Gemini Pro seems very close, but in my (limited) experience GPT-3.5 does perform better in practice. I'd be curious what other people's experiences are.
https://preview.redd.it/5l3jwxxq57cc1.png?width=1012&format=png&auto=webp&s=1517e107cc7aec59406dd7cffdc5b389391c7353
Google sheet is [here](https://docs.google.com/spreadsheets/d/1SwqpeKB19JK2Xk6NW2uRxi7T4ydBZPpB-x9OcBAIyLA/edit#gid=0), please let me know if I bungled any of the numbers.
\- All prices are normalized to USD/1M tokens. (EUR -> USD conversion value as of today)
\- I only used the LMSYS chatbot arena benchmark because all the rest have seemingly been gamified already.
\- The sheet includes an example estimate calculator, just copy the sheet to override the values try it. | 2024-01-13T12:01:20 | https://www.reddit.com/r/LocalLLaMA/comments/195mi89/cost_comparisons_between_openai_mistral_claude/ | Mother-Ad-2559 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195mi89 | false | null | t3_195mi89 | /r/LocalLLaMA/comments/195mi89/cost_comparisons_between_openai_mistral_claude/ | false | false | 102 | {'enabled': False, 'images': [{'id': '-NGRM66dwQtgS15fV8zC6UHeeNOUuIYKKnKQpNqKr7A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QaMaFP5b8HuvnRACbTBl5QYp-eYzxS5v6wG5ZN_fOYE.jpg?width=108&crop=smart&auto=webp&s=2540bf17b6158825a84b83e055ffc476409393d9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QaMaFP5b8HuvnRACbTBl5QYp-eYzxS5v6wG5ZN_fOYE.jpg?width=216&crop=smart&auto=webp&s=aea94d85497017d41eef057fe607f3fb9af6842b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QaMaFP5b8HuvnRACbTBl5QYp-eYzxS5v6wG5ZN_fOYE.jpg?width=320&crop=smart&auto=webp&s=33478bc2ceab17b94ec975d25b531003f35e24fd', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QaMaFP5b8HuvnRACbTBl5QYp-eYzxS5v6wG5ZN_fOYE.jpg?width=640&crop=smart&auto=webp&s=11110de7413e697c1ea5fe5a313312bd78544336', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QaMaFP5b8HuvnRACbTBl5QYp-eYzxS5v6wG5ZN_fOYE.jpg?width=960&crop=smart&auto=webp&s=544104b27c44337b567a3918add05990e46b601d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QaMaFP5b8HuvnRACbTBl5QYp-eYzxS5v6wG5ZN_fOYE.jpg?width=1080&crop=smart&auto=webp&s=2e689428ce7857470509a93b80528570ca371d6d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/QaMaFP5b8HuvnRACbTBl5QYp-eYzxS5v6wG5ZN_fOYE.jpg?auto=webp&s=33987f0173480fd15f5fe905d82b74b7bbf5a26c', 'width': 1200}, 'variants': {}}]} | |
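For reference, the normalization behind the sheet reduces to one formula: with prices quoted in USD per 1M tokens, a request costs (input tokens × input price + output tokens × output price) / 1e6. A tiny calculator to that effect, with illustrative placeholder prices rather than quotes from any provider:

```python
# The sheet's normalization as a tiny calculator.
PRICES = {  # model: (input USD / 1M tokens, output USD / 1M tokens) - placeholders
    "gpt-3.5-turbo": (1.0, 2.0),
    "mistral-medium": (2.7, 8.1),
}

def request_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    p_in, p_out = PRICES[model]
    return (in_tokens * p_in + out_tokens * p_out) / 1_000_000

# Example: a 2,000-token prompt with a 500-token answer.
print(f"${request_cost('mistral-medium', 2_000, 500):.4f} per request")
```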
Koboldcpp context shifting question | 1 | [removed] | 2024-01-13T11:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/195mgcm/koboldcpp_context_shifting_question/ | Full_Operation_9865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195mgcm | false | null | t3_195mgcm | /r/LocalLLaMA/comments/195mgcm/koboldcpp_context_shifting_question/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'N-VN_J4R3qG1av_C_aiv0TXT9u2m8s-B-rpUK1vZlVY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QPCKjQIwODyKADH645J5lpIWmNTfrXRazfGsgzgP_ac.jpg?width=108&crop=smart&auto=webp&s=d076b4e3af8e641ef6ed889a1fb91b379fe8647b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QPCKjQIwODyKADH645J5lpIWmNTfrXRazfGsgzgP_ac.jpg?width=216&crop=smart&auto=webp&s=34e8bec7d461ca1b7de5455381574f48e08f2a9a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QPCKjQIwODyKADH645J5lpIWmNTfrXRazfGsgzgP_ac.jpg?width=320&crop=smart&auto=webp&s=b0e81d1cc0cd8940c9b7e6b450ddb62f4873f82e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QPCKjQIwODyKADH645J5lpIWmNTfrXRazfGsgzgP_ac.jpg?width=640&crop=smart&auto=webp&s=7b4eb3f8a80749660f38c5a556136972eb7365d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QPCKjQIwODyKADH645J5lpIWmNTfrXRazfGsgzgP_ac.jpg?width=960&crop=smart&auto=webp&s=8202a06d16a72e421709dfc9d615189fb52bbcca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QPCKjQIwODyKADH645J5lpIWmNTfrXRazfGsgzgP_ac.jpg?width=1080&crop=smart&auto=webp&s=47d78fc1ee5de0476dd5b46b0833d47891a06612', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QPCKjQIwODyKADH645J5lpIWmNTfrXRazfGsgzgP_ac.jpg?auto=webp&s=970de254a80b05262401de198fbd98dcf9a76dbe', 'width': 1200}, 'variants': {}}]} |
Nucleus 1B - an SLM based on Mistral! | 23 | Hello guys.
You probably know me because of my work on Maral and the Persian language. Now, I am honored to announce the new kid on the block, Nucleus (and yes, the name comes from the Silicon Valley show 😁).

Nucleus is a 1.13-billion-parameter model trained on 420k textbooks (the Tiny Textbooks dataset) and based on the Mistral architecture. This model has the goal of being a *small but specific* language model, and more fine-tunes and more specialized versions will be available very soon.
If you're interested, here are some links:
My personal blog post: [https://haghiri75.com/en/nucleus-is-the-proof-that-small-is-the-new-big/](https://haghiri75.com/en/nucleus-is-the-proof-that-small-is-the-new-big/)
Model Card: [https://huggingface.co/NucleusOrg/Nucleus-1B-alpha-1](https://huggingface.co/NucleusOrg/Nucleus-1B-alpha-1)
GitHub (Colab Notebooks) : [https://github.com/prp-e/nucleus](https://github.com/prp-e/nucleus) | 2024-01-13T11:54:14 | https://www.reddit.com/r/LocalLLaMA/comments/195mdyl/nucleus_1b_an_slm_based_on_mistral/ | Haghiri75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195mdyl | false | null | t3_195mdyl | /r/LocalLLaMA/comments/195mdyl/nucleus_1b_an_slm_based_on_mistral/ | false | false | self | 23 | null |
Components for dual 3090 system | 1 | [removed] | 2024-01-13T11:34:48 | https://www.reddit.com/r/LocalLLaMA/comments/195m3l9/components_for_dual_3090_system/ | Traditional_Fill_459 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195m3l9 | false | null | t3_195m3l9 | /r/LocalLLaMA/comments/195m3l9/components_for_dual_3090_system/ | false | false | self | 1 | null |
1 x RTX 4090 vs 3 x RTX 4060TI | 18 | Just brainstorming here while window-shopping for a new GPU.

Prices here in Europe are pretty crazy for the 4090 (barely below 2,000 EUR). The 4060 Ti (16 GB) only costs about a third of that (\~650 EUR). Used 3090s still go for 1,000+ EUR here as well.

How would those compare in actual performance for AI models? I assume usage would be limited to models that can be split across cards (GGUF, I assume). Does the added VRAM outweigh the slower GPUs and the overhead of splitting? | 2024-01-13T11:31:13 | https://www.reddit.com/r/LocalLLaMA/comments/195m1na/1_x_rtx_4090_vs_3_x_rtx_4060ti/ | maxigs0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195m1na | false | null | t3_195m1na | /r/LocalLLaMA/comments/195m1na/1_x_rtx_4090_vs_3_x_rtx_4060ti/ | false | false | self | 18 | null |
Free multipurpose AI app! | 1 | [removed] | 2024-01-13T11:27:23 | https://www.reddit.com/r/LocalLLaMA/comments/195lzkv/free_multipurpose_ai_app/ | missionimpossible205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195lzkv | false | null | t3_195lzkv | /r/LocalLLaMA/comments/195lzkv/free_multipurpose_ai_app/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6GCl6lNefD6PqSjKuZeaU493H-w3w4odM2Kk5Ah8eoQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]} |
running stable diffusion on other pc | 1 | [removed] | 2024-01-13T10:32:46 | https://www.reddit.com/r/LocalLLaMA/comments/195l6ua/running_stable_diffusion_on_other_pc/ | Single-Cow-5163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195l6ua | false | null | t3_195l6ua | /r/LocalLLaMA/comments/195l6ua/running_stable_diffusion_on_other_pc/ | false | false | self | 1 | null |
I created an uncensored, free character.ai alternative. The inference server runs locally in my garage, please don't blow it up. | 45 | 2024-01-13T10:09:42 | https://charhub.ai/ | IlllllllIIIIlIIl | charhub.ai | 1970-01-01T00:00:00 | 0 | {} | 195kuv8 | false | null | t3_195kuv8 | /r/LocalLLaMA/comments/195kuv8/i_created_an_uncensored_free_characterai/ | false | false | default | 45 | null |
DynaChat and BlockWrite extensions for WebUI | 17 | I'm basically looking for a better chat interface, so these are two very related (parallel) ideas that I put up as extensions for ooba WebUi
​
https://preview.redd.it/8sp3ibp1g6cc1.png?width=1478&format=png&auto=webp&s=32734dd24a67d9891f4323a2b45f735b497ecc38
Both use a similar idea - that whatever you generate you can go back to at any time, change, re-arrange, or re-generate... and the changed history will then dynamically affect the next generation.

BlockWrite is oriented toward writing a story (or whatever) scene by scene, while DynaChat is more oriented toward a chat with dynamic history. The main difference is that DynaChat automatically moves forward in history after each generation (like a chat), while BlockWrite stays on the same scene until you create a new scene.

DynaChat can also generate JSONL from the entire exchange, which can be used as a base for training, for example.
I assume that using it will reveal what it does, especially when you peek at the Terminal.
[https://github.com/FartyPants/BlockWriter](https://github.com/FartyPants/BlockWriter/tree/main)
[https://github.com/FartyPants/DynaChat](https://github.com/FartyPants/DynaChat)
These are just my personal ideas, in a sort of rough draft, while I'm looking for a better (more dynamic and structured) chat interface. Of course any ideas are welcome.
Also if you think my extensions or models are worth the effort, please support me on [ko-fi](https://ko-fi.com/fpham). To this day, only 4 people thought my stuff is worth something (including TheBloke), which kind of sucks as it feels I'm doing this for nothing.
​ | 2024-01-13T09:52:25 | https://www.reddit.com/r/LocalLLaMA/comments/195klwb/dynachat_and_blockwrite_extensions_for_webui/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195klwb | false | null | t3_195klwb | /r/LocalLLaMA/comments/195klwb/dynachat_and_blockwrite_extensions_for_webui/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'cdRB3XmCR0crgmqoGy-ecEG4xtpqNpq1QmfX-kuMeAA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z-4cdlN102OYwWQttHG57WcHUKGYbC_JrgkU5dGtQ0A.jpg?width=108&crop=smart&auto=webp&s=ef33bd43f8e314312cbc87922be4b98b73c205ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z-4cdlN102OYwWQttHG57WcHUKGYbC_JrgkU5dGtQ0A.jpg?width=216&crop=smart&auto=webp&s=14c4b2ccab407cf9db593371c43c7ff56a97042d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z-4cdlN102OYwWQttHG57WcHUKGYbC_JrgkU5dGtQ0A.jpg?width=320&crop=smart&auto=webp&s=4ee59874870606ab0d9ae483a39c550768034c27', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z-4cdlN102OYwWQttHG57WcHUKGYbC_JrgkU5dGtQ0A.jpg?width=640&crop=smart&auto=webp&s=9c2eb7dfc36780a78123e0b0f91d285d21ab75bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z-4cdlN102OYwWQttHG57WcHUKGYbC_JrgkU5dGtQ0A.jpg?width=960&crop=smart&auto=webp&s=48ee6da5bcc5b270e798bd245d47982f85b5de23', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z-4cdlN102OYwWQttHG57WcHUKGYbC_JrgkU5dGtQ0A.jpg?width=1080&crop=smart&auto=webp&s=667573efb1319109772dd28a0d5973a5ed2ea817', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z-4cdlN102OYwWQttHG57WcHUKGYbC_JrgkU5dGtQ0A.jpg?auto=webp&s=6984c050b5896067298fdeb77917123862276b02', 'width': 1200}, 'variants': {}}]} | |
Finetuning LLaMA2 with FSDP vs Deepspeed | 1 | [removed] | 2024-01-13T09:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/195kgt9/finetuning_llama2_with_fsdp_vs_deepspeed/ | Party-Confusion-1461 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195kgt9 | false | null | t3_195kgt9 | /r/LocalLLaMA/comments/195kgt9/finetuning_llama2_with_fsdp_vs_deepspeed/ | false | false | self | 1 | null |
Bell Notification Sound when LLM completes processing in LMStudio (Feature Request) | 1 | Essential Feature Request: A Bell Notification sound🔔 must chime when the LLM completes processing.
It takes a lot of time for large models to finish processing. After the generation, it should give a "Ding Dong" sound so I could give it the next prompt. 🙏🏻
This is a general feature request to all Local LLM Running software developers. | 2024-01-13T08:58:27 | https://www.reddit.com/r/LocalLLaMA/comments/195juui/bell_notification_sound_when_llm_completes/ | broodysupertramp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195juui | false | null | t3_195juui | /r/LocalLLaMA/comments/195juui/bell_notification_sound_when_llm_completes/ | false | false | self | 1 | null |
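Until such a feature ships, the behaviour is easy to approximate with a wrapper: stream the completion from the local OpenAI-compatible server (LM Studio serves one, by default on port 1234) and emit the ASCII BEL character when the stream ends. A small sketch, where the model name is a placeholder for whatever model is loaded:

```python
# Ring the terminal bell when a streamed local completion finishes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

stream = client.chat.completions.create(
    model="local-model",  # placeholder; the server uses the loaded model
    messages=[{"role": "user", "content": "Summarize the history of Unix."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print("\a")  # ASCII BEL: most terminals play the alert sound on this character
```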
Free multipurpose AI app for text and image generation. | 1 | [removed] | 2024-01-13T08:53:22 | https://www.reddit.com/r/LocalLLaMA/comments/195jsd3/free_multipurpose_ai_app_for_text_and_image/ | missionimpossible205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195jsd3 | false | null | t3_195jsd3 | /r/LocalLLaMA/comments/195jsd3/free_multipurpose_ai_app_for_text_and_image/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6GCl6lNefD6PqSjKuZeaU493H-w3w4odM2Kk5Ah8eoQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]} |
AI Love, Sexbots and Consent. Surely it will come up in the future. | 1 | [removed] | 2024-01-13T08:42:42 | https://www.reddit.com/r/LocalLLaMA/comments/195jmzs/ai_love_sexbots_and_consent_surely_it_will_come/ | cybersensations | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195jmzs | false | null | t3_195jmzs | /r/LocalLLaMA/comments/195jmzs/ai_love_sexbots_and_consent_surely_it_will_come/ | false | false | self | 1 | null |
Dual RTX 3090 - 2x slower inference than single | 1 | [removed] | 2024-01-13T08:35:34 | https://www.reddit.com/r/LocalLLaMA/comments/195jje3/dual_rtx_3090_2x_slower_inference_than_single/ | Low-Discussion-8656 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195jje3 | false | null | t3_195jje3 | /r/LocalLLaMA/comments/195jje3/dual_rtx_3090_2x_slower_inference_than_single/ | false | false | self | 1 | null |
Introducing temi: A CLI Assistant for Developers, built on llama.cpp | 10 | Greetings, everyone!
I'm delighted to share **temi**, an open-source CLI tool for developers. Leveraging the robust [llama.cpp](https://github.com/ggerganov/llama.cpp) and the powerful Mistral-v0.2 AWQ GGUF model, `temi` aims to boost your productivity by answering queries and handling text/PDF data, all from your terminal.
Your insights and contributions are crucial:
* **Repo**: [temi on GitHub](https://github.com/namtranase/terminalmind)
* **Model**: [Mistral-Instruct-v0.2-awq](https://huggingface.co/namtran/Mistral-7b-v0.2-AWQ-GGUF)
I'm eager to hear your thoughts and feedback! | 2024-01-13T07:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/195i5kj/introducing_temi_a_cli_assistant_for_developers/ | hehe_hehehe_hehehe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195i5kj | false | null | t3_195i5kj | /r/LocalLLaMA/comments/195i5kj/introducing_temi_a_cli_assistant_for_developers/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
We need more 4x7b moe models | 38 | I've been playing with Beyonder-4x7B-v2 and my god is it good, which makes me wonder what would happen if we made a 4x7B from some of the best-known models, e.g. Dolphin, Mistral, etc.

What 4 models would you put together as an MoE model? | 2024-01-13T06:58:45 | https://www.reddit.com/r/LocalLLaMA/comments/195i33k/we_need_more_4x7b_moe_models/ | ThinkExtension2328 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195i33k | false | null | t3_195i33k | /r/LocalLLaMA/comments/195i33k/we_need_more_4x7b_moe_models/ | false | false | self | 38 | null |
Roadmap for model releases | 1 | [removed] | 2024-01-13T06:00:13 | https://www.reddit.com/r/LocalLLaMA/comments/195h49t/roadmap_for_model_releases/ | danielcar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195h49t | false | null | t3_195h49t | /r/LocalLLaMA/comments/195h49t/roadmap_for_model_releases/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7JdUxo_TR6YKwjuo8mMxhiaK5sk_gC3GjI6fHnKzlOM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Kakf9a9H_nZRTJ97Zsgr8iXPzDf1A7Lj13l9X3Leqbg.jpg?width=108&crop=smart&auto=webp&s=84831504a89a9734ccfd6ecce13869e6266b9607', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/Kakf9a9H_nZRTJ97Zsgr8iXPzDf1A7Lj13l9X3Leqbg.jpg?auto=webp&s=872cd0c24dfdb03e1997b7d6b2efc6435d2c4231', 'width': 140}, 'variants': {}}]} |
UnIVAL: Unified Model for Image, Video, Audio and Language Tasks | 41 | **arXiv**: [https://arxiv.org/abs/2307.16184](https://arxiv.org/abs/2307.16184)
**OpenReview**: [https://openreview.net/forum?id=4uflhObpcp](https://openreview.net/forum?id=4uflhObpcp)
**Code**: [https://github.com/mshukor/UnIVAL](https://github.com/mshukor/UnIVAL)
**Checkpoints**: [https://github.com/mshukor/UnIVAL/blob/main/checkpoints.md](https://github.com/mshukor/UnIVAL/blob/main/checkpoints.md)
**Project page**: [https://unival-model.github.io/](https://unival-model.github.io/)
**Demo**: [https://huggingface.co/spaces/mshukor/UnIVAL](https://huggingface.co/spaces/mshukor/UnIVAL)
**Abstract**:
>Large Language Models (LLMs) have made the ambitious quest for generalist agents significantly far from being a fantasy. A key hurdle for building such general models is the diversity and heterogeneity of tasks and modalities. A promising solution is unification, allowing the support of a myriad of tasks and modalities within one unified framework. While few large models (e.g., Flamingo (Alayrac et al., 2022), trained on massive datasets, can support more than two modalities, current small to mid-scale unified models are still limited to 2 modalities, usually image-text or video-text. The question that we ask is: is it possible to build efficiently a unified model that can support all modalities? To answer this, we propose **UnIVAL**, a step further towards this ambitious goal. Without relying on fancy datasets sizes or models with billions of parameters, the \~ 0.25B parameter UnIVAL model goes beyond two modalities and unifies text, images, video, and audio into a single model. Our model is efficiently pretrained on many tasks, based on task balancing and multimodal curriculum learning. UnIVAL shows competitive performance to existing state-of-the-art approaches, across image and video-text tasks. The feature representations learned from image and video-text modalities, allows the model to achieve competitive performance when finetuned on audio-text tasks, despite not being pretrained on audio. Thanks to the unified model, we propose a novel study on multimodal model merging via weight interpolation of models trained on different multimodal tasks, showing their benefits in particular for out-of-distribution generalization. Finally, we motivate unification by showing the synergy between tasks. The model weights and code are released here: [this https URL](https://github.com/mshukor/UnIVAL). | 2024-01-13T04:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/195fgal/unival_unified_model_for_image_video_audio_and/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195fgal | false | null | t3_195fgal | /r/LocalLLaMA/comments/195fgal/unival_unified_model_for_image_video_audio_and/ | false | false | self | 41 | null |
How to install safetensors models? | 1 | [removed] | 2024-01-13T04:25:19 | https://www.reddit.com/r/LocalLLaMA/comments/195ffop/how_to_install_safetensors_models/ | yungfishstick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195ffop | false | null | t3_195ffop | /r/LocalLLaMA/comments/195ffop/how_to_install_safetensors_models/ | false | false | self | 1 | null |
UnIVAL: Unified Model for Image, Video, Audio and Language Tasks | 1 | **arXiv**: [https://arxiv.org/abs/2307.16184](https://arxiv.org/abs/2307.16184)
**OpenReview**: [https://openreview.net/forum?id=4uflhObpcp](https://openreview.net/forum?id=4uflhObpcp)
**Code**: [https://github.com/mshukor/UnIVAL](https://github.com/mshukor/UnIVAL)
**Checkpoints**: [https://github.com/mshukor/UnIVAL/blob/main/checkpoints.md](https://github.com/mshukor/UnIVAL/blob/main/checkpoints.md)
**Project page**: [https://unival-model.github.io/](https://unival-model.github.io/)
**Demo**: [https://huggingface.co/spaces/mshukor/UnIVAL](https://huggingface.co/spaces/mshukor/UnIVAL)
**Abstract**:
>Large Language Models (LLMs) have made the ambitious quest for generalist agents significantly far from being a fantasy. A key hurdle for building such general models is the diversity and heterogeneity of tasks and modalities. A promising solution is unification, allowing the support of a myriad of tasks and modalities within one unified framework. While few large models (e.g., Flamingo (Alayrac et al., 2022), trained on massive datasets, can support more than two modalities, current small to mid-scale unified models are still limited to 2 modalities, usually image-text or video-text. The question that we ask is: is it possible to build efficiently a unified model that can support all modalities? To answer this, we propose **UnIVAL**, a step further towards this ambitious goal. Without relying on fancy datasets sizes or models with billions of parameters, the \~ 0.25B parameter UnIVAL model goes beyond two modalities and unifies text, images, video, and audio into a single model. Our model is efficiently pretrained on many tasks, based on task balancing and multimodal curriculum learning. UnIVAL shows competitive performance to existing state-of-the-art approaches, across image and video-text tasks. The feature representations learned from image and video-text modalities, allows the model to achieve competitive performance when finetuned on audio-text tasks, despite not being pretrained on audio. Thanks to the unified model, we propose a novel study on multimodal model merging via weight interpolation of models trained on different multimodal tasks, showing their benefits in particular for out-of-distribution generalization. Finally, we motivate unification by showing the synergy between tasks. The model weights and code are released here: [this https URL](https://github.com/mshukor/UnIVAL). | 2024-01-13T04:22:11 | https://www.reddit.com/r/LocalLLaMA/comments/195fdmg/unival_unified_model_for_image_video_audio_and/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195fdmg | false | null | t3_195fdmg | /r/LocalLLaMA/comments/195fdmg/unival_unified_model_for_image_video_audio_and/ | false | false | self | 1 | null |
Idea / Proposal for someone smarter than I am. Devs/Coders, please hear me out. | 49 | Disclaimer: you're probably more knowledgeable than me, have more experience, and are far better at coding than me. Keep that in mind.
Here's the TLDR: create a new program (perhaps using Transformers/a tiny LLM, perhaps not needing LLMs at all) that scans, details, and analyzes the user's exact hardware setup, and ultimately determines the optimal LLM/quant/settings/config for that specific hardware configuration (and perhaps use case, like general chat/role play/instruct/coding...).
Why?
Because of the multitude of posts we see of new people trying to get into open source LLMs, but having no understanding of where they should start, what is possible on their machine, and how to configure the model for their needs.
I was one of these people, and only after long periods of reading guides/tutorials/wikis/model cards/etc. was I finally able to get a working model on my machine with decent speed and quality. As a person who is tech-minded but not a coder, and not familiar with anything beyond straightforward "download the .exe and go," I had to start at square one and figure out GitHub, Linux, and all the backend llama.cpp stuff. It took forever; I'm just a regular tech consumer, I don't know how to build shit. Mind you, it was a valuable experience, but I guarantee many people would have given up instead of pressing on and figuring it all out.
If we want greater adoption in the open-source space, an easier onboarding process would be a godsend for people like me. I would have paid for it! Easily! I put out offers for people to create a Docker container so I could just click "run". The offers to build it were way out of my budget, so I was forced to grind it out and stumble my way through the steep learning curve.
I'm not a unique case. I'm someone who has used ChatGPT, played with Spaces on Hugging Face, and really wanted a local LLM to use with sensitive local data that I couldn't trust OpenAI or anyone with. I understood the value, but I didn't have the ability to spin up a model without massive amounts of homework.
So anyway, that's the idea. A standalone program, or utility, that analyzes a user's hardware capability, suggests an appropriate model/config, and gives a numbered list of steps to get that specific model working.
I would have paid $100 for that shortcut. Easily. I almost paid 20x that for a container of an old model. Just to get it working.
To reiterate, the problem is:
New people asking "what do I need to run x?" or "can my x machine run this model?" or "what's the best coding model I can use with x machine?"
Solution:
A utility whose only purpose is to examine the user's hardware and recommend optimal models, quants, and a settings profile (for webui / LM Studio...).
Just an idea.
Please DM me if you're interested in a collaboration. I'm a corporate finance person with a marketing degree/background, not a developer. I can't build it, but I could hype it and potentially sell it. I know the target market, and it's large and narrow enough to be worthwhile.
Thanks for coming to my lunch and learn presentation. And I'm totally aware that I pitched a closed-source business idea to an open-source sub, I know, but the people who could/want to do this are here. I'm not going to lose sleep over offending a commie or two. | 2024-01-13T04:18:38 | https://www.reddit.com/r/LocalLLaMA/comments/195fb5l/idea_proposal_for_someone_smarter_than_i_am/ | DriestBum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195fb5l | false | null | t3_195fb5l | /r/LocalLLaMA/comments/195fb5l/idea_proposal_for_someone_smarter_than_i_am/ | false | false | self | 49 | null |
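For readers wondering what the proposed utility might look like, here is a minimal sketch, assuming an NVIDIA GPU and the `psutil` package; the recommendation thresholds are illustrative rules of thumb, not authoritative sizing:

```python
import subprocess
import psutil  # pip install psutil

def total_vram_gb() -> float:
    """Sum VRAM across NVIDIA GPUs via nvidia-smi; 0 if none found."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"], text=True)
        return sum(float(line) for line in out.splitlines()) / 1024
    except (FileNotFoundError, subprocess.CalledProcessError):
        return 0.0

def recommend(vram_gb: float, ram_gb: float) -> str:
    # Rough rules of thumb, not definitive model sizing
    if vram_gb >= 48:
        return "70B at 4-bit, fully on GPU"
    if vram_gb >= 24:
        return "34B at 4-bit on GPU, or 70B split between GPU and CPU"
    if vram_gb >= 12:
        return "13B at 4-bit on GPU, or Mixtral GGUF with partial offload"
    if ram_gb >= 32:
        return "7B-13B GGUF on CPU (slow but workable)"
    return "7B at 4-bit, CPU-only"

ram = psutil.virtual_memory().total / 1024**3
print(recommend(total_vram_gb(), ram))
```

A real product would also have to detect Apple Silicon and AMD hardware and emit per-backend launch instructions, but the core loop — probe hardware, map to a model/quant table, print steps — is roughly this simple.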
Roadmap for model releases | 1 | [removed] | 2024-01-13T04:06:06 | https://www.reddit.com/r/LocalLLaMA/comments/195f2hk/roadmap_for_model_releases/ | danielcar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195f2hk | false | null | t3_195f2hk | /r/LocalLLaMA/comments/195f2hk/roadmap_for_model_releases/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7JdUxo_TR6YKwjuo8mMxhiaK5sk_gC3GjI6fHnKzlOM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Kakf9a9H_nZRTJ97Zsgr8iXPzDf1A7Lj13l9X3Leqbg.jpg?width=108&crop=smart&auto=webp&s=84831504a89a9734ccfd6ecce13869e6266b9607', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/Kakf9a9H_nZRTJ97Zsgr8iXPzDf1A7Lj13l9X3Leqbg.jpg?auto=webp&s=872cd0c24dfdb03e1997b7d6b2efc6435d2c4231', 'width': 140}, 'variants': {}}]} |
UnIVAL: Unified Model for Image, Video, Audio and Language Tasks | 1 | [removed] | 2024-01-13T03:55:24 | https://www.reddit.com/r/LocalLLaMA/comments/195euy1/unival_unified_model_for_image_video_audio_and/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195euy1 | false | null | t3_195euy1 | /r/LocalLLaMA/comments/195euy1/unival_unified_model_for_image_video_audio_and/ | false | false | self | 1 | null |
To buy Asus NVIDIA 4 gpu server? | 1 | [removed] | 2024-01-13T03:38:56 | https://www.reddit.com/r/LocalLLaMA/comments/195ejhp/to_buy_asus_nvidia_4_gpu_server/ | rich_atl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195ejhp | false | null | t3_195ejhp | /r/LocalLLaMA/comments/195ejhp/to_buy_asus_nvidia_4_gpu_server/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6Uu3iPS9Tdp3KF4CgLIygCjGiDYJiU2zjxrqoMo_olw', 'resolutions': [{'height': 97, 'url': 'https://external-preview.redd.it/oX_Z0g9-GK6HQuFlHa2spopj9e-_Rg-3Rsem7F4wKoo.jpg?width=108&crop=smart&auto=webp&s=66f5f14497924ac5011a5b432826a66d69efa93d', 'width': 108}, {'height': 194, 'url': 'https://external-preview.redd.it/oX_Z0g9-GK6HQuFlHa2spopj9e-_Rg-3Rsem7F4wKoo.jpg?width=216&crop=smart&auto=webp&s=f898b183e89a791fd507d6885ea91e123ce520c9', 'width': 216}, {'height': 288, 'url': 'https://external-preview.redd.it/oX_Z0g9-GK6HQuFlHa2spopj9e-_Rg-3Rsem7F4wKoo.jpg?width=320&crop=smart&auto=webp&s=454fa4a40119705c075163aef859cf30441361af', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/oX_Z0g9-GK6HQuFlHa2spopj9e-_Rg-3Rsem7F4wKoo.jpg?auto=webp&s=ad1a3cbf596992ee25db28d3d3fc24da4c7b2ea9', 'width': 400}, 'variants': {}}]} |
So.... mixtral is on two different leaderboards better than 3.5? Am I reading this right? | 35 | I just got Mixtral running on my machine (AMD, 32 GB RAM, with a 3060 12 GB) pretty well with the largest model, and I'm pretty impressed. I went to look for something better, and all I'm hearing is crickets.
I had no idea how powerful it was; I just liked it as a sysadmin challenge. | 2024-01-13T02:27:12 | https://www.reddit.com/r/LocalLLaMA/comments/195d3vl/so_mixtral_is_on_two_different_leaderboards/ | cleuseau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195d3vl | false | null | t3_195d3vl | /r/LocalLLaMA/comments/195d3vl/so_mixtral_is_on_two_different_leaderboards/ | false | false | self | 35 | null |
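A setup like the poster's (12 GB VRAM plus 32 GB system RAM) typically runs a quantized Mixtral GGUF with partial GPU offload. A minimal sketch using `llama-cpp-python`, assuming a locally downloaded quant file; the path and layer count are illustrative:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical local path to a quantized Mixtral GGUF
llm = Llama(
    model_path="mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",
    n_gpu_layers=12,  # offload what fits in 12 GB VRAM; the rest stays in RAM
    n_ctx=4096,
)

out = llm("Q: Why is Mixtral competitive with GPT-3.5? A:", max_tokens=128)
print(out["choices"][0]["text"])
```

Raising `n_gpu_layers` until VRAM is nearly full is the usual way to find the speed sweet spot on a split CPU/GPU setup.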
Transcription scrubber with a Large Language Model - using SillyTavernAI (Workflow included) | 1 | 2024-01-13T02:12:45 | https://v.redd.it/b7po3bds84cc1 | Complex-Indication | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 195csyr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b7po3bds84cc1/DASHPlaylist.mpd?a=1707703978%2CMWU2MDdmN2ZlMzg4ZjE5MzM0MGQwYTEyZjNiYzY5NDQ3ZmQxNzYzNWI1ODJiMDhmOTY2ZTYyOWMxZWVmZTNhYQ%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/b7po3bds84cc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/b7po3bds84cc1/HLSPlaylist.m3u8?a=1707703978%2COTQwZTgxZTdiNmEyNWJmMzBkNDI1NWE3ZDNlMjQ4NWFhNzQwOTQxNjhkMDYxN2M4YTI3NDQ0OTgxMDZmZmNjNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b7po3bds84cc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_195csyr | /r/LocalLLaMA/comments/195csyr/transcription_scrubber_with_a_large_language/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cnNsNDh2emE5NGNjMXoGktdQVa4KfKFPi6o-Y0pomAdcVN_yQxCvPYo3Jwts', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cnNsNDh2emE5NGNjMXoGktdQVa4KfKFPi6o-Y0pomAdcVN_yQxCvPYo3Jwts.png?width=108&crop=smart&format=pjpg&auto=webp&s=5864da63f08a31a9d6fa68141d624f98e950f8bd', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/cnNsNDh2emE5NGNjMXoGktdQVa4KfKFPi6o-Y0pomAdcVN_yQxCvPYo3Jwts.png?width=216&crop=smart&format=pjpg&auto=webp&s=15a2af842e35014efe38c5e16e89bbd22416c003', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/cnNsNDh2emE5NGNjMXoGktdQVa4KfKFPi6o-Y0pomAdcVN_yQxCvPYo3Jwts.png?width=320&crop=smart&format=pjpg&auto=webp&s=873234ce547b34cc4dd2f5abdaa9b7f1016ad6e0', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/cnNsNDh2emE5NGNjMXoGktdQVa4KfKFPi6o-Y0pomAdcVN_yQxCvPYo3Jwts.png?width=640&crop=smart&format=pjpg&auto=webp&s=49d3d7f9598343e025f4beba7b46e995419b7d9e', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/cnNsNDh2emE5NGNjMXoGktdQVa4KfKFPi6o-Y0pomAdcVN_yQxCvPYo3Jwts.png?width=960&crop=smart&format=pjpg&auto=webp&s=d87cb7d1413026108bb023ecd09f95512175041c', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/cnNsNDh2emE5NGNjMXoGktdQVa4KfKFPi6o-Y0pomAdcVN_yQxCvPYo3Jwts.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3bc2520100e44d6e0c9382d8f440cb3f734560e9', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/cnNsNDh2emE5NGNjMXoGktdQVa4KfKFPi6o-Y0pomAdcVN_yQxCvPYo3Jwts.png?format=pjpg&auto=webp&s=05ba827adcf130f4e7a297bd2f7a4d0d68732bc9', 'width': 1080}, 'variants': {}}]} | ||
Other than coding, and manipulating datasets, what are you guys doing with LLMs? | 1 | [removed] | 2024-01-13T02:05:36 | https://www.reddit.com/r/LocalLLaMA/comments/195cnpg/other_than_coding_and_manipulating_datasets_what/ | hmmqzaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195cnpg | false | null | t3_195cnpg | /r/LocalLLaMA/comments/195cnpg/other_than_coding_and_manipulating_datasets_what/ | false | false | self | 1 | null |
Mac Studio Build | 1 | [removed] | 2024-01-13T01:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/195cdqc/mac_studio_build/ | dave9199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195cdqc | false | null | t3_195cdqc | /r/LocalLLaMA/comments/195cdqc/mac_studio_build/ | false | false | self | 1 | null |
Since it gets asked every single day, currently the best ERP model is sensualize-mixtral. | 45 | Anyone who disagrees is welcome to politely fight me.
https://huggingface.co/TheBloke/Sensualize-Mixtral-GGUF
Edit: sorry guys, the title was bait. I actually just wanted more recommendations | 2024-01-13T01:03:59 | https://www.reddit.com/r/LocalLLaMA/comments/195bcz2/since_it_gets_asked_every_single_day_currently/ | ansmo | self.LocalLLaMA | 2024-01-13T08:35:52 | 0 | {} | 195bcz2 | false | null | t3_195bcz2 | /r/LocalLLaMA/comments/195bcz2/since_it_gets_asked_every_single_day_currently/ | false | false | self | 45 | {'enabled': False, 'images': [{'id': '858D_G6mFmj89J5PFPSMIl_npAaFg0xCk3b6phK4ylY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_N6oNw3_JByJbyfi8tKlkmkLcR9EKAI9-UxVGOswcz8.jpg?width=108&crop=smart&auto=webp&s=60278d6e2ea044557d210c5fa7739b8154153243', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_N6oNw3_JByJbyfi8tKlkmkLcR9EKAI9-UxVGOswcz8.jpg?width=216&crop=smart&auto=webp&s=d9aca92427ed9f0a367bd030e347565ea9768108', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_N6oNw3_JByJbyfi8tKlkmkLcR9EKAI9-UxVGOswcz8.jpg?width=320&crop=smart&auto=webp&s=d9c1b909ce104a6f49732eb5ac8ce5adb43a312e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_N6oNw3_JByJbyfi8tKlkmkLcR9EKAI9-UxVGOswcz8.jpg?width=640&crop=smart&auto=webp&s=9ddd2faae73fe86736a23a90f34933fa73f9f933', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_N6oNw3_JByJbyfi8tKlkmkLcR9EKAI9-UxVGOswcz8.jpg?width=960&crop=smart&auto=webp&s=5a87f2dedcbb8e037e6667fbb8aec078cd06847a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_N6oNw3_JByJbyfi8tKlkmkLcR9EKAI9-UxVGOswcz8.jpg?width=1080&crop=smart&auto=webp&s=713f8a1a80e0216ca58abf1e99bb9eca471ba385', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_N6oNw3_JByJbyfi8tKlkmkLcR9EKAI9-UxVGOswcz8.jpg?auto=webp&s=2c3fa02df0cd27c2597398c0718d52630a9f589d', 'width': 1200}, 'variants': {}}]} |
I have enough of this 24GB nonsense. | 1 | Ok, the 3090 is officially the dumbo for chumps. 70B is the bare minimum to process data. On 24 GB, with all the RAM/GPU splits and quantization, responses come slow as molasses on holiday at a maple syrup stand in Ottawa. I want to prepare data for my new extraordinary models (in my own noggin), but this is like working with a toy choo-choo train and pretending I'm eating bento on the shinkansen.
Not that I have money for this. I don't. But how do you build a 2x3090 computer, in theory? Do you have a good build list? A good example?
I can see it needs a MOBO that can take two video cards (mine can't), a PSU that doesn't burst into flames (I have 800W, which I assume wouldn't cut it), and a case that can actually hold all those big bricks... Do I need to link those 3090s with NVLink (whatever that is), or just slap them onto the MOBO and be done?
24GB in 2024 is nothing. Nothing! You're just taunting all those big models in front of my eyes, and I can train 13B at most. | 2024-01-13T00:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/195b725/i_have_enough_of_this_24gb_nonsense/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195b725 | false | null | t3_195b725 | /r/LocalLLaMA/comments/195b725/i_have_enough_of_this_24gb_nonsense/ | false | false | self | 1 | null |
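On the PSU question, a rough back-of-the-envelope budget (nominal TDPs, not measured transient draw):

```python
# Rough power budget for a dual-3090 build (nominal TDPs, not spike loads)
gpu_tdp = 350        # W per RTX 3090
cpu_and_rest = 250   # W for CPU, board, drives, fans (generous estimate)

load = 2 * gpu_tdp + cpu_and_rest  # 950 W sustained
psu = load * 1.3                   # ~30% headroom for transients
print(f"sustained ~{load} W -> look for a ~{int(round(psu, -2))} W PSU")
```

NVLink is optional for inference; most backends simply split layers across the two cards over PCIe.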
RoSA - LoRA but better | 90 | 2024-01-13T00:33:54 | https://twitter.com/marktenenholtz/status/1745875618732925086?t=OYjMDWpmpkkoBWU5xWFF7w&s=19 | Eastwindy123 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 195ap9t | false | {'oembed': {'author_name': 'Mark Tenenholtz', 'author_url': 'https://twitter.com/marktenenholtz', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">RoSA is a low-rank adapter method that claims to outperform LoRA.<br><br>It trains a standard LoRA adapter (usually a bit smaller) and also trains an additional sparsified adapter.<br><br>The sparse adapter comes from matmul'ing the LoRA adapters and dropping weights below a threshold. <a href="https://t.co/beOsNewjZd">pic.twitter.com/beOsNewjZd</a></p>— Mark Tenenholtz (@marktenenholtz) <a href="https://twitter.com/marktenenholtz/status/1745875618732925086?ref_src=twsrc%5Etfw">January 12, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/marktenenholtz/status/1745875618732925086', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_195ap9t | /r/LocalLLaMA/comments/195ap9t/rosa_lora_but_better/ | false | false | 90 | {'enabled': False, 'images': [{'id': '-CfcN30pe4heJzcPzbKjrgJsFkBxzCGJ3jvXpts3XAQ', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/MjMrLk716MUjNq9QAttpf9B2C803C4cCSBtdN0gC_2M.jpg?width=108&crop=smart&auto=webp&s=de159546e8c7413af61cd622e901d24320ad2675', 'width': 108}], 'source': {'height': 90, 'url': 'https://external-preview.redd.it/MjMrLk716MUjNq9QAttpf9B2C803C4cCSBtdN0gC_2M.jpg?auto=webp&s=13a156a1694d07b17f4f93d2d63a340ce110caa1', 'width': 140}, 'variants': {}}]} | ||
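Per the linked thread, RoSA trains a standard low-rank (LoRA-style) adapter plus an additional sparse adapter on top of frozen base weights. A toy PyTorch sketch of the forward pass — note the random mask here stands in for RoSA's actual magnitude-based mask selection, so this is an illustration of the structure, not the paper's method:

```python
import torch
import torch.nn as nn

class RoSALinear(nn.Module):
    """Sketch of a low-rank + sparse adapter on a frozen linear layer."""
    def __init__(self, base: nn.Linear, rank: int = 8, density: float = 0.01):
        super().__init__()
        self.base = base.requires_grad_(False)   # freeze base weights
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # LoRA down
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # LoRA up
        # In RoSA proper, the mask is derived from the low-rank product's
        # largest-magnitude entries; a fixed random mask stands in here.
        self.register_buffer("mask", torch.rand(out_f, in_f) < density)
        self.S = nn.Parameter(torch.zeros(out_f, in_f))        # sparse adapter

    def forward(self, x):
        delta = self.B @ self.A + self.S * self.mask
        return self.base(x) + x @ delta.T

layer = RoSALinear(nn.Linear(512, 512))
print(layer(torch.randn(2, 512)).shape)  # torch.Size([2, 512])
```

Only `A`, `B`, and the masked entries of `S` receive gradients, so the trainable footprint stays close to a plain LoRA while the sparse term captures updates a low-rank factorization cannot.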
Using LLaMA with RAG to ask more meta-document questions | 1 | [removed] | 2024-01-13T00:15:22 | https://www.reddit.com/r/LocalLLaMA/comments/195aa7n/using_llama_with_rag_to_ask_more_metadocument/ | jupiterprosperitytm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 195aa7n | false | null | t3_195aa7n | /r/LocalLLaMA/comments/195aa7n/using_llama_with_rag_to_ask_more_metadocument/ | false | false | self | 1 | null |
Fine tuning a large language model | 1 |
I have a fine-tuned LLM for diagnosing mental health issues and helping the user with cognitive behavioral therapy.
The model is finetuned on single Q&A data like this:
{'Person': "I've been feeling so sad and overwhelmed lately. Work has become such a massive source of stress for me.",
'Psychologist': "Hey there, I'm here to listen and support you. It sounds like work has been challenging lately. Can you tell me more about what's been going on?"}
where the value corresponding to the ‘Person’ key is the user input, and the ‘Psychologist’ value is the therapist's answer (i.e., the LLM output).
Then, the finetuned model is put into a conversation chain to exploit a memory buffer, where the prompt has the following syntax:
“””
The following is a conversation between a human and an AI. The AI acts exactly like a therapist. Therapy is based on cognitive behavioural therapy. You must avoid any kind of harm and bad advice. You have to listen to the human and make them comfortable. You must be empathetic, and you must not provide any kind of interpretation if it is not requested or if you are not sure about what you are saying. You must help the person over time to put prosocial behaviour into practice. Ask questions and show genuine interest in the conversation. Maintain detachment.
Current conversation:
{history}
Person: {input}
AI:
“””
Moreover, I have a large set of relevant psychology books and articles that I can use as part of the training for the LLM.
Therefore, I have several doubts:
1. Is it better to fine-tune the model on single Q&As between patient and therapist or on full conversations?
2. To exploit all the information contained in the aforementioned books and articles, how should I proceed with model training? Can I do an intermediate finetuning on the psychology books and then finetune on Q&A data, or should I retrain the model from scratch with the books included in the original training tokens?
3. Is the description of the conversation chain something crucial for the AI role or can it be skipped? | 2024-01-12T22:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/19582cm/fine_tuning_a_large_language_model/ | Repulsive_Ad_2230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19582cm | false | null | t3_19582cm | /r/LocalLLaMA/comments/19582cm/fine_tuning_a_large_language_model/ | false | false | self | 1 | null |
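For question 3, the conversation-chain wrapper described in the post maps directly onto a buffer-memory chain. A minimal sketch using LangChain, assuming a hypothetical fine-tuned checkpoint id (`your-org/therapy-llm`) and an abridged system prompt:

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain_community.llms import HuggingFacePipeline

# Abridged version of the post's prompt; {history} and {input} are required
TEMPLATE = """The following is a conversation between a human and an AI \
therapist based on cognitive behavioural therapy. Be empathetic, ask \
questions, avoid harm, and maintain detachment.

Current conversation:
{history}
Person: {input}
AI:"""

llm = HuggingFacePipeline.from_model_id(
    model_id="your-org/therapy-llm",  # hypothetical fine-tuned checkpoint
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 256},
)

chain = ConversationChain(
    llm=llm,
    memory=ConversationBufferMemory(human_prefix="Person", ai_prefix="AI"),
    prompt=PromptTemplate(input_variables=["history", "input"], template=TEMPLATE),
)

print(chain.predict(input="I've been feeling overwhelmed at work lately."))
```

If the model was finetuned on single Q&A pairs rather than full conversations, the system description in the template does real work: it is what tells the model to treat the accumulated `{history}` as one ongoing session, so it should not be skipped.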
Buying/Hiring A Dating App LLM | 1 | [removed] | 2024-01-12T22:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1957cw2/buyinghiring_a_dating_app_llm/ | Networkfrog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1957cw2 | false | null | t3_1957cw2 | /r/LocalLLaMA/comments/1957cw2/buyinghiring_a_dating_app_llm/ | false | false | nsfw | 1 | null |