title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Custom presentation of objects and custom layouts in chat interface | 3 | I've seen a lot of discussion around custom chat interfaces, but haven't seen much about integrating a custom presentation for certain types of data without building one from scratch.
For example, a google search for "best laptop" will present products in their own special component that is optimized for it, rather than trying to fit it all into a basic table layout or using bulleted lists.
[Google Generative AI for "Best Laptop"](https://preview.redd.it/lzoqpmy8smcc1.png?width=1412&format=png&auto=webp&s=f79f35fc02f9f5145b003d12660fa0c056ae4d9b)
I feel like I've seen two types of projects:
1. a general UI toolbox for building your own fully custom chat UI
2. a general-purpose UI for interacting with various models, including prompt templates, uploading docs, etc. (ollama-webui)
What I haven't been able to find is the 2nd type that includes 90% of what you'd need, where you can then extend the chat UI only for special cases. Ideally you'd be able to register some kind of hook, similar to a tool, that is leveraged to render specific content types. Then you point it at a React (or whatever) component that handles rendering the special content type (see the registry sketch after the list below).
Possible custom presentations you might want:
* Product
* Product List (similar to the google example)
* Product Comparison
* Recipe
* Flight
* Calendar Event
* Contact
* Stock | 2024-01-15T16:46:26 | https://www.reddit.com/r/LocalLLaMA/comments/197da8h/custom_presentation_of_objects_and_custom_layouts/ | rothnic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197da8h | false | null | t3_197da8h | /r/LocalLLaMA/comments/197da8h/custom_presentation_of_objects_and_custom_layouts/ | false | false | 3 | null | |
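As a sketch of the hook idea this post describes: a registry that maps structured content types to renderer callbacks, with plain text as the fallback. This is a hypothetical illustration in Python (a real chat UI would resolve to React components instead); none of these names come from an existing library.

```python
from typing import Callable, Dict

# Hypothetical registry mapping content types to renderer callbacks.
RENDERERS: Dict[str, Callable[[dict], str]] = {}

def register_renderer(content_type: str):
    """Register a custom renderer for a structured content type."""
    def wrap(fn: Callable[[dict], str]) -> Callable[[dict], str]:
        RENDERERS[content_type] = fn
        return fn
    return wrap

@register_renderer("product_list")
def render_product_list(payload: dict) -> str:
    # In a real UI this would resolve to a React component; here we
    # just emit placeholder markup for a product slider.
    items = "".join(f"<li>{p['name']}: {p['price']}</li>" for p in payload["items"])
    return f"<ul class='product-slider'>{items}</ul>"

def render_message(content_type: str, payload: dict) -> str:
    # Fall back to plain text whenever no custom renderer is registered.
    renderer = RENDERERS.get(content_type)
    return renderer(payload) if renderer else str(payload)
```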
Quick Q - On GGUF models it lists max RAM required - does this mean VRAM or can it be VRAM + system RAM combined? | 1 | [removed] | 2024-01-15T15:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/197c0za/quick_q_on_gguf_models_it_lists_max_ram_required/ | Interesting-Light-13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197c0za | false | null | t3_197c0za | /r/LocalLLaMA/comments/197c0za/quick_q_on_gguf_models_it_lists_max_ram_required/ | false | false | self | 1 | null |
Is there a way to run a model bigger than my GPU memory? | 1 | [removed] | 2024-01-15T15:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/197b0hj/theres_an_way_to_run_model_bigger_than_my_gpu/ | Massive-Signature849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197b0hj | false | null | t3_197b0hj | /r/LocalLLaMA/comments/197b0hj/theres_an_way_to_run_model_bigger_than_my_gpu/ | false | false | self | 1 |
How to properly run Mistral-7B-OpenOrca in llama.cpp? | 1 | Hello,
I'm trying to run [mistral-7b-openorca.Q5\_K\_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GGUF/blob/main/mistral-7b-openorca.Q5_K_M.gguf) (TheBloke's 5-bit quantized model, to be run on CPU), but I don't really know which configuration I should use for llama.cpp, especially since I haven't managed to get the \`<|im\_start|>\` and \`<|im\_end|>\` prompt tokens to work at all.
Does anyone have a llama.cpp configuration for one of these models?
Thanks! | 2024-01-15T15:04:02 | https://www.reddit.com/r/LocalLLaMA/comments/197aspx/how_to_properly_run_mistral7bopenorca_in_llamacpp/ | Kyonftw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197aspx | false | null | t3_197aspx | /r/LocalLLaMA/comments/197aspx/how_to_properly_run_mistral7bopenorca_in_llamacpp/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BSrU-JpsBBiJvypZmVjoCeU8uQG2krLp_zott_fCTB8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NiT2wfTf1bFa_mt4_c728eEHuKagOULgxiwbiy-iPD4.jpg?width=108&crop=smart&auto=webp&s=722996b223da59fbd1208ee4c97a60d779a335d9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NiT2wfTf1bFa_mt4_c728eEHuKagOULgxiwbiy-iPD4.jpg?width=216&crop=smart&auto=webp&s=e04dd7937bd269a1b9cef026b27aacf7de623c41', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NiT2wfTf1bFa_mt4_c728eEHuKagOULgxiwbiy-iPD4.jpg?width=320&crop=smart&auto=webp&s=d0573765d5f4a1d80dda00f90e061c765f19555d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NiT2wfTf1bFa_mt4_c728eEHuKagOULgxiwbiy-iPD4.jpg?width=640&crop=smart&auto=webp&s=f1b1d2f57eb146b85ccde9e237c9c18299db9c72', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NiT2wfTf1bFa_mt4_c728eEHuKagOULgxiwbiy-iPD4.jpg?width=960&crop=smart&auto=webp&s=76190ccd783dba7fdd212309c9b43a403f48e59b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NiT2wfTf1bFa_mt4_c728eEHuKagOULgxiwbiy-iPD4.jpg?width=1080&crop=smart&auto=webp&s=e5af628124d605ea38db297999723276d4c77f74', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NiT2wfTf1bFa_mt4_c728eEHuKagOULgxiwbiy-iPD4.jpg?auto=webp&s=02353d8318f53105d350dd9dc8738b2e818ee624', 'width': 1200}, 'variants': {}}]} |
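For reference, Mistral-7B-OpenOrca uses the ChatML prompt format per TheBloke's model card. A minimal llama-cpp-python sketch, assuming the quantized file sits in the working directory:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="mistral-7b-openorca.Q5_K_M.gguf", n_ctx=4096)

# ChatML: each turn is wrapped in <|im_start|>{role} ... <|im_end|>,
# and the prompt ends with an open assistant turn for the model to fill.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```

Stopping on `<|im_end|>` keeps the model from running on into a new turn.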
Prompt formatting - why? | 13 | Can anyone tell me why prompt formats like the alpaca format exist?
Doesn't it make more sense to train an LLM to receive a Python dictionary with more structure to it, even though data used to train the LLM is unstructured?
If nothing else, wouldn't it make understanding the prompt and location of context easier?
This alpaca example is so open-ended; it doesn't specifically differentiate between the actual instruction and any context.
If you use the Transformers chat template to unroll a messages dict into a string and insert it as the prompt, there is a little bit more structure, but not a lot.
I figure there's a reason and I hope someone can explain it to me
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
Versus:
```
prompt = {
    "instruction": "Write a poem about a cat",
    "context": "",
    "chat_history": [
        {
            "timestamp": "2024-01-15T12:00:00Z",
            "username": "Bard",
            "entry": "Welcome! How can I help you today?"
        },
        {
            "timestamp": "2024-01-15T12:01:00Z",
            "username": "User",
            "entry": "I'd like you to write a poem about a cat."
        },
    ]
}
``` | 2024-01-15T15:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/197aqah/prompt_formatting_why/ | BriannaBromell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197aqah | false | null | t3_197aqah | /r/LocalLLaMA/comments/197aqah/prompt_formatting_why/ | false | false | self | 13 | null |
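A partial answer to the structured-input idea exists already: tokenizers ship a chat template that unrolls exactly this kind of role/content structure into a flat string. A minimal sketch with Transformers, assuming a Mistral-style instruct model:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [{"role": "user", "content": "Write a poem about a cat"}]

# apply_chat_template renders the structured messages into the exact
# flat prompt string this model saw during training, e.g. "<s>[INST] ... [/INST]".
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
```

In the end the model only ever consumes a flat token stream, so any structure has to be serialized into text anyway; the various prompt formats are just the serialization each model happened to be trained on.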
LLAMA-2 sucks, recommend me the good LLAMA-1 models. | 1 | LLAMA-2 is full of GPTslop, having flowery prose and "happily ever afters". It's great, but the extreme floweriness is too much for me. | 2024-01-15T14:46:19 | https://www.reddit.com/r/LocalLLaMA/comments/197ae14/llama2_sucks_reccomend_me_the_good_llama1_models/ | International-Try467 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197ae14 | false | null | t3_197ae14 | /r/LocalLLaMA/comments/197ae14/llama2_sucks_reccomend_me_the_good_llama1_models/ | false | false | self | 1 | null
Fine-tuning DeepSeek-MoE-16B with XTuner | 12 | 20GB GPU memory is enough for QLoRA fine-tuning, and 4x80GB for full-parameter fine-tuning.
Quick Start
git clone https://github.com/InternLM/xtuner.git
cd xtuner
pip install -e '.[deepspeed]'
xtuner train deepseek_moe_16b_chat_qlora_oasst1_e3 --deepspeed deepspeed_zero2
[https://github.com/InternLM/xtuner](https://github.com/InternLM/xtuner) | 2024-01-15T14:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/197acx2/finetuning_deepseekmoe16b_with_xtuner/ | LZHgrla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197acx2 | false | null | t3_197acx2 | /r/LocalLLaMA/comments/197acx2/finetuning_deepseekmoe16b_with_xtuner/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'Igx3LkCajYmTX8MDc-TguV6fXT7_PpjEQpr63xjdjk8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Vumphxwcmut4zHRc2mkof0RN445RIHzR0bgiHD-RuIM.jpg?width=108&crop=smart&auto=webp&s=32c06df6bcdb7f198074f285e925f7cf60dc53ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Vumphxwcmut4zHRc2mkof0RN445RIHzR0bgiHD-RuIM.jpg?width=216&crop=smart&auto=webp&s=308befdd240fcede4707e6c1b6258aca91db4a7a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Vumphxwcmut4zHRc2mkof0RN445RIHzR0bgiHD-RuIM.jpg?width=320&crop=smart&auto=webp&s=5be3848a9246aebfa883afccc6ca242d22aff1d8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Vumphxwcmut4zHRc2mkof0RN445RIHzR0bgiHD-RuIM.jpg?width=640&crop=smart&auto=webp&s=db6e69092818beff50f2d0392f10c831083edd06', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Vumphxwcmut4zHRc2mkof0RN445RIHzR0bgiHD-RuIM.jpg?width=960&crop=smart&auto=webp&s=27f6070d12f47493f1a9005de5ce663caee2f404', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Vumphxwcmut4zHRc2mkof0RN445RIHzR0bgiHD-RuIM.jpg?width=1080&crop=smart&auto=webp&s=7a6ba70b352ddd6544501b7e1c1ca4689f3c0ae1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Vumphxwcmut4zHRc2mkof0RN445RIHzR0bgiHD-RuIM.jpg?auto=webp&s=65ef3d590f5952ffcaee5255ab564a146eba4f85', 'width': 1200}, 'variants': {}}]} |
Small model inquiry | 1 | [removed] | 2024-01-15T14:34:17 | https://www.reddit.com/r/LocalLLaMA/comments/197a4i2/small_model_inquiry/ | Sl33py_4est | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 197a4i2 | false | null | t3_197a4i2 | /r/LocalLLaMA/comments/197a4i2/small_model_inquiry/ | false | false | self | 1 | null |
python errors when trying to load model "TheBloke_Llama-2-7B-Chat-GGML" | 1 | [removed] | 2024-01-15T14:01:47 | https://www.reddit.com/r/LocalLLaMA/comments/1979eni/python_errors_when_trying_to_load_model_thebloke/ | Individual_Row_9419 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1979eni | false | null | t3_1979eni | /r/LocalLLaMA/comments/1979eni/python_errors_when_trying_to_load_model_thebloke/ | false | false | 1 | null | |
Possible to add Dutch to Mistral 7B? | 1 | Right now it looks like Mistral 7B is best for English inputs and outputs.
For my understanding: is it possible to fine-tune it for another language like Dutch? If so, what would it take? What is needed for that? | 2024-01-15T13:24:21 | https://www.reddit.com/r/LocalLLaMA/comments/1978nbv/possible_to_add_dutch_to_mistral_7b/ | jsmits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1978nbv | false | null | t3_1978nbv | /r/LocalLLaMA/comments/1978nbv/possible_to_add_dutch_to_mistral_7b/ | false | false | self | 1 | null |
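Yes, in principle: continued pretraining or instruction tuning on Dutch text with LoRA adapters is the usual low-cost route. A minimal sketch with transformers + peft; the base model name is real, but the hyperparameters and the Dutch dataset are placeholder assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Low-rank adapters keep the trainable parameter count tiny, so a single
# consumer GPU is often enough (especially combined with 4-bit loading).
cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, cfg)
model.print_trainable_parameters()
# Next: run a standard Trainer/SFT loop over a Dutch instruction dataset
# (e.g. a translated Alpaca-style set).
```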
Best (and smallest) model for concise answers in Spanish RAG? | 7 | What would you say is the best and smallest model I could use for a Spanish RAG app? Context window should be around 4k, but output can be quite low, as answers should be as concise as possible, so maybe 512 is enough.
I just want it to output the answer to the query as concisely as possible. No greetings to the user, no wishes for good luck, no "according to the context provided, the answer is X."
Say, if I ask "Who is the president of Fizbuzz Inc.?" I want the answer in the format "The president of Fizbuzz Inc. is Mr. Fiz Buzz." Nothing less, nothing more.
I've had the best success so far with Mixtral8x7b (to no one's surprise), tuning down temperature, explicitly directing for no greetings and requesting every answer to start with "The answer is...", but it's still annoyingly verbose, always including more information or opinions than requested.
It's also not a small LLM by any measure, but also smaller models tend to perform poorly in Spanish (I'm looking at you TinyLlama). Any recommendations? Thanks! | 2024-01-15T13:24:12 | https://www.reddit.com/r/LocalLLaMA/comments/1978n7x/best_and_smallest_model_for_concise_answers_in/ | everydayislikefriday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1978n7x | false | null | t3_1978n7x | /r/LocalLLaMA/comments/1978n7x/best_and_smallest_model_for_concise_answers_in/ | false | false | self | 7 | null |
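One pattern that sometimes helps beyond lowering temperature: combine a hard-constraint system prompt with a one-shot example of the exact answer shape. A sketch (the wording is just an illustration, not a tested prompt):

```python
SYSTEM_PROMPT = (
    "Eres un asistente que responde usando solo el contexto proporcionado. "
    "Responde en una sola frase, sin saludos, sin opiniones y sin "
    "información adicional. Si la respuesta no está en el contexto, "
    "responde únicamente: 'No lo sé.'"
)

def build_prompt(context: str, question: str) -> str:
    # A one-shot example of the exact answer shape tends to constrain
    # smaller models more effectively than instructions alone.
    return (
        f"{SYSTEM_PROMPT}\n\nContexto:\n{context}\n\n"
        "Pregunta: ¿Quién es el presidente de Fizbuzz Inc.?\n"
        "Respuesta: El presidente de Fizbuzz Inc. es el Sr. Fiz Buzz.\n\n"
        f"Pregunta: {question}\nRespuesta:"
    )
```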
Compliance check on change requests using a local LLM feasible? | 1 | [removed] | 2024-01-15T13:20:51 | https://www.reddit.com/r/LocalLLaMA/comments/1978kq8/compliance_check_on_change_requests_using_a_local/ | w_60 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1978kq8 | false | null | t3_1978kq8 | /r/LocalLLaMA/comments/1978kq8/compliance_check_on_change_requests_using_a_local/ | false | false | self | 1 | null |
Model with the least amount of GPT in its training? | 1 | I'm looking for a model that could help me "humanize" GPT written text, like essays report etc. I have tried using dynamic temperature with mixtral\_11bx2\_moe\_19b which is a Yi finetune from what i can tell and even if it's paraphrasing is detectable in most AI detectors, it still gets traced in Winston AI and turnitin. I'm doing the text writing as a side hustle for a freelancing essay writing website, and even if the quality of the text is great, it still gets blocked as AI generated content. Can someone give me some advice on different models with the least GPT training and maybe different settings? My salary is dependent on it and i dont think im doing something imoral as the quality of the essays is great and i'm getting very positive feedback from customers! Its the website's automated turnitin check that blocks the text! | 2024-01-15T13:16:16 | https://www.reddit.com/r/LocalLLaMA/comments/1978hcl/model_with_the_least_amount_of_gpt_in_its_training/ | Exotic-Investment110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1978hcl | false | null | t3_1978hcl | /r/LocalLLaMA/comments/1978hcl/model_with_the_least_amount_of_gpt_in_its_training/ | false | false | self | 1 | null |
Mixtral & AutoGPT? | 2 | Was anyone ever successful in local AutoGPT connection? I guess Mixtral 8x7B is most powerful local model which can (potentially) work with AutoGPT. So far I was able to connect them via LiteLLM, but it can't parse the response. Any suggestions please? Or is it deadend? | 2024-01-15T13:14:20 | https://www.reddit.com/r/LocalLLaMA/comments/1978fwx/mixtral_autogpt/ | Extender7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1978fwx | false | null | t3_1978fwx | /r/LocalLLaMA/comments/1978fwx/mixtral_autogpt/ | false | false | self | 2 | null |
Any way to speed up text generation a bit? | 1 | I'm new to AI; I just installed ollama and am running the dolphin-mixtral model. My specs are an i3 12100F, 64GB RAM and an RTX 3060 12GB. The generations seem somewhat slow, definitely not as fast as ChatGPT. Is there a way to optimize output speed, or is this the best I can get from these specs? | 2024-01-15T12:26:03 | https://www.reddit.com/r/LocalLLaMA/comments/1977kf1/any_way_to_speed_up_text_generation_a_bit/ | C_umputer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1977kf1 | false | null | t3_1977kf1 | /r/LocalLLaMA/comments/1977kf1/any_way_to_speed_up_text_generation_a_bit/ | false | false | self | 1 | null
Python errors when trying to start up "TheBloke_Llama-2-7B-Chat-GGML" model | 1 | [removed] | 2024-01-15T11:48:27 | https://www.reddit.com/r/LocalLLaMA/comments/1976wx7/python_errors_when_trying_to_start_up_thebloke/ | Individual_Row_9419 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1976wx7 | false | null | t3_1976wx7 | /r/LocalLLaMA/comments/1976wx7/python_errors_when_trying_to_start_up_thebloke/ | false | false | 1 | null | |
Need help: want to attempt live translation (speech to speech) from Italian to English | 1 | [removed] | 2024-01-15T11:32:18 | https://www.reddit.com/r/LocalLLaMA/comments/1976not/needhelp_want_attempt_to_live_translation_speech/ | Independent-Bill-770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1976not | false | null | t3_1976not | /r/LocalLLaMA/comments/1976not/needhelp_want_attempt_to_live_translation_speech/ | false | false | self | 1 |
Model loading | 1 | Hi, I am new to LLMs. Can someone explain to me what happens when I run my LLM, in terms of offloading "data" to RAM/VRAM? I did my research but I haven't got past loading parameters into RAM... | 2024-01-15T11:12:35 | https://www.reddit.com/r/LocalLLaMA/comments/1976c1n/model_loading/ | reddiamond69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1976c1n | false | null | t3_1976c1n | /r/LocalLLaMA/comments/1976c1n/model_loading/ | false | false | self | 1 | null
Is Mixtral based on GPT3? | 1 | Is it hallucinating, or did [Mistral.ai](https://Mistral.ai) use GPT3 as the base?
It's very hard to find any info on that because Google just floods me with search results about "Mixtral beating ChatGPT".
https://preview.redd.it/105q22bt3lcc1.png?width=1592&format=png&auto=webp&s=9c4b15d70345e7a58119ffef30c4d6116b2bc3e2 | 2024-01-15T10:53:56 | https://www.reddit.com/r/LocalLLaMA/comments/1976166/is_mixtral_based_on_gpt3/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1976166 | false | null | t3_1976166 | /r/LocalLLaMA/comments/1976166/is_mixtral_based_on_gpt3/ | false | false | default | 1 | null |
Best Value For Money Option For Local LLM? What to look for in GPU? Bandwidth? Core Speed? | 1 | I'm looking to build a budget rig to run some local LLMs (small or average ones, 33B max - for now).
When I say budget, I don't mean extremely cheap; I mean the best value for money.
I know 24GB VRAM is enough (I keep renting 3090s and 4090s) - but I wanted to ask if 2x GPUs with 12GB would do a similar job to a 24GB one (now that we can split the model onto multiple GPUs)?
What should I really be looking for in a GPU in terms of specs for the money?
CUDA cores?
Clock speed?
Memory bus width?
Memory bandwidth?
Memory clock speed?
Assuming I'm looking at new(er) gens of GPUs (30 and 40 series) and that TDP is not an issue (I already have some solid PSUs).
Would it also be an option to run multiple 8GB cards on a mining rig mobo (for like 24 or 32GB VRAM)? Would the PCIe lanes / speed be a limiting factor?
I know most people would run a 3090 right now (or maybe 2x 3090), but since most of the used ones are mined to death, I'm afraid of investing in one right now (although it's my first option). I'm looking for alternatives, like the new Super cards or older 3060-3080s. I've also been looking into older P40s, K80s and such, but they seem slow, and not really useful for anything other than LLMs (while a consumer-grade GPU is also good at gaming). | 2024-01-15T10:48:20 | https://www.reddit.com/r/LocalLLaMA/comments/1975y8p/best_value_for_money_option_for_local_llm_what_to/ | yupignome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1975y8p | false | null | t3_1975y8p | /r/LocalLLaMA/comments/1975y8p/best_value_for_money_option_for_local_llm_what_to/ | false | false | self | 1 | null
Building a State-of-the-Art Video Summarizer: Part 1 - Semantic Chunking and Building a Chunker | 1 | Over the last four months, I've been developing a state-of-the-art video summarizer using Large Language Models (LLMs). I can't open-source the model (it's for clients), but I will share my learnings.
This is the first part of a six-part series where I'll share my process and insights, aiming to guide others interested in this field.
**Understanding Semantic Chunking: Why It's Key**
Before delving into how LLMs summarize, it's important to understand semantic chunking. This step is often overlooked, but it's crucial. Everybody skips it: they take one big blob of text and ask the LLM to summarize. Because LLM context lengths keep increasing, many think this is a good approach. I strongly recommend against this.
Without proper chunking, feeding a large text to an LLM usually leads to subpar summaries. Semantic chunking means breaking down content into smaller parts based on the main ideas, enhancing content navigation, filtering out irrelevant sections, and grouping related parts for a cohesive summary.
Let's take a practical example.
**Practical Example: Podcast Summarization**
Consider a podcast with various elements like an introduction, discussion, ads, and many main topics being discussed. Semantic chunking here helps with three things:
1. **Breaking into Chapters:** Dividing the podcast into sections for easy navigation.
2. **Filtering Out Ads or Irrelevant Portions:** Once we have the chunks, it helps in identifying and removing ad sections from the final summary. Also, sometimes the discussion might drift somewhere totally irrelevant to the topic. With chunking, we can later decide which chunks to keep and which to throw away based on heuristics.
3. **Grouping for Summary:** Clustering all segments discussing a specific topic, ensuring a comprehensive summary. In a health podcast episode, they might talk about sleep in the first 5 minutes, in the middle, and at the end. Chunking gives you a way to identify related sections, tie them together, and summarize them together. This makes a huge difference in quality.
I will talk about how to do (2) and (3) in future parts. But for now I want to emphasize: start with semantic chunking. It's important!
**Building a Semantic Chunker for Amateurs**
Building a semantic chunker is feasible even for those new to AI. I am an amateur. Maybe those with PhDs can come up with a really awesome technique to do this with math and stuff, but there is a simple (probably not the most computationally optimal) way to get state-of-the-art chunking models for your use case.
Here's how to do it. Simple: just pick an LLM and train it specifically to be ONLY a semantic chunking engine. Here are the steps I recommend:
1. **Define Your Goal:** Decide what your chunker should achieve. For instance, chunking for podcasts and videos would differ from books. I highly recommend building chunking LLMs for your use case.
2. **Collect High-Quality Data:** Gather data. Although not kosher, there is plenty of public data you can scrape initially. Say I want to build a podcast splitter: scrape YouTube video data, with transcripts as the input and human-annotated chapter information as the output, and use these pairs as training data for an LLM.
3. **Data Engineering:** Once you have this data, the next step is to filter and clean it. This could mean selecting chapters of a specific length - say, averaging between 4 and 7 minutes. This helps in standardizing the training data for your model. Tailor it to how you want the final chunker to behave. Data is everything! This is the most important but most often overlooked step (a minimal filtering sketch follows this list).
4. **Train Your LLM:** Use the refined data to train an LLM. Pick the right model size; there are some nuances here.
5. **Iterative Improvement:** Continuously improve the model based on its performance, enhancing its chunking accuracy.
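The filtering sketch referenced in step 3; a minimal example assuming a hypothetical record layout of episodes with timestamped chapters:

```python
# Minimal sketch of step 3 (data engineering): keep only chapters whose
# duration falls in a target band so training examples are uniform.
# The record layout (transcript/chapters/start_s/end_s) is hypothetical.
def filter_episodes(episodes, min_s=4 * 60, max_s=7 * 60, min_chapters=3):
    kept = []
    for ep in episodes:
        chapters = [
            c for c in ep["chapters"]
            if min_s <= (c["end_s"] - c["start_s"]) <= max_s
        ]
        # Episodes with too few surviving chapters make poor
        # (transcript -> chapter list) training pairs; drop them.
        if len(chapters) >= min_chapters:
            kept.append({"transcript": ep["transcript"], "chapters": chapters})
    return kept
```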
By following these steps, you can create a basic yet functional semantic chunker for your use case. I think I might have the SoTA for this use case. I initially skipped this and went directly to summarization. But when I introduced it into my pipeline, man, the quality was good. More importantly, the people reading said the summaries were super USEFUL!
If there is interest, I'll delve into other aspects of video summarization later. I had lots of fun in the last 4 months with this project; so happy to share learnings :)
​ | 2024-01-15T10:45:52 | https://www.reddit.com/r/LocalLLaMA/comments/1975wza/building_a_stateoftheart_video_summarizer_part_1/ | phoneixAdi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1975wza | false | null | t3_1975wza | /r/LocalLLaMA/comments/1975wza/building_a_stateoftheart_video_summarizer_part_1/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'N7OVwHB8guLsgDy_fqqVkjuLvVnhm9RcDTvOZ23Qd6s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/3tWbnoRvV-hqwjjSr4TZads428MzgmBave3daB9abjs.jpg?width=108&crop=smart&auto=webp&s=61e0bb401432d762fc29991355db5f4290f92ad2', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/3tWbnoRvV-hqwjjSr4TZads428MzgmBave3daB9abjs.jpg?auto=webp&s=27703279fc4d1229464ebacd7a4959a211dba7dc', 'width': 200}, 'variants': {}}]} |
Very very fast inference | 1 | [removed] | 2024-01-15T10:13:42 | https://www.reddit.com/r/LocalLLaMA/comments/1975fsy/very_very_fast_inference/ | Efficient_Rise_8914 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1975fsy | false | null | t3_1975fsy | /r/LocalLLaMA/comments/1975fsy/very_very_fast_inference/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MyCg_o18bpLOQZPlf9qJzYxuSGNoLgr9CoghRolb0uI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YYiqQKwe7D_mg1PKzLewNlCIGJ7Wty24-PURa2-xgts.jpg?width=108&crop=smart&auto=webp&s=1596348342e9308d2caa9be8295df0e069f044e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YYiqQKwe7D_mg1PKzLewNlCIGJ7Wty24-PURa2-xgts.jpg?width=216&crop=smart&auto=webp&s=da0b2f78ecc66d5355e245a3b4e2df09141a6946', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YYiqQKwe7D_mg1PKzLewNlCIGJ7Wty24-PURa2-xgts.jpg?width=320&crop=smart&auto=webp&s=ab5e4209e57e8dfa68799115ef65168d07664d97', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YYiqQKwe7D_mg1PKzLewNlCIGJ7Wty24-PURa2-xgts.jpg?width=640&crop=smart&auto=webp&s=c3fd3e0db82c1db23ecc8310d12d9d09603af444', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YYiqQKwe7D_mg1PKzLewNlCIGJ7Wty24-PURa2-xgts.jpg?width=960&crop=smart&auto=webp&s=b129eaa489e0c8d00bf32abab6a0b25f61b65da9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YYiqQKwe7D_mg1PKzLewNlCIGJ7Wty24-PURa2-xgts.jpg?width=1080&crop=smart&auto=webp&s=0d99162ca258433dcc72d57671dea4717ae9e3c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YYiqQKwe7D_mg1PKzLewNlCIGJ7Wty24-PURa2-xgts.jpg?auto=webp&s=caf84be2efb2acc37307ccbd0e762d3cd60a6cca', 'width': 1200}, 'variants': {}}]} |
Mixtral breaking down/stopping mid sentence. | 1 | [removed] | 2024-01-15T09:40:54 | https://www.reddit.com/r/LocalLLaMA/comments/1974yba/mixtral_breaking_downstopping_mid_sentence/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1974yba | false | null | t3_1974yba | /r/LocalLLaMA/comments/1974yba/mixtral_breaking_downstopping_mid_sentence/ | false | false | self | 1 | null |
Mergekit for newbies | 1 | [removed] | 2024-01-15T09:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1974t8o/mergekit_for_newbies/ | ramzeez88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1974t8o | false | null | t3_1974t8o | /r/LocalLLaMA/comments/1974t8o/mergekit_for_newbies/ | false | false | default | 1 | null |
Llama help needed - paid | 1 | I'm currently developing a chatbot using Llama 2 with 13B parameters, which I have deployed on AWS. However, I'm encountering a challenge: the chatbot isn't delivering responses that are as accurate or intelligent as I need. My goal is for the chatbot to handle a variety of interactions effectively. These include basic conversations, farewells, and a wide range of queries related to products, orders, and other e-commerce topics. For instance, if a user asks to see products in red, the chatbot should be able to display a slider of relevant products. Moreover, it should be capable of engaging in follow-up queries, such as inquiring about size, material, etc., if a user expresses interest in a particular product.
Can you help me to achieve this? | 2024-01-15T09:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1974rze/llama_help_needed_paid/ | ahmedmobinhq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1974rze | false | null | t3_1974rze | /r/LocalLLaMA/comments/1974rze/llama_help_needed_paid/ | false | false | self | 1 | null |
Light GUI to use the LMStudio API? | 2 | Hi all, I have LMStudio running Mixtral really well on my Windows PC which is pretty beefy, but I want to use it from my laptop which is not powerful at all. I thought that I could access the LMStudio API via a light GUI client on my Windows laptop, but I can't find anything suitable that I can get to work. Has anyone one tried this or have any suggestions about a GUI that can use the LMStudio API? | 2024-01-15T09:13:41 | https://www.reddit.com/r/LocalLLaMA/comments/1974kdf/light_gui_to_use_the_lmstudio_api/ | x_flashpointy_x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1974kdf | false | null | t3_1974kdf | /r/LocalLLaMA/comments/1974kdf/light_gui_to_use_the_lmstudio_api/ | false | false | self | 2 | null |
Awesome repos and papers about LLM for robotics agents toward AGI | 1 | [removed] | 2024-01-15T09:01:21 | https://www.reddit.com/r/LocalLLaMA/comments/1974dwe/awesome_repos_and_papers_about_llm_for_robotics/ | Common-Ad-1772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1974dwe | false | null | t3_1974dwe | /r/LocalLLaMA/comments/1974dwe/awesome_repos_and_papers_about_llm_for_robotics/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mProHVuZuLuw8AmgOwOULZAY_uRglbb1sN7MY1tugYw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7KaJok8HJgZ4_YBpvyJZZ7Z4qwvHQB70IwI6gvA_Vmk.jpg?width=108&crop=smart&auto=webp&s=979dac7f3d0ec08154add4c61b5a1f766ed94566', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7KaJok8HJgZ4_YBpvyJZZ7Z4qwvHQB70IwI6gvA_Vmk.jpg?width=216&crop=smart&auto=webp&s=ec91e143d7590fe50231dff35886e434790a8395', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7KaJok8HJgZ4_YBpvyJZZ7Z4qwvHQB70IwI6gvA_Vmk.jpg?width=320&crop=smart&auto=webp&s=37498455608819822117a20cee4b418bbae57a45', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7KaJok8HJgZ4_YBpvyJZZ7Z4qwvHQB70IwI6gvA_Vmk.jpg?width=640&crop=smart&auto=webp&s=195109deee04a9cd092263cb51299cc61a4d5f19', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7KaJok8HJgZ4_YBpvyJZZ7Z4qwvHQB70IwI6gvA_Vmk.jpg?width=960&crop=smart&auto=webp&s=c60e3f30dd881e839cb2999902159221404ba48b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7KaJok8HJgZ4_YBpvyJZZ7Z4qwvHQB70IwI6gvA_Vmk.jpg?width=1080&crop=smart&auto=webp&s=8572d23e08ce0c05a91f9dcfc1603f54cfee2215', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7KaJok8HJgZ4_YBpvyJZZ7Z4qwvHQB70IwI6gvA_Vmk.jpg?auto=webp&s=d52d4106fcea27426cdaed8e4b8685cbec37db80', 'width': 1200}, 'variants': {}}]} |
API standards and software using LLMs | 5 | Let's say I want to make software or a web app that makes 'calls' to LLMs. I want the user to be able to choose to use a web API like ChatGPT, or a local model. Potentially there could be calls to different models for different things.
Do current libraries and standards make this feasible? It doesn't need to all be automatic. For example, for each model selected, the user could manually choose an appropriate prompt/interface format from a list. The other aspect is about the software to actually run the inference using the model data. What is the best 'backend' software which could receive calls to use a particular model with a prompt, load up that model, run inference, then return the result? | 2024-01-15T08:49:32 | https://www.reddit.com/r/LocalLLaMA/comments/19747on/api_standards_and_software_using_llms/ | EvokerTCG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19747on | false | null | t3_19747on | /r/LocalLLaMA/comments/19747on/api_standards_and_software_using_llms/ | false | false | self | 5 | null |
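One pragmatic answer: many local backends (llama.cpp's server, LM Studio, vLLM) expose an OpenAI-compatible HTTP API, so a single client codepath can target either a cloud model or a local one just by swapping the base URL and model name. A minimal sketch with the official openai Python client; the local URL and model identifier depend on whichever backend you run:

```python
from openai import OpenAI

# Point the same client at OpenAI's API or a local OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="local-model",  # backend-specific identifier (or e.g. "gpt-4")
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```

Prompt formatting is then the backend's job (via the model's chat template), which keeps per-model differences out of your application code.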
Speed difference not matching file size between quants? | 4 | I was playing around with different quant sizes of a 34B gguf model when I noticed the speeds aren’t matching the file sizes.
For example, the Q3_K_M variant gets an average of 19t/s across long and short outputs, while the Q6_K one get an average of 15t/s.
The Q6_K is 1.7x the file size, and AFAIK that should also be roughly proportional to the amount of compute it needs per token.
Am I wrong about this, or is it a software/hardware issue?
I am using a fine-tuned Yi-34B @ 200K GGUF with llama.cpp in Oobabooga on an M2 Ultra (76-core) with 192GB.
Btw I’m also curious about why RAM usage is capped at around 150G for AI inferencing on this device. Is it something to do with runtime or llama.cpp itself? | 2024-01-15T08:46:14 | https://www.reddit.com/r/LocalLLaMA/comments/1974614/speed_difference_not_matching_file_size_between/ | Tree-Sheep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1974614 | false | null | t3_1974614 | /r/LocalLLaMA/comments/1974614/speed_difference_not_matching_file_size_between/ | false | false | self | 4 | null |
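A back-of-the-envelope check on the file-size intuition: token generation on Apple Silicon is largely memory-bandwidth bound, so under a pure bandwidth model, speed should scale with the inverse of the bytes streamed per token:

```python
# Rough check assuming speed is inversely proportional to file size.
q3_speed = 19.0   # observed t/s for Q3_K_M
size_ratio = 1.7  # Q6_K file is ~1.7x larger
print(f"expected Q6_K under pure bandwidth scaling: {q3_speed / size_ratio:.1f} t/s")
# ~11.2 t/s expected vs 15 t/s observed: per-token costs that don't grow
# with weight size (attention over the KV cache, sampling, dispatch
# overhead) keep the real gap smaller than the file-size ratio predicts.
```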
Are people submitting "Pretrained" models to the LLM Leaderboard even though they are clearly merges to rank higher? | 47 | 2024-01-15T08:27:45 | soham1996 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1973wax | false | null | t3_1973wax | /r/LocalLLaMA/comments/1973wax/are_people_submitting_pretrained_models_to_the/ | false | false | 47 | {'enabled': True, 'images': [{'id': 'l-EzLZR9E2CG2enTKWnU_p8YZKqxwqfUaw-vzPtMXUM', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/1x024d7fdkcc1.png?width=108&crop=smart&auto=webp&s=ed2c78f350859becab00dadc4dec528a454d3899', 'width': 108}, {'height': 67, 'url': 'https://preview.redd.it/1x024d7fdkcc1.png?width=216&crop=smart&auto=webp&s=9d26a7cbfa666c8a4ce25a505374953436b58ef4', 'width': 216}, {'height': 99, 'url': 'https://preview.redd.it/1x024d7fdkcc1.png?width=320&crop=smart&auto=webp&s=1c811c9cc3a29e7e3bbec42ac0e6d6afc522754d', 'width': 320}, {'height': 198, 'url': 'https://preview.redd.it/1x024d7fdkcc1.png?width=640&crop=smart&auto=webp&s=a7d5471845053ce28387f39dfff91e19c078251b', 'width': 640}, {'height': 297, 'url': 'https://preview.redd.it/1x024d7fdkcc1.png?width=960&crop=smart&auto=webp&s=978b7b856ed740888ad7793b897ab07d85299e37', 'width': 960}, {'height': 335, 'url': 'https://preview.redd.it/1x024d7fdkcc1.png?width=1080&crop=smart&auto=webp&s=6d46ce43bbc647071bdf509034f8d9ae298fde13', 'width': 1080}], 'source': {'height': 501, 'url': 'https://preview.redd.it/1x024d7fdkcc1.png?auto=webp&s=b20a10f98f40c47edc49d30780e48d9953a78fb3', 'width': 1614}, 'variants': {}}]} | |||
Test Extraction (?) using LLM | 1 | [removed] | 2024-01-15T08:20:10 | https://www.reddit.com/r/LocalLLaMA/comments/1973s7e/test_extraction_using_llm/ | Different_Star9899 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1973s7e | false | null | t3_1973s7e | /r/LocalLLaMA/comments/1973s7e/test_extraction_using_llm/ | false | false | self | 1 | null |
Running CogVLM on Paperspace | 1 | [removed] | 2024-01-15T08:00:14 | https://www.reddit.com/r/LocalLLaMA/comments/1973gr8/running_cogvlm_on_paperspace/ | Revolutionary_Fan786 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1973gr8 | false | null | t3_1973gr8 | /r/LocalLLaMA/comments/1973gr8/running_cogvlm_on_paperspace/ | false | false | self | 1 | null |
Am I looking at this right | 1 | [removed] | 2024-01-15T07:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/1973byf/am_i_looking_at_this_right/ | Psychological-Ad5390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1973byf | false | null | t3_1973byf | /r/LocalLLaMA/comments/1973byf/am_i_looking_at_this_right/ | false | false | self | 1 | null |
Merge Large Language Models with mergekit | 1 | 2024-01-15T07:47:34 | https://huggingface.co/blog/mlabonne/merge-models | thenameless7741 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1973a4y | false | null | t3_1973a4y | /r/LocalLLaMA/comments/1973a4y/merge_large_language_models_with_mergekit/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'JNVmzpOzOO2R1d5bXKqIjeJ8U5amNaKk9F3iW2uZGFc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eSW37N0mZlrexuwOJyc6mTV2JWJz4At2mbuI97bepcw.jpg?width=108&crop=smart&auto=webp&s=31d672201fd06a214202b7cc09635bbe39ded225', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eSW37N0mZlrexuwOJyc6mTV2JWJz4At2mbuI97bepcw.jpg?width=216&crop=smart&auto=webp&s=cea786347e0be2bf8130bff85ee79a689e0dd977', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eSW37N0mZlrexuwOJyc6mTV2JWJz4At2mbuI97bepcw.jpg?width=320&crop=smart&auto=webp&s=440517e823a44faa9db382adb733e3f57fc6b130', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eSW37N0mZlrexuwOJyc6mTV2JWJz4At2mbuI97bepcw.jpg?width=640&crop=smart&auto=webp&s=ddaeda6470c7a172b76b978daac9522db84cf3f5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eSW37N0mZlrexuwOJyc6mTV2JWJz4At2mbuI97bepcw.jpg?width=960&crop=smart&auto=webp&s=40d4fb35813907fbd601fa40f7e6c91016241b9d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eSW37N0mZlrexuwOJyc6mTV2JWJz4At2mbuI97bepcw.jpg?width=1080&crop=smart&auto=webp&s=6ec7aa922c526a71ccf05c4ef24884e86326267f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eSW37N0mZlrexuwOJyc6mTV2JWJz4At2mbuI97bepcw.jpg?auto=webp&s=45f14d0230469e9182804e0cbbbafd65fb81e5bb', 'width': 1200}, 'variants': {}}]} | ||
Imported my ChatGPT history since Dec 2022 and continued convos with Mistral | 12 | 2024-01-15T07:38:25 | https://v.redd.it/bzs8a6ho3kcc1 | NomadicRotator | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 19735ct | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bzs8a6ho3kcc1/DASHPlaylist.mpd?a=1707896322%2CMDg0YWY5MGQxZDYxMmY5ODU2MTNiY2U5YTExMDA3ZWYwZWJkOTNmNTJiMDEyNjVhN2ExNGNjZWFhZDhlZDYxNQ%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/bzs8a6ho3kcc1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/bzs8a6ho3kcc1/HLSPlaylist.m3u8?a=1707896322%2CMzYyMjE4OGU1NGIzMjc1MTcwYjJlOWE2MmYzYjg5MTQ5ZDQzNmUwYjYxZWFjMDQzNjZiNWQ0MWRiZDMyZThkMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/bzs8a6ho3kcc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_19735ct | /r/LocalLLaMA/comments/19735ct/imported_my_chatgpt_history_since_dec_2022_and/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'ejEwa2h3czQ0a2NjMfcAPelzeXDa92aZ8P9NBiHskl02vP98Twa5Fm479Zmb', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ejEwa2h3czQ0a2NjMfcAPelzeXDa92aZ8P9NBiHskl02vP98Twa5Fm479Zmb.png?width=108&crop=smart&format=pjpg&auto=webp&s=a0dc2c9bc3cbd31201212f22d3215d205fa10e80', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ejEwa2h3czQ0a2NjMfcAPelzeXDa92aZ8P9NBiHskl02vP98Twa5Fm479Zmb.png?width=216&crop=smart&format=pjpg&auto=webp&s=ada98ed21626a1c8e1bdaf88f2c3a3e0514f03f8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ejEwa2h3czQ0a2NjMfcAPelzeXDa92aZ8P9NBiHskl02vP98Twa5Fm479Zmb.png?width=320&crop=smart&format=pjpg&auto=webp&s=1da7192b70805b4ac8a4fa91cad822b780972b6a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ejEwa2h3czQ0a2NjMfcAPelzeXDa92aZ8P9NBiHskl02vP98Twa5Fm479Zmb.png?width=640&crop=smart&format=pjpg&auto=webp&s=28f9a6336148ba3fae799f6d23047f08aee4d1f0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ejEwa2h3czQ0a2NjMfcAPelzeXDa92aZ8P9NBiHskl02vP98Twa5Fm479Zmb.png?width=960&crop=smart&format=pjpg&auto=webp&s=219d257b3ccf94abfd253328a706c759a8de384c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ejEwa2h3czQ0a2NjMfcAPelzeXDa92aZ8P9NBiHskl02vP98Twa5Fm479Zmb.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d66af4c38e39c69e615ac6b756bf4e2bcba4d5c4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ejEwa2h3czQ0a2NjMfcAPelzeXDa92aZ8P9NBiHskl02vP98Twa5Fm479Zmb.png?format=pjpg&auto=webp&s=ef918a427e5e1edf30d97b11ddcf06cbfd36e631', 'width': 1920}, 'variants': {}}]} | ||
Beyonder and other 4x7B models producing nonsense at full context | 9 | Howdy everyone! I read recommendations about Beyonder and wanted to try it out myself for my roleplay. It showed potential in my test chat with no context; however, whenever I try it in my main story with a full context of 32K, it starts producing nonsense (basically spitting out just one repeating letter, for example).
I used the exl2 format, 6.5 quant, link below.
https://huggingface.co/bartowski/Beyonder-4x7B-v2-exl2/tree/6_5
This happens with other 4x7B models too, like with DPO RP Chat by Undi.
Has anyone else experienced this issue? Perhaps my settings are wrong? At first, I assumed it might have been a temperature thingy, but sadly, lowering it didn’t work. I also follow the ChatML instruct format. And I only use Min P for controlling the output.
Will appreciate any help, thank you! | 2024-01-15T07:33:44 | https://www.reddit.com/r/LocalLLaMA/comments/19732vw/beyonder_and_other_4x7b_models_producing_nonsense/ | Meryiel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19732vw | false | null | t3_19732vw | /r/LocalLLaMA/comments/19732vw/beyonder_and_other_4x7b_models_producing_nonsense/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'FDzcZfxxoO2yS0XC1p6W7tYHwOBUcx0Ti3N1DYHxZnQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WNQvVBg9kyMoAtkSw-Hij4WSn4ebW19SjC87HhE9XsU.jpg?width=108&crop=smart&auto=webp&s=6bd4c6240bbc478e2d07d2656ad2f4144795e421', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WNQvVBg9kyMoAtkSw-Hij4WSn4ebW19SjC87HhE9XsU.jpg?width=216&crop=smart&auto=webp&s=2a2377761d3b6f761d454a9d4dfc5c1772118ef0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WNQvVBg9kyMoAtkSw-Hij4WSn4ebW19SjC87HhE9XsU.jpg?width=320&crop=smart&auto=webp&s=6f0324ef5afbb28a76a3b4f17affd3910d76b7d7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WNQvVBg9kyMoAtkSw-Hij4WSn4ebW19SjC87HhE9XsU.jpg?width=640&crop=smart&auto=webp&s=7581be614d81f8fc16a9f478dd700fd064713d40', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WNQvVBg9kyMoAtkSw-Hij4WSn4ebW19SjC87HhE9XsU.jpg?width=960&crop=smart&auto=webp&s=149e4e6b7dbb10fbc0cbed52fe8c2d0d68935acd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WNQvVBg9kyMoAtkSw-Hij4WSn4ebW19SjC87HhE9XsU.jpg?width=1080&crop=smart&auto=webp&s=165c405c1c3dbf39fffb2e6dce8e73849eb99907', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WNQvVBg9kyMoAtkSw-Hij4WSn4ebW19SjC87HhE9XsU.jpg?auto=webp&s=021f7dc4d58fcd0156441edf22cf0b7c2b65283a', 'width': 1200}, 'variants': {}}]} |
llama.cpp help understanding | 4 | Does anybody know what the llama.cpp settings should look like? Like what are the parameters that other people are using? I have a 24GB GPU if that helps. The model I am trying to use is [https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GGUF](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GGUF)
I'm confused by most of the llama.cpp settings, even when looking at the wiki/docs: [https://github.com/oobabooga/text-generation-webui/wiki/04-%E2%80%90-Model-Tab#llamacpp](https://github.com/oobabooga/text-generation-webui/wiki/04-%E2%80%90-Model-Tab#llamacpp)
Any help is appreciated. THANKS | 2024-01-15T07:06:45 | https://www.reddit.com/r/LocalLLaMA/comments/1972o43/llamacpp_help_understanding/ | 426Dimension | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1972o43 | false | null | t3_1972o43 | /r/LocalLLaMA/comments/1972o43/llamacpp_help_understanding/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'mKtAEdfrpsu9vHCQrUA6A9uNLkoN5dgVUorzcwk37ak', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wmmWelqUoyqhGISagsM2XoJAkGSNp5iaCeS8iQf2Zww.jpg?width=108&crop=smart&auto=webp&s=ee7557c01fe9b0e56bc4d21c89d26f87ac381649', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wmmWelqUoyqhGISagsM2XoJAkGSNp5iaCeS8iQf2Zww.jpg?width=216&crop=smart&auto=webp&s=e9a56d300ba43fef6446e6094572f0066ea6f1b6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wmmWelqUoyqhGISagsM2XoJAkGSNp5iaCeS8iQf2Zww.jpg?width=320&crop=smart&auto=webp&s=0030c1df3825f4344513a6716dab969d3f81bbb7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wmmWelqUoyqhGISagsM2XoJAkGSNp5iaCeS8iQf2Zww.jpg?width=640&crop=smart&auto=webp&s=e31875c29ac242216907b2eb90367ecef3159c33', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wmmWelqUoyqhGISagsM2XoJAkGSNp5iaCeS8iQf2Zww.jpg?width=960&crop=smart&auto=webp&s=80c48dae3bc0b79a84b1d60b08c46db190110876', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wmmWelqUoyqhGISagsM2XoJAkGSNp5iaCeS8iQf2Zww.jpg?width=1080&crop=smart&auto=webp&s=c99d5d951399c373be18ee6ed8159aa28e56797b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wmmWelqUoyqhGISagsM2XoJAkGSNp5iaCeS8iQf2Zww.jpg?auto=webp&s=5522492a89dd243444ad9c5b658e457925d3b030', 'width': 1200}, 'variants': {}}]} |
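As a starting point for that model on a 24GB card: the knobs that matter most are GPU layer offload (n-gpu-layers) and context size (n_ctx). A 34B Q4_K_M file is roughly 20GB, so full offload should just fit with a modest context. A minimal llama-cpp-python sketch; check the model card for the exact prompt format:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="nous-capybara-limarpv3-34b.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload every layer; lower this if VRAM runs out
    n_ctx=4096,       # KV-cache VRAM grows with context, so keep it modest
    n_batch=512,      # prompt-processing batch size
)
out = llm("USER: Hello!\nASSISTANT:", max_tokens=64)
print(out["choices"][0]["text"])
```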
I have some questions on GPTQ and PEFT... please help! | 1 | [removed] | 2024-01-15T06:28:27 | https://www.reddit.com/r/LocalLLaMA/comments/19721qy/i_have_some_questions_on_gptq_and_peftplz_help/ | ko_lIlBrother | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19721qy | false | null | t3_19721qy | /r/LocalLLaMA/comments/19721qy/i_have_some_questions_on_gptq_and_peftplz_help/ | false | false | self | 1 | null
I Made a Web-based HTML-to-Markdown Converter Because Pages Get Mangled | 26 | I finally got fed-up bringing HTML and raw text into ChatGPT today and decided to write a converter for myself. It takes raw HTML or a URL (when CORS is cooperating) and converts it into Markdown for copy or download. Use it to make nice clean copy for your LLMs to work from. Hope someone here finds use from it!
[https://htmltomarkdown.top/](https://htmltomarkdown.top/) | 2024-01-15T06:26:00 | https://www.reddit.com/r/LocalLLaMA/comments/19720di/i_made_a_webbased_htmltomarkdown_converter/ | cddelgado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19720di | false | null | t3_19720di | /r/LocalLLaMA/comments/19720di/i_made_a_webbased_htmltomarkdown_converter/ | false | false | self | 26 | null |
What size models run on an RTX 3060 Ti (8GB)? | 1 | Hi!
I'd like to know if I'd be able to run any models (LLMs or others) on my desktop GPU, and which ones.
Thanks! | 2024-01-15T06:08:53 | https://www.reddit.com/r/LocalLLaMA/comments/1971q6j/what_size_models_runs_on_rtx_3060_ti_8gb/ | Desperate_Cookie_759 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1971q6j | false | null | t3_1971q6j | /r/LocalLLaMA/comments/1971q6j/what_size_models_runs_on_rtx_3060_ti_8gb/ | false | false | self | 1 | null |
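A rough rule of thumb, as a sketch: weight memory ≈ parameters × bits-per-weight / 8, plus some overhead for the KV cache and buffers:

```python
def vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 1.5) -> float:
    # params_b: parameter count in billions; bits_per_weight: quant width.
    return params_b * bits_per_weight / 8 + overhead_gb

for name, p, b in [("7B Q4_K_M", 7, 4.8), ("7B Q8_0", 7, 8.5), ("13B Q4_K_M", 13, 4.8)]:
    print(f"{name}: ~{vram_gb(p, b):.1f} GB")
# 7B at 4-5 bit (~5-6 GB) fits an 8GB card with room for context;
# 13B needs partial CPU offload; bigger models are CPU/RAM territory.
```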
Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks | 30 | 2024-01-15T06:00:08 | https://arxiv.org/abs/2401.02731 | Aaaaaaaaaeeeee | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1971kdd | false | null | t3_1971kdd | /r/LocalLLaMA/comments/1971kdd/parameterefficient_sparsity_crafting_from_dense/ | false | false | default | 30 | null | |
Options for running LLMs on laptop - better than ollama | 9 | I currently use ollama with ollama-webui (which has a look and feel like ChatGPT). It works really well for the most part, though it can be glitchy at times. There are a lot of features in the webui that make the user experience more pleasant than using the CLI. Even using the CLI is simple and straightforward.
Looking to see if there are other tools that make local LLM runs smoother than what I currently have. | 2024-01-15T05:54:03 | https://www.reddit.com/r/LocalLLaMA/comments/1971gnm/options_for_running_llms_on_laptop_better_than/ | o_rdt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1971gnm | false | null | t3_1971gnm | /r/LocalLLaMA/comments/1971gnm/options_for_running_llms_on_laptop_better_than/ | false | false | self | 9 | null |
COSMO: COntrastive Streamlined MultimOdal Model with Interleaved Pre-Training | 11 | **Paper**: [https://arxiv.org/abs/2401.00849](https://arxiv.org/abs/2401.00849)
**Code**: [https://github.com/showlab/cosmo](https://github.com/showlab/cosmo)
**Models**: [https://huggingface.co/Awiny](https://huggingface.co/Awiny)
**Dataset**: [https://huggingface.co/datasets/Awiny/Howto-Interlink7M](https://huggingface.co/datasets/Awiny/Howto-Interlink7M)
**Project page**: [https://fingerrec.github.io/cosmo/](https://fingerrec.github.io/cosmo/)
**Abstract**:
>In the evolution of Vision-Language Pre-training, shifting from short-text comprehension to encompassing extended textual contexts is pivotal. Recent autoregressive vision-language models like \[Flamingo, PaLM-E\], leveraging the long-context capability of Large Language Models, have excelled in few-shot text generation tasks but face challenges in alignment tasks. Addressing this gap, we introduce the contrastive loss into text generation models, presenting the COntrastive-Streamlined MultimOdal framework (**CosMo**), strategically partitioning the language model into dedicated unimodal text processing and adept multimodal data handling components. CosMo, our unified framework, merges unimodal and multimodal elements, enhancing model performance for tasks involving textual and visual data while notably reducing learnable parameters. However, these models demand extensive long-text datasets, yet the availability of high-quality long-text video datasets remains limited. To bridge this gap, this work introduces **Howto-Interlink7M**, an inaugural interleaved video-text dataset featuring comprehensive captions, marking a significant step forward. Demonstrating its impact, we illustrate how Howto-Interlink7M enhances model performance in image-text tasks. With 34% learnable parameters and utilizing 72% of the available data, our model demonstrates significant superiority over OpenFlamingo. For instance, in the 4-shot flickr captioning task, performance notably improves from 57.2% to 65.1%. The contributions of CosMo and Howto-Interlink7M are underscored by notable performance gains across 14 diverse downstream datasets encompassing both image-text and video-text tasks. | 2024-01-15T05:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/1971a93/cosmo_contrastive_streamlined_multimodal_model/ | APaperADay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1971a93 | false | null | t3_1971a93 | /r/LocalLLaMA/comments/1971a93/cosmo_contrastive_streamlined_multimodal_model/ | false | false | self | 11 | null |
The LLM Serving Engine Showdown | 1 | 2024-01-15T05:38:31 | https://friendli.ai/blog/friendli-engine-tensorrt-llm-vllm/ | Antique_Battle_4337 | friendli.ai | 1970-01-01T00:00:00 | 0 | {} | 19716xd | false | null | t3_19716xd | /r/LocalLLaMA/comments/19716xd/the_llm_serving_engine_showdown/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZUsAsnIcAlxZOOBPi3PVh97vsb4j-fat2OIoKt5sH0E', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/07ov7qXtwM1CF1AHWzIDG_JCn2VC1EYi3Vcd8OMWhw0.jpg?width=108&crop=smart&auto=webp&s=0c6246ecbbcfa91f7b75448f099064f318ff935b', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/07ov7qXtwM1CF1AHWzIDG_JCn2VC1EYi3Vcd8OMWhw0.jpg?width=216&crop=smart&auto=webp&s=ead002f6f101df3c3f0263085f16a6f75ce3b8e5', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/07ov7qXtwM1CF1AHWzIDG_JCn2VC1EYi3Vcd8OMWhw0.jpg?width=320&crop=smart&auto=webp&s=63a8bfe092a1a330335a0b5e1f5cb783c914097c', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/07ov7qXtwM1CF1AHWzIDG_JCn2VC1EYi3Vcd8OMWhw0.jpg?width=640&crop=smart&auto=webp&s=b00a18a4c56e0c238ebf46936b3a3cf5e7f829e3', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/07ov7qXtwM1CF1AHWzIDG_JCn2VC1EYi3Vcd8OMWhw0.jpg?width=960&crop=smart&auto=webp&s=19b34b13515fb335d07e00edeba231663fecc58e', 'width': 960}, {'height': 599, 'url': 'https://external-preview.redd.it/07ov7qXtwM1CF1AHWzIDG_JCn2VC1EYi3Vcd8OMWhw0.jpg?width=1080&crop=smart&auto=webp&s=e2d65acc92170fc9732b40ffc96969ddd431d6f8', 'width': 1080}], 'source': {'height': 833, 'url': 'https://external-preview.redd.it/07ov7qXtwM1CF1AHWzIDG_JCn2VC1EYi3Vcd8OMWhw0.jpg?auto=webp&s=0c2a0d4e95bfe7ebc679ec61eecdbb6cc239a8d9', 'width': 1500}, 'variants': {}}]} | ||
Merging Mistral with Whisper to make a multimodal model at home on a single GPU | 201 | It's popular these days to make frankenmodels by merging unrelated LLMs together. I think it's much more interesting to graft non-LLM models onto LLMs to make multimodal models. Here's a guy who did it with Mistral and Whisper, all at home on his 3090! https://paul.mou.dev/posts/2023-12-31-listening-with-llm/ | 2024-01-15T05:26:57 | https://www.reddit.com/r/LocalLLaMA/comments/1970zhf/merging_mistral_with_whisper_to_make_a_multimodal/ | modeless | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1970zhf | false | null | t3_1970zhf | /r/LocalLLaMA/comments/1970zhf/merging_mistral_with_whisper_to_make_a_multimodal/ | false | false | self | 201 | null |
Common Chat bot UI with multiple agents to interact with in the background | 1 | [removed] | 2024-01-15T05:18:53 | https://www.reddit.com/r/LocalLLaMA/comments/1970uah/common_chat_bot_ui_with_multiple_agents_to/ | beebrox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1970uah | false | null | t3_1970uah | /r/LocalLLaMA/comments/1970uah/common_chat_bot_ui_with_multiple_agents_to/ | false | false | default | 1 | null |
Any thoughts on Local LLMs that can use the mouse or keyboard? | 4 | I've noticed that a few companies this year are working on similar projects and was curious if any smaller ones have become available yet. I'm sure it's a little niche at the moment but just the novelty of seeing it in action sounds exciting. | 2024-01-15T05:11:36 | https://www.reddit.com/r/LocalLLaMA/comments/1970pi9/any_thoughts_on_local_llms_that_can_use_the_mouse/ | oversettDenee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1970pi9 | false | null | t3_1970pi9 | /r/LocalLLaMA/comments/1970pi9/any_thoughts_on_local_llms_that_can_use_the_mouse/ | false | false | self | 4 | null |
Hermes stans, which one are you using? | 1 | [removed] | 2024-01-15T04:46:13 | Future_Might_8194 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 19708c5 | false | null | t3_19708c5 | /r/LocalLLaMA/comments/19708c5/hermes_stans_which_one_are_you_using/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'kfeTzN1SdwynUMa23VAcQuH6RhRJsy2GXvq85ZNGOEM', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/uzeiszbiajcc1.jpeg?width=108&crop=smart&auto=webp&s=406af19c735cbf49881f30626cb9cfc89c2bf7f5', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/uzeiszbiajcc1.jpeg?width=216&crop=smart&auto=webp&s=3fdd30271ff25ad171709cbf00270c99979fa290', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/uzeiszbiajcc1.jpeg?width=320&crop=smart&auto=webp&s=de3da6fdcee7ce1b6c9716488ca86852cc5a1d1b', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/uzeiszbiajcc1.jpeg?width=640&crop=smart&auto=webp&s=a8866cdc38c3154c5f41b400b452d9dda9f844d5', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/uzeiszbiajcc1.jpeg?width=960&crop=smart&auto=webp&s=bd59f23b0c63cc2c2268bde44409c7928e4f2052', 'width': 960}, {'height': 606, 'url': 'https://preview.redd.it/uzeiszbiajcc1.jpeg?width=1080&crop=smart&auto=webp&s=5e195e4c9fdc36070fa22edfd7ce5d1e06e4f064', 'width': 1080}], 'source': {'height': 1724, 'url': 'https://preview.redd.it/uzeiszbiajcc1.jpeg?auto=webp&s=7ef4a99e908ebc57c8e5cb9574df5e79e062203d', 'width': 3072}, 'variants': {}}]} | ||
Generating Training Data for Lora, locally? | 3 | All the guides I find about generating training data suggest that if you need to automate the process of generating training data, you should use GPT4. I'm wondering if anyone has had any luck getting local LLMs to generate decent training data? If so, what model and process worked best for you?
I'm trying this with Mixtral 8x7B Q8, and am having trouble getting it to stay focused on generating Q&A pairs for one data passage at a time. I give it ~1500 tokens of context as part of my system prompt, and then give it a user prompt asking it to generate Q&A pairs for training an LLM, based on just the passage in my prompt (about 1000 tokens per passage), but it invariably starts generating questions about the context data instead, and then eventually veers off course onto completely unrelated topics.
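One thing I'm about to try is resetting the conversation for every passage and keeping the passage in the user turn instead of the system prompt, so earlier pairs can't leak into context. A stripped-down sketch of that loop (the endpoint URL and model name are placeholders for whatever your local OpenAI-compatible server exposes):

    import requests

    PASSAGES = ["...passage 1...", "...passage 2..."]  # ~1000-token chunks
    URL = "http://localhost:5000/v1/chat/completions"  # placeholder endpoint

    SYSTEM = (
        "You write fine-tuning data. Given ONE passage, output exactly 3 Q&A "
        "pairs about that passage only, formatted as 'Q: ...' / 'A: ...'. "
        "Never reference anything outside the passage."
    )

    pairs = []
    for passage in PASSAGES:
        # fresh conversation per passage: no accumulated history to wander into
        resp = requests.post(URL, json={
            "model": "mixtral-8x7b-instruct",  # placeholder model name
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": f"Passage:\n{passage}\n\nWrite the Q&A pairs."},
            ],
            "temperature": 0.3,  # low temperature helps it stay on-task
            "max_tokens": 512,
        }, timeout=300)
        pairs.append(resp.json()["choices"][0]["message"]["content"])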
Thanks for any ideas you have on this! | 2024-01-15T03:53:48 | https://www.reddit.com/r/LocalLLaMA/comments/196z7mh/generating_training_data_for_lora_locally/ | SuperMonkeyCollider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196z7mh | false | null | t3_196z7mh | /r/LocalLLaMA/comments/196z7mh/generating_training_data_for_lora_locally/ | false | false | self | 3 | null |
Do u know this FireAttention. 4x faster than VLLM? | 1 | [removed] | 2024-01-15T03:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/196z5ae/do_u_know_this_fireattention_4x_faster_than_vllm/ | TranslatorMoist5356 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196z5ae | false | null | t3_196z5ae | /r/LocalLLaMA/comments/196z5ae/do_u_know_this_fireattention_4x_faster_than_vllm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1C83-lMhvsjxBUuSdVab8i7qiP0cBwIUxXnTX9tcj7E', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/fQlpkiGdmfZyOnwzpGRFWYJlOgBBRaYwOzxNz7z6Yfs.jpg?width=108&crop=smart&auto=webp&s=ce55976b5946a245d890a23e6912d2b09d8a2f6d', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/fQlpkiGdmfZyOnwzpGRFWYJlOgBBRaYwOzxNz7z6Yfs.jpg?width=216&crop=smart&auto=webp&s=18fcefa9ec45c0abba3133adc39a990058c6af78', 'width': 216}, {'height': 196, 'url': 'https://external-preview.redd.it/fQlpkiGdmfZyOnwzpGRFWYJlOgBBRaYwOzxNz7z6Yfs.jpg?width=320&crop=smart&auto=webp&s=f5520ca13c3a2bc808c541da7ac885a0af53ce40', 'width': 320}, {'height': 392, 'url': 'https://external-preview.redd.it/fQlpkiGdmfZyOnwzpGRFWYJlOgBBRaYwOzxNz7z6Yfs.jpg?width=640&crop=smart&auto=webp&s=8548a28284d1f152da24bd4fa9b55c74891e44d2', 'width': 640}, {'height': 589, 'url': 'https://external-preview.redd.it/fQlpkiGdmfZyOnwzpGRFWYJlOgBBRaYwOzxNz7z6Yfs.jpg?width=960&crop=smart&auto=webp&s=6c9a86b153818ee72f590aeca67736ac8d4ca17f', 'width': 960}, {'height': 662, 'url': 'https://external-preview.redd.it/fQlpkiGdmfZyOnwzpGRFWYJlOgBBRaYwOzxNz7z6Yfs.jpg?width=1080&crop=smart&auto=webp&s=353de4d9138b0fc932b704d9b01a95645a5a000d', 'width': 1080}], 'source': {'height': 724, 'url': 'https://external-preview.redd.it/fQlpkiGdmfZyOnwzpGRFWYJlOgBBRaYwOzxNz7z6Yfs.jpg?auto=webp&s=4508d95d5f06f1aec127353fa0cc28d6c2b53022', 'width': 1180}, 'variants': {}}]} |
Predictable downstream impact of recently announced Nvidia Super GPUs | 40 | While perhaps underwhelming, the normalization of 16GB VRAM on higher-performance consumer GPUs will create a niche in the ecosystem supporting models somewhat bigger than the 13B that fit within 12GB, but clearly smaller than the 30B that require 24GB for comfort. The first models to fill the gap will probably be frankenmerges, although models distilled from larger 20+B models should follow soon. It remains to be seen whether there will be a path for native 18B-parameter models to become a norm, given that 20B GGUF models are typically a bit too large for full offloading to 16GB VRAM without a severe quantization penalty.
I for one would welcome distilled 30B+ models capable of competent code assistance being squeezed into 16GB VRAM. | 2024-01-15T03:25:27 | https://www.reddit.com/r/LocalLLaMA/comments/196ynvw/predictable_downstream_impact_of_recently/ | grimjim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196ynvw | false | null | t3_196ynvw | /r/LocalLLaMA/comments/196ynvw/predictable_downstream_impact_of_recently/ | false | false | self | 40 | null |
best value motherboard/cpu/ram for dual 3090 for inference only? | 1 | [removed] | 2024-01-15T03:24:38 | https://www.reddit.com/r/LocalLLaMA/comments/196ynck/best_value_motherboardcpuram_for_dual_3090_for/ | tessatrigger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196ynck | false | null | t3_196ynck | /r/LocalLLaMA/comments/196ynck/best_value_motherboardcpuram_for_dual_3090_for/ | false | false | self | 1 | null |
recommendations for best-bang-for-buck 2x 3090 for inference only | 1 | [removed] | 2024-01-15T01:55:41 | https://www.reddit.com/r/LocalLLaMA/comments/196wuyf/recommendations_for_bestbangforbuck_2x_3090_for/ | tessatrigger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196wuyf | false | null | t3_196wuyf | /r/LocalLLaMA/comments/196wuyf/recommendations_for_bestbangforbuck_2x_3090_for/ | false | false | self | 1 | null |
Is preference data more data efficient when the responses are more similar? | 3 | Existing open DPO datasets typically have rows such that the two responses in each row share little in common. My guess is that data efficiency would be improved if the two responses are similar (example below). Has anyone studied this?
Here's an illustrative example.
Response A:
> Preference data for large language models (LLMs) refers to data that captures human choices or judgments about certain outputs or behaviors that are more preferable or desirable.
Response B:
> In the context of Large Language Models (LLMs), "preference data" refers to the information that captures end-user preferences, which can be utilized to tune or personalize the behavior of the model.
Response C:
> In the context of Large Language Models (LLMs), "preference data" refers to the information that captures end-user preferences which can be utilized to tune or personalize the behavior of the model.
Compare the preference "A > C" to the preference "B > C". "A > C" is hard to interpret, because there are many differences, but "B > C" is easy to interpret: the only difference is the missing comma before the "which". Even an arbitrarily smart model would not be able to deduce the intended lesson from "A > C" alone, but a human can easily deduce the intended meaning of "B > C", and plausibly today's LLMs could too.
If closely paired data such as the above is in fact useful, it could be produced by a "generate and correct" UI (a sketch of the resulting data row follows the steps below):
1. Generate a response from the model.
2. Improve the response manually (such as by fixing errors).
3. Insert "edited response > original response" as a row of preference data. | 2024-01-15T01:39:46 | https://www.reddit.com/r/LocalLLaMA/comments/196wj4p/is_preference_data_more_data_efficient_when_the/ | hold_my_fish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196wj4p | false | null | t3_196wj4p | /r/LocalLLaMA/comments/196wj4p/is_preference_data_more_data_efficient_when_the/ | false | false | self | 3 | null |
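To make step 3 concrete, a minimal sketch of the resulting row (field names follow the prompt/chosen/rejected convention that trainers like TRL's DPOTrainer expect; adapt to your pipeline):

    # Raw model output (step 1) and its minimally edited version (step 2).
    original_response = "...raw model output, errors included..."
    edited_response = "...the same text with only the errors fixed..."

    # One preference row: the pair differs only where the original was wrong,
    # so the intended lesson is easy to attribute.
    row = {
        "prompt": "What does 'preference data' mean in the context of LLMs?",
        "chosen": edited_response,
        "rejected": original_response,
    }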
Install NVIDIA Drivers on AWS EC2 Windows Instance | 1 | [removed] | 2024-01-15T00:51:37 | https://www.reddit.com/r/LocalLLaMA/comments/196vjco/install_nvidia_drivers_on_aws_ec2_windows_instance/ | Lopsided_Dot_4557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196vjco | false | null | t3_196vjco | /r/LocalLLaMA/comments/196vjco/install_nvidia_drivers_on_aws_ec2_windows_instance/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6f3IuS2QGDZVtm9Uv4VjggNCwUQFgVpEYH_42Xm3Gqk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/k8tHltNpveHJn5XeElFpZFvmkgMeWvUQMud2CGUP7io.jpg?width=108&crop=smart&auto=webp&s=58df60382a0f4f3376bf8fb748ae2227e15c9f2a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/k8tHltNpveHJn5XeElFpZFvmkgMeWvUQMud2CGUP7io.jpg?width=216&crop=smart&auto=webp&s=1afab41864ce157739a4ed66908edfed6f183a3c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/k8tHltNpveHJn5XeElFpZFvmkgMeWvUQMud2CGUP7io.jpg?width=320&crop=smart&auto=webp&s=e1a4e5ccaaa4a002dc54333673543df709436e63', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/k8tHltNpveHJn5XeElFpZFvmkgMeWvUQMud2CGUP7io.jpg?auto=webp&s=336bfa119bff4b180268201bae0f96215af7b70e', 'width': 480}, 'variants': {}}]} |
strange mixtral behaviour, not finishing the answer and breaking mid sentence. | 1 | [removed] | 2024-01-15T00:21:41 | https://www.reddit.com/r/LocalLLaMA/comments/196uw2s/strange_mixtral_behaviour_not_finishing_the/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196uw2s | false | null | t3_196uw2s | /r/LocalLLaMA/comments/196uw2s/strange_mixtral_behaviour_not_finishing_the/ | false | false | self | 1 | null |
what is the easiest way to deploy llama2 on cloud?? | 1 | [removed] | 2024-01-15T00:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/196uo2t/what_is_the_easiest_way_to_deploy_llama2_on_cloud/ | murphy12f | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196uo2t | false | null | t3_196uo2t | /r/LocalLLaMA/comments/196uo2t/what_is_the_easiest_way_to_deploy_llama2_on_cloud/ | false | false | self | 1 | null |
oobabooga auto-devices uses? | 2 | Hi there,
So I'm assuming auto-devices allows me to use both my GPU and CPU? Would this be recommended, since I don't have enough VRAM but still want to use my GPU? Also, what type of models should I be using if I want to use auto-devices? I've been using mainly AWQ models, but I've never touched GPTQ or GGUF models. Any help would be appreciated, thank you.
Thanks again | 2024-01-14T23:53:37 | https://www.reddit.com/r/LocalLLaMA/comments/196u8z8/oobabooga_autodevices_uses/ | 426Dimension | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196u8z8 | false | null | t3_196u8z8 | /r/LocalLLaMA/comments/196u8z8/oobabooga_autodevices_uses/ | false | false | self | 2 | null |
Local Translation? | 1 | [removed] | 2024-01-14T23:45:41 | https://www.reddit.com/r/LocalLLaMA/comments/196u2l2/local_translation/ | ninomatsu92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196u2l2 | false | null | t3_196u2l2 | /r/LocalLLaMA/comments/196u2l2/local_translation/ | false | false | self | 1 | null |
Poll: What is more important: Fast smaller limited models or larger smarter slower models? | 8 | There is a dichotomy in discussions about the future of inferencing. Is it more important to focus on large LLMs using slower CPU memory or focus on faster more limited GPU memory?
Don't take this too seriously. Obviously there are advantages to both, and you can use both at the same time. I suppose the purpose of the poll is to show that inferencing on the CPU is going to be important, especially as AMD and Intel add NPUs, which will make it faster. Several years down the road, there will be CPU architectures that resemble current GPU architectures, because [LLMs are the fastest growing application](https://www.theverge.com/2023/11/6/23948386/chatgpt-active-user-count-openai-developer-conference) in history and everyone is expecting A.I. to eat the world.
[View Poll](https://www.reddit.com/poll/196tg3o) | 2024-01-14T23:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/196tg3o/poll_what_is_more_important_fast_smaller_limited/ | danielcar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196tg3o | false | null | t3_196tg3o | /r/LocalLLaMA/comments/196tg3o/poll_what_is_more_important_fast_smaller_limited/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'SxdMLXgS7Wm3wtPRJFMNjpJ9M1qVrEmd4aLTGY1SmSg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/N68jRiG4IlYA_weU8Zrhj2-zRYKI3fecllX9FBxXdM8.jpg?width=108&crop=smart&auto=webp&s=c8dd0c54b71e0a2be0dadc0f52056a727dc2c592', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/N68jRiG4IlYA_weU8Zrhj2-zRYKI3fecllX9FBxXdM8.jpg?width=216&crop=smart&auto=webp&s=c65fdc27f73fba2e1b8c8dc471de2b334811272a', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/N68jRiG4IlYA_weU8Zrhj2-zRYKI3fecllX9FBxXdM8.jpg?width=320&crop=smart&auto=webp&s=d9631ed9ff543913e82243f6116575010dbccb29', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/N68jRiG4IlYA_weU8Zrhj2-zRYKI3fecllX9FBxXdM8.jpg?width=640&crop=smart&auto=webp&s=d01399a65551ab7eb6d0b3a6f811624bb1c49632', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/N68jRiG4IlYA_weU8Zrhj2-zRYKI3fecllX9FBxXdM8.jpg?width=960&crop=smart&auto=webp&s=9b6ac125b5d040c0de7d9a01a778204665cc3834', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/N68jRiG4IlYA_weU8Zrhj2-zRYKI3fecllX9FBxXdM8.jpg?width=1080&crop=smart&auto=webp&s=a8d1797641a2f0e61cc4cf32f4027c55fb191129', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/N68jRiG4IlYA_weU8Zrhj2-zRYKI3fecllX9FBxXdM8.jpg?auto=webp&s=624b7fc8fb35f263cc6e97fb2c42ddcc4fb74af7', 'width': 1200}, 'variants': {}}]} |
mixtral 8x7B localy tutorial for dumb people ? | 1 | [removed] | 2024-01-14T22:55:09 | https://www.reddit.com/r/LocalLLaMA/comments/196svqw/mixtral_8x7b_localy_tutorial_for_dumb_people/ | CommercialBit3465 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196svqw | false | null | t3_196svqw | /r/LocalLLaMA/comments/196svqw/mixtral_8x7b_localy_tutorial_for_dumb_people/ | false | false | self | 1 | null |
Enabling GPU for CoquiTTS (Python, through PyTorch) | 7 | Hi Everyone, sorry if this is slightly off topic but I know that many users use this one as well:
I cannot get Coqui TTS to actually use the GPU even when enabling it. When checking whether Torch was built with CUDA, I run:
torch.cuda.is_available()
It returns
True
Here is what I tried
    import torch
    from TTS.api import TTS  # Coqui TTS

    class TTS_custom:
        def __init__(self, tortoise=False) -> None:
            self.tortoise = tortoise
            if tortoise is True:
                cuda_available = torch.cuda.is_available()  # currently unused
                self.tts = TTS("tts_models/en/multi-dataset/tortoise-v2")
                self.tts.to('cuda')

        # `read` was missing from my snippet; reconstructed with Coqui's
        # tts_to_file so the test below actually runs.
        def read(self, text):
            self.tts.tts_to_file(text=text, file_path="output.wav")

    def test():
        tt = TTS_custom(True)
        print(torch.cuda.is_available())
        tt.read("I'll be executed for this, but I don't care. I need to warn people.")

    test()
I installed and compiled PyTorch with
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
Any ideas?
https://preview.redd.it/h1a3ozbnhhcc1.png?width=1376&format=png&auto=webp&s=76b8b0da9c4317f96e48bced90809bca142ccab8 | 2024-01-14T22:42:46 | https://www.reddit.com/r/LocalLLaMA/comments/196sl5a/enabling_gpu_for_coquitts_python_through_pytorch/ | trexgris | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196sl5a | false | null | t3_196sl5a | /r/LocalLLaMA/comments/196sl5a/enabling_gpu_for_coquitts_python_through_pytorch/ | false | false | 7 | null | |
Does Amd's 7040 cpu support llama? | 1 | [removed] | 2024-01-14T22:41:40 | https://www.reddit.com/r/LocalLLaMA/comments/196sk5s/does_amds_7040_cpu_support_llama/ | AdThin8225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196sk5s | false | null | t3_196sk5s | /r/LocalLLaMA/comments/196sk5s/does_amds_7040_cpu_support_llama/ | false | false | self | 1 | null |
Need help with choosing an LLMA / LLaMA | 1 | [removed] | 2024-01-14T22:10:27 | https://www.reddit.com/r/LocalLLaMA/comments/196rt10/need_help_with_choosing_an_llma_llama/ | CryptoBoiss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196rt10 | false | null | t3_196rt10 | /r/LocalLLaMA/comments/196rt10/need_help_with_choosing_an_llma_llama/ | false | false | self | 1 | null |
Local image generation models? | 2 | Are there any local models that are capable of generating images, along the lines of Midjourney though not necessarily as powerful?
If none currently exist, are there any development projects that have local image generation capability as their goal?
For example, models that can create 2D graphics for Powerpoint-style presentations? | 2024-01-14T21:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/196r0q1/local_image_generation_models/ | sborowko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196r0q1 | false | null | t3_196r0q1 | /r/LocalLLaMA/comments/196r0q1/local_image_generation_models/ | false | false | self | 2 | null |
A python package I created for llms application including my own implementation of long short term memory and a web search tool for llm, it supports both Openai-like API or loading local models directly from different formats. | 85 | [https://github.com/nath1295/LLMPlus](https://github.com/nath1295/LLMPlus)
I find Langchain annoying for not letting you set different LLMs with different generation configurations, so I tried to build my own custom LLMs to avoid loading local models multiple times when I want to build an agent or tool that uses the same underlying model. Also, the streaming and stop-word handling for LLMs is not that great in Langchain, so I have my own implementation in this package.
I am still using an Intel MacBook (too poor to get Nvidia cards or even a new Macbook :( ) to work on this project, so I cannot guarantee the installation will be seamless, but hopefully the pip install works. The code should be working with Cuda or apple silicon (used Colab and my friend's flashy new macbook to briefly test it).
Stuff I have in the package:
* An LLM factory class to generate Langchain-compatible llms while only loading the model once.
* Embedding toolkits that have text splitters bundled with the embedding model.
* A Vector database class built on top of FAISS for local storage.
* Memory classes: (base one and one with both long-term and short-term memory, powered by a vector database)
* Prompt template class that helps you format your prompt with different prompt formats (it has some presets like llama2, chatml, vicuna etc.; a toy illustration of the idea follows this list)
* A base tool class and a web search tool with duckduckgo as an example (please have a look, wonder if there are better ways to do that)
* A Gradio chatbot web app that lets you store different conversations, also you can set your own system prompt and configure the long-term and short-term memory settings, and you can set generation configurations like temperature, max new tokens, top k etc. (This is just for fun, my front-end skills are not better than a monkey to be very honest.)
* And of course, the docs in the repo as well.
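To give a flavour of the prompt template idea, here is a toy illustration of preset-based formatting (this is only the concept, not the package's actual API):

    # Toy preset-based prompt formatting -- illustrative only.
    PRESETS = {
        "llama2": "[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]",
        "chatml": "<|im_start|>system\n{system}<|im_end|>\n"
                  "<|im_start|>user\n{user}<|im_end|>\n"
                  "<|im_start|>assistant\n",
    }

    def format_prompt(preset: str, system: str, user: str) -> str:
        return PRESETS[preset].format(system=system, user=user)

    print(format_prompt("chatml", "You are a helpful assistant.", "Hi!"))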
I know I'm no expert, there are plenty of people in this sub who are extremely knowledgeable in this LLM field, so treat this as an amateur project looking for advice if you can bear with me for my spaghetti code. Would really appreciate any comments :')) Forgive me if I can't do much testing on fancy GPUs like you guys have, trying not to spend any money to work on this project... | 2024-01-14T21:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/196qqoy/a_python_package_i_created_for_llms_application/ | llordnt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196qqoy | false | null | t3_196qqoy | /r/LocalLLaMA/comments/196qqoy/a_python_package_i_created_for_llms_application/ | false | false | self | 85 | {'enabled': False, 'images': [{'id': 'L2YZcns-AlMP_hziwdFHtjCYh8xolPEfFny-j9JVP_U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kyQAMcnanyy3nWIUrkW4cKliNOpObfCs4KbQkAAtm1I.jpg?width=108&crop=smart&auto=webp&s=4aaee4e5b9bd158e94cfca6c6305012082b70708', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kyQAMcnanyy3nWIUrkW4cKliNOpObfCs4KbQkAAtm1I.jpg?width=216&crop=smart&auto=webp&s=7511ff75677021ded987f8172f8f5846d0f9e750', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kyQAMcnanyy3nWIUrkW4cKliNOpObfCs4KbQkAAtm1I.jpg?width=320&crop=smart&auto=webp&s=bd8fb658f53d6f9a4a05f86df495e4a5ec2d7e47', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kyQAMcnanyy3nWIUrkW4cKliNOpObfCs4KbQkAAtm1I.jpg?width=640&crop=smart&auto=webp&s=a10868b568735b9d1f92e06af8850a90f71da3d9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kyQAMcnanyy3nWIUrkW4cKliNOpObfCs4KbQkAAtm1I.jpg?width=960&crop=smart&auto=webp&s=63602b0d7dbdb7fee13f42a3a9576b65905171a5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kyQAMcnanyy3nWIUrkW4cKliNOpObfCs4KbQkAAtm1I.jpg?width=1080&crop=smart&auto=webp&s=a6a07de49e695cae10bab17d4811b2a71ce2bc15', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kyQAMcnanyy3nWIUrkW4cKliNOpObfCs4KbQkAAtm1I.jpg?auto=webp&s=2cec6f37281cb5b0f380e19f0422df27ad9d065a', 'width': 1200}, 'variants': {}}]} |
Best 3B LLM right now? | 34 | Just curious which one's the best. Is there one that is comparable to 7B models or even better? | 2024-01-14T20:54:33 | https://www.reddit.com/r/LocalLLaMA/comments/196pyap/best_3b_llm_right_now/ | headbopper96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196pyap | false | null | t3_196pyap | /r/LocalLLaMA/comments/196pyap/best_3b_llm_right_now/ | false | false | self | 34 | null |
How much does VRAM matter? | 8 | Hi there!
I'm new to LLMs, and currently experimenting with dolphin-mixtral, which is working great on my RTX 2060 Super. I'm considering buying a new GPU for gaming, but in the meantime I'd love to have one that is able to run LLM quicker.
I have a hard time finding what GPU to buy (just considering LLM usage, not gaming). How much does VRAM matter? Is there a **performance** difference between 12 GB and 24 GB VRAM for instance? Or is it just limiting what model you can run on it?
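For context, my rough mental model so far is just "the weights have to fit", back-of-the-envelope (please correct me if this is off; it ignores KV cache and runtime overhead):

    # Very rough VRAM estimate: weights only.
    def vram_gb(params_billion: float, bits_per_weight: float) -> float:
        return params_billion * bits_per_weight / 8

    print(vram_gb(7, 16))      # 7B at fp16           -> ~14 GB
    print(vram_gb(7, 4.5))     # 7B at ~Q4 quant      -> ~4 GB
    print(vram_gb(46.7, 4.5))  # Mixtral total at ~Q4 -> ~26 GB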
Also, are new GPUs that much faster on LLMs? For instance, the RTX 4090 is a *loooot* more powerful than the RTX 3090 for gaming. Is this also true for LLMs in general?
The main comparison I'm wondering about would be the RTX 4060 Ti 16 GB vs RTX 4070 for small (e.g. dolphin-mixtral) LLMs. What would be the most performant one?
Thanks! | 2024-01-14T20:39:53 | https://www.reddit.com/r/LocalLLaMA/comments/196plc8/how_much_does_vram_matter/ | NeaZerros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196plc8 | false | null | t3_196plc8 | /r/LocalLLaMA/comments/196plc8/how_much_does_vram_matter/ | false | false | self | 8 | null |
Can you recommend the right LLM model for my use case that is compatible with a T4 GPU? | 2 | Hi - I have an LLM use case where I want a bot to auto-respond to various users in a group chat application with simple reasoning. The answers would need to be based on the context provided, and we want to minimize the chances of hallucination.
However, our business model doesn't allow spending more than $300-400 a month per instance for this, so we want something that'll fit on a T4 GPU. Commercial APIs are too expensive for us given the volume of messages we have. | 2024-01-14T20:33:26 | https://www.reddit.com/r/LocalLLaMA/comments/196pfri/can_you_recommend_the_right_llm_model_for_my_use/ | m1ss1l3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196pfri | false | null | t3_196pfri | /r/LocalLLaMA/comments/196pfri/can_you_recommend_the_right_llm_model_for_my_use/ | false | false | self | 2 | null |
Is there a text to speech model or api service that supports client streaming? | 1 | [removed] | 2024-01-14T20:21:45 | https://www.reddit.com/r/LocalLLaMA/comments/196p5ws/is_there_a_text_to_speech_model_or_api_service/ | warycat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196p5ws | false | null | t3_196p5ws | /r/LocalLLaMA/comments/196p5ws/is_there_a_text_to_speech_model_or_api_service/ | false | false | self | 1 | null |
Can someone provide an instruction to run this docker image - ochat/openchat-server | 1 | I am trying to run this docker image - ochat/openchat-server: [ochat/openchat-server - Docker Image](https://hub.docker.com/r/ochat/openchat-server) and I keep getting this message - openai_api_server.py: error: argument --model: expected one argument | 2024-01-14T19:58:02 | https://www.reddit.com/r/LocalLLaMA/comments/196olfk/can_someone_provide_an_instruction_to_run_this/ | labloke11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196olfk | false | null | t3_196olfk | /r/LocalLLaMA/comments/196olfk/can_someone_provide_an_instruction_to_run_this/ | false | false | self | 1 | null |
LM Studio RX 6900xt does not work when trying to use gpu offload | 3 | Even with n\_gpu\_layers set to 1 i get the same error.
Error:

    {
      "cause": "(Exit code: 0). Please try loading the model again. ",
      "suggestion": "Ensure you have enough available memory to load this model.",
      "data": {
        "memory": {
          "ram_capacity": "16.71 GB",
          "ram_unused": "16.71 GB"
        },
        "gpu": {
          "type": "AmdOpenCL",
          "vram_recommended_capacity": 0,
          "vram_unused": 0
        },
        "os": {
          "platform": "linux",
          "version": "6.6.10-arch1-1",
          "supports_avx2": true
        },
        "app": {
          "version": "0.2.10",
          "downloadsDir": "/home/callum/.cache/lm-studio/models"
        },
        "model": {}
      },
      "title": "Model error"
    } | 2024-01-14T19:48:39 | https://www.reddit.com/r/LocalLLaMA/comments/196odgx/lm_studio_rx_6900xt_does_not_work_when_trying_to/ | EpicGamer1337mlg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196odgx | false | null | t3_196odgx | /r/LocalLLaMA/comments/196odgx/lm_studio_rx_6900xt_does_not_work_when_trying_to/ | false | false | self | 3 | null |
Drawing simple insights from tabular data | 1 | [removed] | 2024-01-14T19:11:59 | https://www.reddit.com/r/LocalLLaMA/comments/196nisj/drawing_simple_insights_from_tabular_data/ | Labanc_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196nisj | false | null | t3_196nisj | /r/LocalLLaMA/comments/196nisj/drawing_simple_insights_from_tabular_data/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RCFh0Kid3SAqWEkALMGNW1e9Vu6ayZpftekoayP00hY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=108&crop=smart&auto=webp&s=b3881e36da92b82c6947f6ca4ff3804ca47f2aea', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=216&crop=smart&auto=webp&s=17b5b01e50a969ac9e2353bebb062cd52a99d108', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=320&crop=smart&auto=webp&s=acadaf004e8aeb6919eabdb0d93065a34f7e89df', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=640&crop=smart&auto=webp&s=883009d39175a2f03b76275ed0f7c6011d94a3a7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=960&crop=smart&auto=webp&s=7cc62aef83f192d102fa78c83c8f4fcfa85057e3', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=1080&crop=smart&auto=webp&s=6ca6913f202be9a9f83b266dd459edc90adbf9dd', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?auto=webp&s=41fa146938cd97da5abfeff0d092a2cc151e65fa', 'width': 1200}, 'variants': {}}]} |
How to prevent context overflow with llama-cpp-python create_chat_completion? | 1 | [removed] | 2024-01-14T19:02:02 | https://www.reddit.com/r/LocalLLaMA/comments/196nadv/how_to_prevent_context_overflow_with/ | dr-yd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196nadv | false | null | t3_196nadv | /r/LocalLLaMA/comments/196nadv/how_to_prevent_context_overflow_with/ | false | false | self | 1 | null |
Seeking Help on Open Source Projects with Local ML Models on Mobile Phones | 4 | I'm currently exploring open-source projects that run local machine learning models on mobile devices. I'm interested in understanding the performance, challenges, and practical applications of these projects.
Have any of you tested or worked with such open-source projects on your mobile phones? I would greatly appreciate it if you could share your experiences, insights, or any recommendations you might have.
Looking forward to your valuable input!
Thank you! | 2024-01-14T19:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/196n8tj/seeking_help_on_open_source_projects_with_local/ | klei10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196n8tj | false | null | t3_196n8tj | /r/LocalLLaMA/comments/196n8tj/seeking_help_on_open_source_projects_with_local/ | false | false | self | 4 | null |
Llama-cpp-python: Switching back and forth between caches? | 1 | [removed] | 2024-01-14T18:45:07 | https://www.reddit.com/r/LocalLLaMA/comments/196mvt6/llamacpppython_switching_back_and_forth_between/ | dr-yd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196mvt6 | false | null | t3_196mvt6 | /r/LocalLLaMA/comments/196mvt6/llamacpppython_switching_back_and_forth_between/ | false | false | self | 1 | null |
Can the iMac M1 2021, 16GB (Model Identifier: iMac21,1) handle LocalLLaMA? Seeking recommendations | 1 | I’d love to hear your advice: recommendations for settings, tweaks, or configurations you found effective, thanks! | 2024-01-14T18:33:29 | https://www.reddit.com/r/LocalLLaMA/comments/196mm2m/can_the_imac_m1_2021_16gb_model_identifier/ | tigerzxzz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196mm2m | false | null | t3_196mm2m | /r/LocalLLaMA/comments/196mm2m/can_the_imac_m1_2021_16gb_model_identifier/ | false | false | self | 1 | null |
#GeminiPro got a long distance to go..... | 1 | Gemini Pro claiming Phi, Orca LLMs have been created by the big G :) | 2024-01-14T18:29:46 | https://www.reddit.com/r/LocalLLaMA/comments/196miw3/geminipro_got_a_long_distance_to_go/ | puneethmishra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196miw3 | false | null | t3_196miw3 | /r/LocalLLaMA/comments/196miw3/geminipro_got_a_long_distance_to_go/ | false | false | self | 1 | null |
What do you want to ask your AI personal assistant? | 21 | Let's get creative! I see a lot of potential in local LLMs, but I want to start testing real ideas.
What do you want local LLMs to do for you, even if it sounds far-fetched? What would you want to ask it and have it respond to (preferably coherently haha)?
Here's an example I'm working on now:
Good morning! I've got a long day ahead of me. Draw a picture of a cute dog for me.
Can you look up the weather in San Diego and add what to wear to my todo list?
Tell me the score of the most recent kings game.
Can you look up some python job queue examples and create a document with an overview of the options?
Tell me my top 5 to do tasks for the day.
To some, this sounds far-fetched. To others, probably boring. But this is exciting to me. One message with a list of multiple tasks. Some require multiple steps.
What are things you would like AI to do that would help you? How would you phrase it/talk to it? | 2024-01-14T18:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/196m9iy/what_do_you_want_to_ask_your_ai_personal_assistant/ | AndrewVeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196m9iy | false | null | t3_196m9iy | /r/LocalLLaMA/comments/196m9iy/what_do_you_want_to_ask_your_ai_personal_assistant/ | false | false | self | 21 | null |
Question about running multiple GPUs | 1 | Hello all. I have a new 3090 and a less new 3060.
What I'd like to have is:
While AI training: use the 3090 for training and the 3060 for all other computer tasks. I believe I've read that combining them for training is not possible.
While AI inference: Combine both VRAMs/power if possible for LLM and/or image inference, if not, then 3090 for inference, 3060 for everything else, as above.
While not doing any AI: use 3090 for everything.
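For the training part, my understanding is that per-process GPU selection gets most of the way there (assuming device 0 is the 3090; happy to be corrected):

    # Restrict this process to the 3090 only. Must be set before CUDA is
    # initialized, i.e. before importing torch in this process.
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # assumption: 0 = 3090, 1 = 3060

    import torch
    print(torch.cuda.device_count())  # -> 1: only the 3090 is visible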
Is this setup possible? How much of a pain is it to set up and maintain? Will I need to switch between HDMI ports on my monitor each time I make a switch? Will Kohya (I know that's Stable Diffusion, but I also want to look into training LLMs) and oobabooga handle this situation nicely? | 2024-01-14T17:21:43 | https://www.reddit.com/r/LocalLLaMA/comments/196kxij/question_about_running_multiple_gpus/ | Baphilia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196kxij | false | null | t3_196kxij | /r/LocalLLaMA/comments/196kxij/question_about_running_multiple_gpus/ | false | false | self | 1 | null |
What is the most easy way to finetune a model or change the format? | 2 | I know this is probably a complex question but I would like to train mistral 7b on various data but im not sure how or if i have the pc power to do so.
Im not good with advanced things but is there an easy way to train or change the format of a model because there are some model that i need a gguf version of. | 2024-01-14T17:17:31 | https://www.reddit.com/r/LocalLLaMA/comments/196ku25/what_is_the_most_easy_way_to_finetune_a_model_or/ | Gaming-invisibleman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196ku25 | false | null | t3_196ku25 | /r/LocalLLaMA/comments/196ku25/what_is_the_most_easy_way_to_finetune_a_model_or/ | false | false | self | 2 | null |
My first fine-tune: mistral-7b-v0.1-GreeceRome-v0.1 for MLX | 48 | I bought an M2 Mac MacBook last summer in the hopes that we'd be able to use Apple Silicon Macs for LLM training and inference, and with MLX taking off, that time is here 😊
Currently GGUF export isn't supported (but they are working on it) and the pace of new releases has been pretty fast. Until then, for all you Mac enthusiasts:
**Model**: [mistral-7b-v0.1-GreeceRome-v0.1](https://huggingface.co/mlx-community/mistral-7b-v0.1-GreeceRome-v0.1#mistral-7b-v01-greecerome-v01) a classical history assistant fine-tuned from Mistral 7b, on 1,640 Q/A pairs on Greek & Roman history, over 3 epochs.
[Dataset](https://huggingface.co/datasets/wmmarcellino/mistral-7b-v0.1-GreeceRome-v0.1): a classics dataset used for a fine-tune of Mistral 7b base model. It contains 1,640 Q/A pairs on Greek & Roman history. The dataset was generated via Mixtral-8x7b Instruct v01, run over 512 token-length chunks of vol's 2&3 of Will Durants' 13 vol **Story of Civilization** (*Life of Greece* and *Caesar & Christ*).
Training data was formatted with [INST] and [/INST] delimiting instructions:
{"text": "Q: \"Why did many Greeks come to resent Rome's 'liberation' and 'peacekeeping' efforts, such as forbidding class war and interfering in disputes, despite Rome having given Greece freedom from previous conflicts?\"\nA: Many Greeks came to resent Rome's \"liberation\" and \"peacekeeping\" efforts due to several reasons. First, after the Romans had given Greece freedom...(blah blah blah)...interfering in their domestic affairs, and ultimately"}
Anyways, pretty jazzed to release my first baby resource to the open-source community, and will likely put up a GGUF when that's possible.
[Guide](https://www.reddit.com/r/LocalLLaMA/comments/18p731p/project_using_mixtral_8x7b_instruct_v01_q8_to/) to locally generating training data on an M2 Max
[Guide](https://www.reddit.com/r/LocalLLaMA/comments/18ujt0n/using_gpus_on_a_mac_m2_max_via_mlx_update_on/) to fine-tuning using MLX
​ | 2024-01-14T16:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/196kc6l/my_first_finetune_mistral7bv01greeceromev01_for/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196kc6l | false | null | t3_196kc6l | /r/LocalLLaMA/comments/196kc6l/my_first_finetune_mistral7bv01greeceromev01_for/ | false | false | self | 48 | {'enabled': False, 'images': [{'id': '-kSdZFJSalpqT3sfh5fUsnD-99fKRZ6Tm_kYGe5KDKI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BC5lWpe-QcxnsDphiAIXQkkY3kbUaexQMFbcFtRIDTA.jpg?width=108&crop=smart&auto=webp&s=aa96989ceb46b1e79bdcc0380ed921610c6214fd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BC5lWpe-QcxnsDphiAIXQkkY3kbUaexQMFbcFtRIDTA.jpg?width=216&crop=smart&auto=webp&s=9fa3b02c925a17bd8b6c3b764e68fa3e0e8c76d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BC5lWpe-QcxnsDphiAIXQkkY3kbUaexQMFbcFtRIDTA.jpg?width=320&crop=smart&auto=webp&s=2b6f4fc6c384023c665abb628d845c5e2f3c88cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BC5lWpe-QcxnsDphiAIXQkkY3kbUaexQMFbcFtRIDTA.jpg?width=640&crop=smart&auto=webp&s=b89b74ac3be0d081b95756c128f2d0c9f3dd2174', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BC5lWpe-QcxnsDphiAIXQkkY3kbUaexQMFbcFtRIDTA.jpg?width=960&crop=smart&auto=webp&s=51edc75b224ef05f0a19d820e153b288fd021b1c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BC5lWpe-QcxnsDphiAIXQkkY3kbUaexQMFbcFtRIDTA.jpg?width=1080&crop=smart&auto=webp&s=95e0edcc4fc401fff82bc9f8a9efd8165dbb8b83', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BC5lWpe-QcxnsDphiAIXQkkY3kbUaexQMFbcFtRIDTA.jpg?auto=webp&s=221092639a66503c99ef5c3e9abc1d2b134c40f6', 'width': 1200}, 'variants': {}}]} |
On the hunt for weirdo LLMs | 142 | Recently I've been trying to find the weirdest LLMs I can get my hands on: I'm constantly amused by outputs from the occult-trained Mistral-Trismegistus, and ToxicQA occasionally gives me some good, albeit bonkers responses. Has anyone got any suggestions for models trained on things that you really should not be training models on (conspiracy theories, 20 years worth of Usenet posts on the band Rush, etc etc).
I feel like I'm not alone here in wanting to go down rabbit holes with total weirdoes (much like in real life). | 2024-01-14T16:33:38 | https://www.reddit.com/r/LocalLLaMA/comments/196jtc2/on_the_hunt_for_weirdo_llms/ | McWild20XX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196jtc2 | false | null | t3_196jtc2 | /r/LocalLLaMA/comments/196jtc2/on_the_hunt_for_weirdo_llms/ | false | false | self | 142 | null |
Fundamental limitations of *current* LMM approaches | 2 | And possible directions of moving forward.
Preface: Recent posts by LeCun and discussion of LMM limitations got me thinking...
Here is a quote from one of my favorite authors, Leonid Kaganov, from his article on GMO hysteria: (https://lleo.me/dnevnik/2008/02/26 in Russian tho):
"For a brain that lacks a centralized hierarchy of knowledge, where every brick is tightly packed because it follows from the previous one and is confirmed by the subsequent one, any information is perceived as being suspended separately in space. The multiplication table, an extra-sensory perception, a blockbuster, Wikipedia, a co-worker's advice, an advertisement in a glossy magazine, a school textbook, a Sunday sermon, an article in a blog, a TV show, molecular physics, atomic energy, a naked woman, a killer with a shovel - all information has equal rights, and the criterion is still faith."
Now replace "brain" with an LMM and "faith" with "statistical probability" and I think this is a pretty good analogy.
My argument:
The problem, as I see it, lies in the very nature of *vector embeddings*.
They are great for purely associative learning, and indeed that is why well-trained LMMs excel in "soft" areas, but frankly suck at anything that requires precision and "factuality", and "hallucinate" like someone on LSD because they "think" by pure association. Yeah, we have RAG, but that's frankly a crutch (if a powerful one).
Talking of RAG: my GF got me into Semantle (like wordle, but using semantic distance probability) and in fact this is a pretty good indication how crazy embeddings can be sometimes when it comes to associations. At best, current LMMs solve the problem (that was, admittedly, seemingly intractable just a few years ago) of giving AI "common sense", but it only takes you so far.
So... what can replace vector embeddings and yet capture *causal*, hierarchical relationships better?
Knowledge graphs!
In a way, I think "let's think step by step" is a poor man's approximation of hierarchical knowledge by invoking associations, but a much more powerful way would be to explore the knowledge graph of the concepts being discussed.
In my own experience, when asking about, say, "bicycle design", it very often confuses very simple things that relate to the fact that a "bicycle" is a single-track vehicle. Given that a "vehicle" is *usually* (statistically) multi-track like a car, with very different dynamics, its advice is almost always laughably wrong unless you do the work of exploring the "bicycle knowledge graph" yourself, by "leading questions" at the very least.
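As a toy illustration of the kind of graph exploration I mean (a real system would need an actual KG, ideally baked into the architecture; this is just the shape of the idea):

    # Pull a concept's immediate graph neighbourhood into the prompt so the
    # model gets causal/hierarchical facts instead of loose associations.
    KG = {
        "bicycle": [("is_a", "single-track vehicle"), ("has", "two inline wheels")],
        "single-track vehicle": [("property", "leans into turns"),
                                 ("property", "statically unstable")],
    }

    def graph_context(concept, depth=2):
        facts, frontier = [], [concept]
        for _ in range(depth):
            nxt = []
            for c in frontier:
                for rel, obj in KG.get(c, []):
                    facts.append(f"{c} {rel} {obj}")
                    nxt.append(obj)
            frontier = nxt
        return facts

    prompt = "Facts:\n" + "\n".join(graph_context("bicycle")) + "\n\nQuestion: ..."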
Tl;DR: People from machine learning that read this sub, please put more effort in marrying LMMs with knowledge graphs, preferably at the level of model architecture. | 2024-01-14T16:32:01 | https://www.reddit.com/r/LocalLLaMA/comments/196jryw/fundamental_limitations_of_current_lmm_approaches/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196jryw | false | null | t3_196jryw | /r/LocalLLaMA/comments/196jryw/fundamental_limitations_of_current_lmm_approaches/ | false | false | self | 2 | null |
Configuring 34B 200K models with lower context length? | 8 | I've tried out a couple of 34B models, configured with 10.000 context length, but each of them seems to just fall apart after around only 4000 tokens and keep repeating. I've tested for example Nous-Capybara-34B and also Capybara-Tess-Yi-34B-200K-DARE-Ties with the default rope\_freq\_base but both of them behaves badly rather quick.
Are there some settings I could have wrong, like rope? How do I figure out a good rope setting for any of these 34B models at a changed context length, be it for models with 200K support or not? | 2024-01-14T16:15:50 | https://www.reddit.com/r/LocalLLaMA/comments/196jelw/configuring_34b_200k_models_with_lower_context/ | LombarMill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196jelw | false | null | t3_196jelw | /r/LocalLLaMA/comments/196jelw/configuring_34b_200k_models_with_lower_context/ | false | false | self | 8 | null |
Advice Needed: Can this PC handle LocalLLaMA well? | 1 | Processor:
- Type: AMD Ryzen 5
- Generation: Zen 2 (4th generation)
- Model Number: AMD Ryzen 5 4600H
- Cores: 6
- Base Frequency: 3 GHz (3,000 MHz)
- Boost Frequency: 4 GHz (4,000 MHz)
- Cache: 8 MB
- Features: Automatic Overclocking, Virtualization Support
Memory:
- Operational RAM Size: 8 GB
- Memory Type: DDR4
- Memory Frequency: 3,200 MHz (3.2 GHz)
- Installed Slots: 1×
- Total Slots: 2×
Graphics Card:
- Type: Gaming
- Memory: 4 GB
- GPU Brand/Model: NVIDIA GeForce GTX 1650
- Graphics Card Brand: NVIDIA GeForce
- Model Graphics Cards: GTX 1650 | 2024-01-14T15:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/196is1p/advice_needed_can_this_pc_handle_localllama_well/ | tigerzxzz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196is1p | false | null | t3_196is1p | /r/LocalLLaMA/comments/196is1p/advice_needed_can_this_pc_handle_localllama_well/ | false | false | self | 1 | null |
Open-hermes 2.5 is better than GPT-3.5 by my real-world tests, change my mind | 167 | 2024-01-14T15:07:08 | uniformly | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 196hvr1 | false | null | t3_196hvr1 | /r/LocalLLaMA/comments/196hvr1/openhermes_25_is_better_than_gpt35_by_my/ | false | false | 167 | {'enabled': True, 'images': [{'id': 'Q5BwJnCZUbhrfWrA41PmPlQLAs_1HtEEZ72GJdqzwXE', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/eicux9s78fcc1.png?width=108&crop=smart&auto=webp&s=b558484c16cb1aea125b8a8cc9694e39c4d463e0', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/eicux9s78fcc1.png?width=216&crop=smart&auto=webp&s=1cae58801e7e628113ed3bac2cbe93f667e7e00e', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/eicux9s78fcc1.png?width=320&crop=smart&auto=webp&s=748f936a9c660140a9e2d39d6d1cc4bff1d579c8', 'width': 320}], 'source': {'height': 442, 'url': 'https://preview.redd.it/eicux9s78fcc1.png?auto=webp&s=3a1b45dfdc32b15f8eb5705cc5535666b813b8eb', 'width': 564}, 'variants': {}}]} | |||
Why are there so few fine tunes of Internlm 20b? | 6 | There are so many 20B merges, but not many finetunes of InternLM 20B (I saw one trained with Open Assistant, but there are many better datasets now). It seems similar to Yi-34B (2.4 trillion training tokens compared to Yi's 3 trillion, also mainly in English and Chinese). This model, fine-tuned on Capybara for example, would be a good middle ground between Yi and 13B models.
Here is a link to a LLamafied version: [https://huggingface.co/KnutJaegersberg/internlm-20b-llamafied](https://huggingface.co/KnutJaegersberg/internlm-20b-llamafied) | 2024-01-14T14:18:11 | https://www.reddit.com/r/LocalLLaMA/comments/196guzr/why_are_there_so_few_fine_tunes_of_internlm_20b/ | QuieselWusul | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196guzr | false | null | t3_196guzr | /r/LocalLLaMA/comments/196guzr/why_are_there_so_few_fine_tunes_of_internlm_20b/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'R5mQmu8Q3rlmeSDRygTTvVk06DrXokLYs0q8FD2XXtg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B2xa_yz2PCLrimEVMDNuYxRq5KDazYpfeYKkzI3HCU8.jpg?width=108&crop=smart&auto=webp&s=d52104ae44d692fd5743fd300a29e4cfb9fadbb9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B2xa_yz2PCLrimEVMDNuYxRq5KDazYpfeYKkzI3HCU8.jpg?width=216&crop=smart&auto=webp&s=07723ffb7fb785e318cf36c7b505030591a52838', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B2xa_yz2PCLrimEVMDNuYxRq5KDazYpfeYKkzI3HCU8.jpg?width=320&crop=smart&auto=webp&s=97beeb6611fcc46ad4b38c7bd55ef9be8675a696', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B2xa_yz2PCLrimEVMDNuYxRq5KDazYpfeYKkzI3HCU8.jpg?width=640&crop=smart&auto=webp&s=bf41a299082194cc1c2430e86efc29a1feb67475', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B2xa_yz2PCLrimEVMDNuYxRq5KDazYpfeYKkzI3HCU8.jpg?width=960&crop=smart&auto=webp&s=5376243018149ad68ce4ea5369c859a5ee68224a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B2xa_yz2PCLrimEVMDNuYxRq5KDazYpfeYKkzI3HCU8.jpg?width=1080&crop=smart&auto=webp&s=1c5121a430a5e98d448a6751edec5171a20ec602', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B2xa_yz2PCLrimEVMDNuYxRq5KDazYpfeYKkzI3HCU8.jpg?auto=webp&s=a62a2fceddcc646bed9bb3fb7a9c2a292d698816', 'width': 1200}, 'variants': {}}]} |
Win 11 Laptops for trying out LLMs? | 1 | What are the sub-$1,000 laptops people are using to try out LLMs? I'm presuming a GPU and at least 16GB RAM are needed. I don't plan to do custom builds and would prefer to buy something from popular vendors. | 2024-01-14T14:17:31 | https://www.reddit.com/r/LocalLLaMA/comments/196gujv/win_11_laptops_for_trying_out_llms/ | 10vatharam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196gujv | false | null | t3_196gujv | /r/LocalLLaMA/comments/196gujv/win_11_laptops_for_trying_out_llms/ | false | false | self | 1 | null |
Is there a way to use large models and some tools to intelligently convert Sketch design drafts into relatively smart CSS + React code? | 1 | I want to create a tool that converts Sketch files into React + CSS. While CSS can be generated from Sketch layer information through rules, achieving more intelligent DOM layout, CSS naming, and component decomposition for a more perfect reproduction of the design requires a more sophisticated approach.
I've tried rule-based generation of CSS and React code, but the results were not aesthetically pleasing. Integrating image recognition, large models, and other technologies seems challenging. With the advancements in large models, they can accomplish various tasks based on prompts. Do you have any ideas or good examples to share on how to integrate large models to achieve this goal? | 2024-01-14T14:13:53 | https://www.reddit.com/r/LocalLLaMA/comments/196gs27/is_there_a_way_to_use_large_models_and_some_tools/ | Suitable-Mastodon542 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196gs27 | false | null | t3_196gs27 | /r/LocalLLaMA/comments/196gs27/is_there_a_way_to_use_large_models_and_some_tools/ | true | false | self | 1 | null |
I have made a Kaggle notebook to run TabbyAPI | 6 | 2024-01-14T13:19:28 | https://www.kaggle.com/code/blutiger/tabbyapi/notebook | _BluTiger | kaggle.com | 1970-01-01T00:00:00 | 0 | {} | 196fpwi | false | null | t3_196fpwi | /r/LocalLLaMA/comments/196fpwi/i_have_made_a_kaggle_notebook_to_run_tabbyapi/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'p15coSqe7L8wApjnVlwASEYE50BcnmvRuPbSVpGUPaM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hIZn74UCUzMss02AeqYwTjjr9Q4P0f4_c6SjRwG_mWM.jpg?width=108&crop=smart&auto=webp&s=b0ef6b067fd0b46d01dc9f262edb560782bc9f2c', 'width': 108}], 'source': {'height': 160, 'url': 'https://external-preview.redd.it/hIZn74UCUzMss02AeqYwTjjr9Q4P0f4_c6SjRwG_mWM.jpg?auto=webp&s=317a9be4dd095d5a4b95cfdd96ada08acea08513', 'width': 160}, 'variants': {}}]} | ||
Any good 3B coding generation models? | 5 | SLMs like Phi-2 have shown that size doesn't matter (to some extent). So, in the same vein, are there any 3B coding models or Phi-2 fine-tunes? I think a 3B coding model would be extremely practical, since I could have a locally running GitHub Copilot. | 2024-01-14T12:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/196ey54/any_good_3b_coding_generation_models/ | Shoddy_Vegetable_115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196ey54 | false | null | t3_196ey54 | /r/LocalLLaMA/comments/196ey54/any_good_3b_coding_generation_models/ | false | false | self | 5 | null |
Local AI pipeline application | 1 | [removed] | 2024-01-14T12:30:42 | https://www.reddit.com/r/LocalLLaMA/comments/196ev5o/local_ai_pipeline_application/ | AssistantsLab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196ev5o | false | null | t3_196ev5o | /r/LocalLLaMA/comments/196ev5o/local_ai_pipeline_application/ | false | false | self | 1 | null |
Best vision model for GUI manipulation? | 1 | [removed] | 2024-01-14T12:19:39 | https://www.reddit.com/r/LocalLLaMA/comments/196eoeq/best_vision_model_for_gui_manipulation/ | Accomplished_Yard636 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196eoeq | false | null | t3_196eoeq | /r/LocalLLaMA/comments/196eoeq/best_vision_model_for_gui_manipulation/ | false | false | self | 1 | null |
Exllamav2 performance on cpu only better than llama.cpp? | 5 | Hi there,
I'm currently using llama.cpp on my CPU-only machine.
I've heard a lot of good things about exllamav2 in terms of performance, just wondering if there will be a noticeable difference when not using a GPU.
Anyone ever tried that or uses it with CPU only? | 2024-01-14T12:12:31 | https://www.reddit.com/r/LocalLLaMA/comments/196ek5w/exllamav2_performance_on_cpu_only_better_than/ | Frequent_Valuable_47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196ek5w | false | null | t3_196ek5w | /r/LocalLLaMA/comments/196ek5w/exllamav2_performance_on_cpu_only_better_than/ | false | false | self | 5 | null |
Inference of mamba models in pure C | 52 | Hoping this lands in llama.cpp! https://github.com/ggerganov/llama.cpp/issues/4353 | 2024-01-14T11:34:18 | https://github.com/kroggen/mamba.c | waxbolt | github.com | 1970-01-01T00:00:00 | 0 | {} | 196dyov | false | null | t3_196dyov | /r/LocalLLaMA/comments/196dyov/inference_of_mamba_models_in_pure_c/ | false | false | 52 | {'enabled': False, 'images': [{'id': '1bsBxh1HBhWvPYKmeI-f8w3CuO2Gpo5Z4AYrmHWnge0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gH03yXmVa8xvwL2ljTI68Sz3bOocuOay3BUGhvf9vJM.jpg?width=108&crop=smart&auto=webp&s=acfb72d1ae808f6abbf17f419d023281405bcdad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gH03yXmVa8xvwL2ljTI68Sz3bOocuOay3BUGhvf9vJM.jpg?width=216&crop=smart&auto=webp&s=df65d8550cb87ec4ba330a96cc86d0e0f9811037', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gH03yXmVa8xvwL2ljTI68Sz3bOocuOay3BUGhvf9vJM.jpg?width=320&crop=smart&auto=webp&s=4eabdf4d71dcf55744111b95c419012a00b0a725', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gH03yXmVa8xvwL2ljTI68Sz3bOocuOay3BUGhvf9vJM.jpg?width=640&crop=smart&auto=webp&s=0a397f21e2e56d324377444e967efea1cab1753e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gH03yXmVa8xvwL2ljTI68Sz3bOocuOay3BUGhvf9vJM.jpg?width=960&crop=smart&auto=webp&s=010c47ce44342ebc1004f598a6efc4d0a7536504', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gH03yXmVa8xvwL2ljTI68Sz3bOocuOay3BUGhvf9vJM.jpg?width=1080&crop=smart&auto=webp&s=4aa39317f77a2caec497d52402449188274249aa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gH03yXmVa8xvwL2ljTI68Sz3bOocuOay3BUGhvf9vJM.jpg?auto=webp&s=04dcae1071493c0b2a8a1106171692ef65efec18', 'width': 1200}, 'variants': {}}]} | |
How to compute the GPU memory required for 1 output token | 1 | [removed] | 2024-01-14T11:21:50 | https://www.reddit.com/r/LocalLLaMA/comments/196drw9/how_to_compute_the_gpu_memory_required_for_1/ | SideShow_Bot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196drw9 | false | null | t3_196drw9 | /r/LocalLLaMA/comments/196drw9/how_to_compute_the_gpu_memory_required_for_1/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Xzwqqaz8hxV0vqMKJqbqS7iTkl8aIJPQVFdbT44DXM4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xFPlT0_nWReiO652jEs4-REyvgyiovkBI0GDp2BFOm8.jpg?width=108&crop=smart&auto=webp&s=8406da0a31e72fff0ad295cb3f49d1cc49424501', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xFPlT0_nWReiO652jEs4-REyvgyiovkBI0GDp2BFOm8.jpg?width=216&crop=smart&auto=webp&s=44827cf14b29d22e61781f8266c46e33d5f4a7d6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xFPlT0_nWReiO652jEs4-REyvgyiovkBI0GDp2BFOm8.jpg?width=320&crop=smart&auto=webp&s=5319921473e82afdcefbd83ef1a615e8e5e3f764', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xFPlT0_nWReiO652jEs4-REyvgyiovkBI0GDp2BFOm8.jpg?width=640&crop=smart&auto=webp&s=cfc58512342c3af7198985d2509990552c888497', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xFPlT0_nWReiO652jEs4-REyvgyiovkBI0GDp2BFOm8.jpg?width=960&crop=smart&auto=webp&s=7651c1cb51a42814d610c657e4a26134dc88dfac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xFPlT0_nWReiO652jEs4-REyvgyiovkBI0GDp2BFOm8.jpg?width=1080&crop=smart&auto=webp&s=d0be7b70bcee9d3c54e4f612efa4091c6ab71acb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xFPlT0_nWReiO652jEs4-REyvgyiovkBI0GDp2BFOm8.jpg?auto=webp&s=dc2766eb746ecb11e378a80fc89c96641ff78c4c', 'width': 1200}, 'variants': {}}]} |
LLMs to call API and use tools. | 22 | I'm trying to figure out how an LLM that generates text is able to execute commands, call APIs and make use of tools inside apps. I'm guessing there's a secondary program that looks at the outputs of the LLM and that triggers the function/API call or any other capability. Is my understanding right?
Also, I want to build a chatbot which, for some inputs, can query an API and fetch the response from that API. How do I do that? Can someone point me to resources in this space? | 2024-01-14T11:21:00 | https://www.reddit.com/r/LocalLLaMA/comments/196drfh/llms_to_call_api_and_use_tools/ | im_datta0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 196drfh | false | null | t3_196drfh | /r/LocalLLaMA/comments/196drfh/llms_to_call_api_and_use_tools/ | false | false | self | 22 | null |
Building a fully local LLM voice assistant to control my smart home | 31 | 2024-01-14T10:43:17 | https://johnthenerd.com/blog/local-llm-assistant/ | JohnTheNerd3 | johnthenerd.com | 1970-01-01T00:00:00 | 0 | {} | 196d72d | false | null | t3_196d72d | /r/LocalLLaMA/comments/196d72d/building_a_fully_local_llm_voice_assistant_to/ | false | false | default | 31 | null |