| title (string, len 1-300) | score (int64, 0-8.54k) | selftext (string, len 0-41.5k) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url (string, len 0-878) | author (string, len 3-20) | domain (string, len 0-82) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, len 7) | locked (bool, 2 classes) | media (string, len 646-1.8k, nullable) | name (string, len 10) | permalink (string, len 33-82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, len 4-213, nullable) | ups (int64, 0-8.54k) | preview (string, len 301-5.01k, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Anyone running a 5000 series GPU in a Linux VM for LLM/SD with a Linux host (e.g. Proxmox)? Does shutting down your VM crash your host? | 0 | I have a 5070 Ti that is passed through to a Fedora Server 42 VM. I want to run some LLMs and maybe ComfyUI in it.
I had to install the open source Nvidia driver because the older proprietary one doesn't support newer GPUs anymore. Anyway, I followed Fedora's driver install guide and installed the driver successfu... | 2025-05-14T19:43:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kmoia9/anyone_running_a_5000_series_gpu_in_a_linux_vm/ | regunakyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmoia9 | false | null | t3_1kmoia9 | /r/LocalLLaMA/comments/1kmoia9/anyone_running_a_5000_series_gpu_in_a_linux_vm/ | false | false | self | 0 | null |
JoyCaption Beta One: an image captioning Visual Language Model (VLM) built from the ground up as a free, open, and uncensored model for the community to use in training Diffusion models | 1 | [removed] | 2025-05-14T19:29:05 | https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava | noobitom | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kmo5ei | false | null | t3_1kmo5ei | /r/LocalLLaMA/comments/1kmo5ei/joycaption_beta_one_an_image_captioning_visual/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'TItd7P6JybMVSD_-iwYroQbp11rVVr7fbZLrwLuYaVo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2SJ8hUsEO9g2sLHXwu09e-Mxm2KWEgn2ihEMz1ARvrM.jpg?width=108&crop=smart&auto=webp&s=14c7526bc8e2dd16123f224037d628959e477eb0', 'width': 108}, {'height': 116, 'url': 'h... | |
AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms | 135 |
Today, Google announced AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework... | 2025-05-14T19:14:49 | NewtMurky | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmnsol | false | null | t3_1kmnsol | /r/LocalLLaMA/comments/1kmnsol/alphaevolve_a_geminipowered_coding_agent_for/ | false | false | 135 | {'enabled': True, 'images': [{'id': '54-i1F4Xk7efV144Ff_fU_TjAd4MXRQTzSSE2rxjLUY', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/pj1r83skrs0f1.jpeg?width=108&crop=smart&auto=webp&s=b72d12a476de535c70badaa22b582f76fb6ac87f', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/pj1r83skrs0f1.jp... | ||
Base Models That Can Still Complete Text in an Entertaining Way | 82 | Back during the LLaMa-1 to Mistral-7B era, it used to be a lot of fun to just download a base model, give it a ridiculous prompt, and let it autocomplete. The results were often less dry and more entertaining than asking the corresponding instruct models to do it.
But today's models, even the base ones, seem to be ... | 2025-05-14T18:32:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kmmq6d/base_models_that_can_still_complete_text_in_an/ | Soft-Ad4690 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmmq6d | false | null | t3_1kmmq6d | /r/LocalLLaMA/comments/1kmmq6d/base_models_that_can_still_complete_text_in_an/ | false | false | self | 82 | null |
Why do so many people not recommend LLM Studio? | 17 | Curious why so many people do not like this application or prefer an alternative? What's wrong with it? | 2025-05-14T18:29:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kmmo48/why_do_so_many_people_not_recommend_llm_studio/ | intimate_sniffer69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmmo48 | false | null | t3_1kmmo48 | /r/LocalLLaMA/comments/1kmmo48/why_do_so_many_people_not_recommend_llm_studio/ | false | false | self | 17 | null |
My Local LLM Chat Interface: Current Progress and Vision | 80 | Hello everyone, my first reddit post ever! I’ve been building a fully local, offline LLM chat interface designed around actual daily use, fast performance, and a focus on clean, customizable design. It started as a personal challenge and has grown into something I use constantly and plan to evolve much further.
Here’s... | 2025-05-14T18:18:12 | https://v.redd.it/0az6hifchs0f1 | Desperate_Rub_1352 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmmdm9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0az6hifchs0f1/DASHPlaylist.mpd?a=1749838705%2CZGNlNjFlMTMyNmJjNWEwM2E1YWMyY2NjYTMzODYyYWI0OTIwN2E3MTg4YjA3NjIzMTk1MjU1ZTUxNGRiMDA1MQ%3D%3D&v=1&f=sd', 'duration': 48, 'fallback_url': 'https://v.redd.it/0az6hifchs0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kmmdm9 | /r/LocalLLaMA/comments/1kmmdm9/my_local_llm_chat_interface_current_progress_and/ | false | false | 80 | {'enabled': False, 'images': [{'id': 'bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=108&crop=smart&format=pjpg&auto=webp&s=2787f09a4cfec9c8d89a45e369a36e7f27a7e... | |
How to get started with LLM (highschool senior)? | 0 | I am a beginner starting out with LLMs. Can you provide me a roadmap to get started?
For context: I am a high school senior. I have a basic understanding of Python.
What are the things I need to learn to work on LLM from base, I can spend 7h+ for 2 month. | 2025-05-14T18:12:30 | Most-Tea840 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmm8gs | false | null | t3_1kmm8gs | /r/LocalLLaMA/comments/1kmm8gs/how_to_get_started_with_llm_highschool_senior/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'eQmcXfueDu9Qc0eiZrTizoqDkZ3t8SusgktSl4JND6s', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/1zjivv8ags0f1.jpeg?width=108&crop=smart&auto=webp&s=64c9eab036f509ab85ffba62b6e406a47b812f1f', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/1zjivv8ags0f1.jpe... | ||
How to get started with LLM (highschool senior)? | 0 | 2025-05-14T18:10:33 | Most-Tea840 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmm6p2 | false | null | t3_1kmm6p2 | /r/LocalLLaMA/comments/1kmm6p2/how_to_get_started_with_llm_highschool_senior/ | false | false | 0 | {'enabled': True, 'images': [{'id': '0TUaCbB-w38608XTXWLq3y3YFq966Lr7gYXuVoKfEZU', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/jo9lhkt1gs0f1.jpeg?width=108&crop=smart&auto=webp&s=b70533131f02e2005b9fda43eac2c75412bcc97a', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/jo9lhkt1gs0f1.jpe... | |||
Qwen3-30B-A6B-16-Extreme is fantastic | 420 | [https://huggingface.co/DavidAU/Qwen3-30B-A6B-16-Extreme](https://huggingface.co/DavidAU/Qwen3-30B-A6B-16-Extreme)
Quants:
[https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF)
Someone recently mentioned this model here on r/LocalLLaMA ... | 2025-05-14T17:57:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kmlu2y/qwen330ba6b16extreme_is_fantastic/ | DocWolle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmlu2y | false | null | t3_1kmlu2y | /r/LocalLLaMA/comments/1kmlu2y/qwen330ba6b16extreme_is_fantastic/ | false | false | self | 420 | {'enabled': False, 'images': [{'id': 'SJ3pgQQCKG9CpSkqEjPCMkNkO03Y1_NTrzn_Asqv48M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IExLOKklShrLgpO_T6JUZcKO6wohCBUcfkgMfXMtZpQ.jpg?width=108&crop=smart&auto=webp&s=61b6b2ee8a5f4eadfdd153f565d2f56eaf70dbd0', 'width': 108}, {'height': 116, 'url': 'h... |
Building My Local LLM Chat UI: Progress So Far + Future Roadmap | 1 | Hello everyone, my first reddit post ever! I’ve been building a fully local, offline LLM chat interface designed around actual daily use, fast performance, and a focus on clean, customizable design. It started as a personal challenge and has grown into something I use constantly and plan to evolve much further.
Here’s... | 2025-05-14T17:53:12 | https://v.redd.it/0g70v9f2cs0f1 | Desperate_Rub_1352 | /r/LocalLLaMA/comments/1kmlqtr/building_my_local_llm_chat_ui_progress_so_far/ | 1970-01-01T00:00:00 | 0 | {} | 1kmlqtr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0g70v9f2cs0f1/DASHPlaylist.mpd?a=1749966800%2CYjhhYzIzNGY0MTc3NTBiZjY3NTNmYTM2ZTE1OThhZWZjMDU4YjI5ODA1YjM3YzVjZGMxNGQ2OTMwNzY3YTdlZQ%3D%3D&v=1&f=sd', 'duration': 48, 'fallback_url': 'https://v.redd.it/0g70v9f2cs0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kmlqtr | /r/LocalLLaMA/comments/1kmlqtr/building_my_local_llm_chat_ui_progress_so_far/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=108&crop=smart&format=pjpg&auto=webp&s=d990e0f1435d82e6f09ab2dc3c0f563db410c... | |
Xeon 6 6900, 12mrdimm 8800, amx.. worth it? | 1 | Intel's latest Xeon 6 6900 (codenamed Granite Rapids): 12 MRDIMM channels at up to 8800 MT/s, AMX support..
I can find a CPU for under 5k, but no way to find an available motherboard (except the one on AliExpress for 2k).
All I can really find is a complete system on itcreations (USA) with 12x RDIMM 6400 for around 13k IIRC.
What is your o... | 2025-05-14T17:16:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kmksr5/xeon_6_6900_12mrdimm_8800_amx_worth_it/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmksr5 | false | null | t3_1kmksr5 | /r/LocalLLaMA/comments/1kmksr5/xeon_6_6900_12mrdimm_8800_amx_worth_it/ | false | false | self | 1 | null |
How we built our AI code review tool for IDEs | 1 | 2025-05-14T17:08:22 | https://www.coderabbit.ai/blog/how-we-built-our-ai-code-review-tool-for-ides | thewritingwallah | coderabbit.ai | 1970-01-01T00:00:00 | 0 | {} | 1kmklm3 | false | null | t3_1kmklm3 | /r/LocalLLaMA/comments/1kmklm3/how_we_built_our_ai_code_review_tool_for_ides/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw.png?width=108&crop=smart&auto=webp&s=f9c4e206c230b04e4a38f0cb72f2955c17a2ed90', 'width': 108}, {'height': 122, 'url': 'h... | ||
Fun little AI quiz | 1 | 2025-05-14T16:51:27 | workbyatlas | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmk62a | false | null | t3_1kmk62a | /r/LocalLLaMA/comments/1kmk62a/fun_little_ai_quiz/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '747n02rz1s0f1', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/747n02rz1s0f1.jpeg?width=108&crop=smart&auto=webp&s=e510b4ca95c768e6b26444604bdd0c526702ae56', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/747n02rz1s0f1.jpeg?width=216&crop=smart&auto=... | ||
[R] Open Dataset – 15,000 Clean Text Blocks for GPT Fine-Tuning (Public Domain, Non-Fiction) | 1 | [removed] | 2025-05-14T16:47:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kmk2ur/r_open_dataset_15000_clean_text_blocks_for_gpt/ | Patient-Tooth7354 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmk2ur | false | null | t3_1kmk2ur | /r/LocalLLaMA/comments/1kmk2ur/r_open_dataset_15000_clean_text_blocks_for_gpt/ | false | false | self | 1 | null |
Don't know how to proceed with qwen 2.5 vl series models to get the correct bounding boxes around the words in the document | 1 | [removed] | 2025-05-14T16:45:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kmk0nk/dont_know_how_to_proceed_with_qwen_25_vl_series/ | GurEmbarrassed2584 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmk0nk | false | null | t3_1kmk0nk | /r/LocalLLaMA/comments/1kmk0nk/dont_know_how_to_proceed_with_qwen_25_vl_series/ | false | false | self | 1 | null |
NimbleEdge AI – Fully On-Device Llama 3.2 1B Assistant with Text & Voice, No Cloud Needed | 28 | Hi everyone!
We’re excited to share **NimbleEdge AI**, a fully on-device conversational assistant built around **Llama 3.2 1B**, **Whisper Tiny or Google ASR**, and **Kokoro TTS** – all running directly on your mobile device.
The best part? It works **offline**, and **nothing ever leaves your device**—no data is sent... | 2025-05-14T16:35:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kmjr1n/nimbleedge_ai_fully_ondevice_llama_32_1b/ | voidmemoriesmusic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmjr1n | false | null | t3_1kmjr1n | /r/LocalLLaMA/comments/1kmjr1n/nimbleedge_ai_fully_ondevice_llama_32_1b/ | false | false | self | 28 | null |
[FREE SAMPLE] Clean GPT Fine-Tuning Dataset – 15,000 Curated Text Blocks from Public Domain Books (Non-Fiction) | 1 | [removed] | 2025-05-14T16:32:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kmjoht/free_sample_clean_gpt_finetuning_dataset_15000/ | Patient-Tooth7354 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmjoht | false | null | t3_1kmjoht | /r/LocalLLaMA/comments/1kmjoht/free_sample_clean_gpt_finetuning_dataset_15000/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU.png?width=108&crop=smart&auto=webp&s=d86627c87d9d144c16c153653adb9156be4935a0', 'width': 108}, {'height': 116, 'url': 'h... |
NimbleEdge AI – Fully On-Device Llama 3.1 1B Assistant with Text & Voice, No Cloud Needed | 1 | [removed] | 2025-05-14T16:31:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kmjnpa/nimbleedge_ai_fully_ondevice_llama_31_1b/ | voidmemoriesmusic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmjnpa | false | null | t3_1kmjnpa | /r/LocalLLaMA/comments/1kmjnpa/nimbleedge_ai_fully_ondevice_llama_31_1b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pck51c9IH1xwIu945SU8FDzS7FZ6iXCnZkvILeAKGVQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/pck51c9IH1xwIu945SU8FDzS7FZ6iXCnZkvILeAKGVQ.jpeg?width=108&crop=smart&auto=webp&s=a1c342885a1dbfbf7ae6fa969c63e35999633f3d', 'width': 108}, {'height': 162, 'url': '... |
gemini pays less attention to system messages by default? | 0 | Exploring models for an application that will have to frequently inject custom instructions to guide the model in its next response.
I noticed that Gemini, compared to GPT, requires a lot more prompting to follow system messages, and values user messages much more highly by default.
wonder if this is just a result of different... | 2025-05-14T16:31:37 | Goericke | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmjnpg | false | null | t3_1kmjnpg | /r/LocalLLaMA/comments/1kmjnpg/gemini_pays_less_attention_to_system_messages_by/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'aKhf1ZF_jCa2xzdu2dl6RCjBAGa9CPjV5MZOR__DGi0', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/r143ofbwwr0f1.png?width=108&crop=smart&auto=webp&s=a84fcc7cd84db936d9b9f4c9029b4d03c6b2039a', 'width': 108}, {'height': 277, 'url': 'https://preview.redd.it/r143ofbwwr0f1.pn... | ||
now you can create a mobile app by prompting "just code me the best mobile app bro" | 0 | 2025-05-14T16:27:13 | sickleRunner | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmjjnu | false | null | t3_1kmjjnu | /r/LocalLLaMA/comments/1kmjjnu/now_you_can_create_a_mobile_app_by_prompting_just/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'v77u2kdmxr0f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/v77u2kdmxr0f1.png?width=108&crop=smart&auto=webp&s=cb7e858080fcb02082a8e179d15accb67c7ecc7f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/v77u2kdmxr0f1.png?width=216&crop=smart&auto=we... | ||
Personal notes: Agentic Loop from OpenAI's GPT-4.1 Prompting Guide | 3 | Finally got around to the bookmark I had saved a while ago: OpenAI's prompting guide:
[https://cookbook.openai.com/examples/gpt4-1\_prompting\_guide](https://cookbook.openai.com/examples/gpt4-1_prompting_guide)
I have to say I really like it! I am still working through it. I usually scribble my notes in Excalidr... | 2025-05-14T16:09:45 | phoneixAdi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmj3zn | false | null | t3_1kmj3zn | /r/LocalLLaMA/comments/1kmj3zn/personal_notes_agentic_loop_from_openais_gpt41/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': '27dndr0qtr0f1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/27dndr0qtr0f1.png?width=108&crop=smart&auto=webp&s=664d145bc70f28ae617dab9a1ee30f24a59ac398', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/27dndr0qtr0f1.png?width=216&crop=smart&auto=we... | |
Roadmap for frontier models summer 2025 | 3 | 1. grok 3.5
2. o3 pro / o4 full
3. gemini ultra
4. claude 4 (neptune)
5. deepseek r2
6. r2 operator
[https://x.com/iruletheworldmo/status/1922413637496344818](https://x.com/iruletheworldmo/status/1922413637496344818) | 2025-05-14T16:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kmj3gl/roadmap_for_frontier_models_summer_2025/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmj3gl | false | null | t3_1kmj3gl | /r/LocalLLaMA/comments/1kmj3gl/roadmap_for_frontier_models_summer_2025/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ztSy_MabctLaC-zAjVunO33lufNbxoJXJs5r6phqO7g', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fpWIsY_XH0hB_jTvQqTBczYhToRDGpL7hcAo67ksQBI.jpg?width=108&crop=smart&auto=webp&s=ce762d95475584036c857d08e7062d5ad2dfa2f9', 'width': 108}], 'source': {'height': 20... |
[image processing failed] | 1 | [deleted] | 2025-05-14T15:43:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kmifn5 | false | null | t3_1kmifn5 | /r/LocalLLaMA/comments/1kmifn5/image_processing_failed/ | false | false | default | 1 | null | ||
[image processing failed] | 1 | [deleted] | 2025-05-14T15:42:57 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kmiffg | false | null | t3_1kmiffg | /r/LocalLLaMA/comments/1kmiffg/image_processing_failed/ | false | false | default | 1 | null | ||
[image processing failed] | 1 | [deleted] | 2025-05-14T15:42:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kmif8e | false | null | t3_1kmif8e | /r/LocalLLaMA/comments/1kmif8e/image_processing_failed/ | false | false | default | 1 | null | ||
I updated the SmolVLM llama.cpp webcam demo to run locally in-browser on WebGPU. | 418 | Inspired by [https://www.reddit.com/r/LocalLLaMA/comments/1klx9q2/realtime\_webcam\_demo\_with\_smolvlm\_using\_llamacpp/](https://www.reddit.com/r/LocalLLaMA/comments/1klx9q2/realtime_webcam_demo_with_smolvlm_using_llamacpp/), I decided to update the llama.cpp server demo so that it runs 100% locally in-browser on Web... | 2025-05-14T15:33:15 | https://v.redd.it/or5b3ks8nr0f1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmi6vl | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/or5b3ks8nr0f1/DASHPlaylist.mpd?a=1749828809%2CYjE2ZTA2ZGZmNjQwZmRmNzYzZWJiNTNmZjAxOWFlYmEzZTYxOThmNmVmZGYwYjNlYWJmMmNhNzk0MGUwZjI4NQ%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/or5b3ks8nr0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kmi6vl | /r/LocalLLaMA/comments/1kmi6vl/i_updated_the_smolvlm_llamacpp_webcam_demo_to_run/ | false | false | 418 | {'enabled': False, 'images': [{'id': 'Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An.png?width=108&crop=smart&format=pjpg&auto=webp&s=c624c3accf3427a26457b7b0b9e23d62726b... | |
Stable Audio Open Small - new fast audio generation model | 60 | **Weights**: [https://huggingface.co/stabilityai/stable-audio-open-small](https://huggingface.co/stabilityai/stable-audio-open-small)
**Paper**: [https://arxiv.org/abs/2505.08175](https://arxiv.org/abs/2505.08175)
**Arm learning path**: [https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/run-stable-audio... | 2025-05-14T15:31:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kmi59x/stable_audio_open_small_new_fast_audio_generation/ | iGermanProd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmi59x | false | null | t3_1kmi59x | /r/LocalLLaMA/comments/1kmi59x/stable_audio_open_small_new_fast_audio_generation/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': '39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=108&crop=smart&auto=webp&s=49e48660beb5870fa7e2d7eb5025f203aee72565', 'width': 108}, {'height': 116, 'url': 'h... |
AMD Strix Halo (Ryzen AI Max+ 395) GPU LLM Performance | 192 | I've been doing some (ongoing) testing on a Strix Halo system recently and with a bunch of desktop systems coming out, and very few advanced/serious GPU-based LLM performance reviews out there, I figured it might be worth sharing a few notes I've made on the current performance and state of software.
This post will pr... | 2025-05-14T15:29:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kmi3ra/amd_strix_halo_ryzen_ai_max_395_gpu_llm/ | randomfoo2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmi3ra | false | null | t3_1kmi3ra | /r/LocalLLaMA/comments/1kmi3ra/amd_strix_halo_ryzen_ai_max_395_gpu_llm/ | false | false | self | 192 | {'enabled': False, 'images': [{'id': 'LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E.png?width=108&crop=smart&auto=webp&s=3c6c0b68f1819ae018c774a2adecbe294e588c55', 'width': 108}, {'height': 121, 'url': 'h... |
Stable Audio Open Small - a new fast audio generation model | 1 | [removed] | 2025-05-14T15:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kmi3il/stable_audio_open_small_a_new_fast_audio/ | iGermanProd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmi3il | false | null | t3_1kmi3il | /r/LocalLLaMA/comments/1kmi3il/stable_audio_open_small_a_new_fast_audio/ | false | false | self | 1 | null |
Stable Audio Open Small - a new fast audio generation model | 1 | [removed] | 2025-05-14T15:25:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kmi02i/stable_audio_open_small_a_new_fast_audio/ | iGermanProd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmi02i | false | null | t3_1kmi02i | /r/LocalLLaMA/comments/1kmi02i/stable_audio_open_small_a_new_fast_audio/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=108&crop=smart&auto=webp&s=49e48660beb5870fa7e2d7eb5025f203aee72565', 'width': 108}, {'height': 116, 'url': 'h... |
Open source robust LLM extractor for HTML/Markdown in Typescript | 7 | While working with LLMs for structured web data extraction, I kept running into issues with invalid JSON and broken links in the output. This led me to build a library focused on robust extraction and enrichment:
* **Clean HTML conversion**: transforms HTML into LLM-friendly markdown with an option to extract just the... | 2025-05-14T15:21:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kmhwah/open_source_robust_llm_extractor_for_htmlmarkdown/ | Visual-Librarian6601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmhwah | false | null | t3_1kmhwah | /r/LocalLLaMA/comments/1kmhwah/open_source_robust_llm_extractor_for_htmlmarkdown/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU.png?width=108&crop=smart&auto=webp&s=a6802fb385755f9fcc3fc7eaa69d26bee4e224c6', 'width': 108}, {'height': 108, 'url': 'h... |
Alpakafarm Team :D | 1 | [removed] | 2025-05-14T15:19:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kmhv72/alpakafarm_team_d/ | hashashinsophia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmhv72 | false | null | t3_1kmhv72 | /r/LocalLLaMA/comments/1kmhv72/alpakafarm_team_d/ | false | false | self | 1 | null |
Drummer's Snowpiercer 15B v1 - Trudge through the winter with a finetune of Nemotron 15B Thinker! | 85 | 2025-05-14T15:15:31 | https://huggingface.co/TheDrummer/Snowpiercer-15B-v1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kmhr87 | false | null | t3_1kmhr87 | /r/LocalLLaMA/comments/1kmhr87/drummers_snowpiercer_15b_v1_trudge_through_the/ | false | false | 85 | {'enabled': False, 'images': [{'id': 'vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8.png?width=108&crop=smart&auto=webp&s=87d6ba94090314f595b424d9a173806dbfd0cb5e', 'width': 108}, {'height': 116, 'url': 'h... | ||
"I Just Think They're Neat" - Marge Simpson | 0 | 2025-05-14T15:14:24 | Accomplished_Mode170 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmhq92 | false | null | t3_1kmhq92 | /r/LocalLLaMA/comments/1kmhq92/i_just_think_theyre_neat_marge_simpson/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'Rfg_TjB8lvgfXcS6q-cjV585IfwodPYnNvhSWEuZjTY', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/s5r8gzhdkr0f1.png?width=108&crop=smart&auto=webp&s=cea9ab4bb9089a8bbe9c01f0fb6273b539fe2499', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/s5r8gzhdkr0f1.pn... | |||
Seeking VRAM Backend Recommendations & Performance Comparisons for Multi-GPU AMD Setup (7900xtx x2 + 7800xt) - Gemma, Qwen Models | 0 | Hi everyone,
I'm looking for advice on the best way to maximize output speed/throughput when running large language models on my setup. I'm primarily interested in running Gemma3:27b, Qwen3 32B models, and I'm trying to determine the most efficient VRAM backend to utilize.
**My hardware is:**
* **GPUs: (64GB)** 2x... | 2025-05-14T15:14:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kmhq4s/seeking_vram_backend_recommendations_performance/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmhq4s | false | null | t3_1kmhq4s | /r/LocalLLaMA/comments/1kmhq4s/seeking_vram_backend_recommendations_performance/ | false | false | self | 0 | null |
SWE-rebench: A continuously updated benchmark for SWE LLMs | 28 | Hi! We present [SWE-rebench](https://swe-rebench.com/) — a new benchmark for evaluating agentic LLMs on a continuously updated and decontaminated set of real-world software engineering tasks, mined from active GitHub repositories.
SWE-rebench combines the methodologies of SWE-bench and LiveCodeBench: we collect new is... | 2025-05-14T14:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kmhb0c/swerebench_a_continuously_updated_benchmark_for/ | Fabulous_Pollution10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmhb0c | false | null | t3_1kmhb0c | /r/LocalLLaMA/comments/1kmhb0c/swerebench_a_continuously_updated_benchmark_for/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho', 'resolutions': [{'height': 101, 'url': 'https://external-preview.redd.it/qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho.png?width=108&crop=smart&auto=webp&s=149340fa6585600bd89eae2febd845a1cbc5b03b', 'width': 108}, {'height': 203, 'url': '... | |
May 2025 Model Benchmarks - Mac vs. 5080 | 0 | | Model | MMLU (%) | 4-bit RAM | M3 Max 64 GB (t/s, TTFT) | M4 Max 64 GB (t/s, TTFT) | M4 Max 128 GB (t/s, TTFT) | RTX 5080 16 GB (t/s, TTFT) |
|------------------------------|----------|-----------|---------------------------|---------------------------|----------------------------|------------... | 2025-05-14T14:32:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kmgoj9/may_2025_model_benchmarks_mac_vs_5080/ | FroyoCommercial627 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmgoj9 | false | null | t3_1kmgoj9 | /r/LocalLLaMA/comments/1kmgoj9/may_2025_model_benchmarks_mac_vs_5080/ | false | false | self | 0 | null |
Wan-AI/Wan2.1-VACE-14B · Hugging Face (Apache-2.0) | 152 | **Wan2.1** [VACE](https://github.com/ali-vilab/VACE), an all-in-one model for video creation and editing | 2025-05-14T14:06:06 | https://huggingface.co/Wan-AI/Wan2.1-VACE-14B | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kmg1ht | false | null | t3_1kmg1ht | /r/LocalLLaMA/comments/1kmg1ht/wanaiwan21vace14b_hugging_face_apache20/ | false | false | 152 | {'enabled': False, 'images': [{'id': 'TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY.png?width=108&crop=smart&auto=webp&s=426d66cf7de922a930e072de2b3577218f43a8fc', 'width': 108}, {'height': 116, 'url': 'h... | |
Testing LLMs in prod feels way harder than it should | 1 | [removed] | 2025-05-14T13:50:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kmfop9/testing_llms_in_prod_feels_way_harder_than_it/ | Aggravating_Job2019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmfop9 | false | null | t3_1kmfop9 | /r/LocalLLaMA/comments/1kmfop9/testing_llms_in_prod_feels_way_harder_than_it/ | false | false | self | 1 | null |
best model i can run on mac air m3 16gb ram? | 1 | [removed] | 2025-05-14T13:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kmfcq5/best_model_i_can_run_on_mac_air_m3_16gb_ram/ | Acrobatic-Ad-4211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmfcq5 | false | null | t3_1kmfcq5 | /r/LocalLLaMA/comments/1kmfcq5/best_model_i_can_run_on_mac_air_m3_16gb_ram/ | false | false | self | 1 | null |
Is there a benchmark that shows "prompt processing speed"? | 3 | I've been checking Artificial Analysis and others, and while they are very adamant about output speed, I've yet to see "input speed".
When working with large codebases, I think prompt ingestion speed is VERY important.
Any benches working on this? Something like "long input, short output". | 2025-05-14T13:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kmf2w9/is_there_a_benchmark_that_shows_prompt_processing/ | OmarBessa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmf2w9 | false | null | t3_1kmf2w9 | /r/LocalLLaMA/comments/1kmf2w9/is_there_a_benchmark_that_shows_prompt_processing/ | false | false | self | 3 | null |
Turn any toolkit into an MCP server | 0 | If you’ve ever wanted to expose your own toolkit (like an ArXiv search tool, a Wikipedia fetcher, or any custom Python utility) as a lightweight service for CAMEL agents to call remotely, MCP (Model Context Protocol) makes it trivial. Here’s how you can get started in just three steps:
# 1. Wrap & expose your toolkit
... | 2025-05-14T13:19:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kmeyfk/turn_any_toolkit_into_an_mcp_server/ | Fluffy_Sheepherder76 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmeyfk | false | null | t3_1kmeyfk | /r/LocalLLaMA/comments/1kmeyfk/turn_any_toolkit_into_an_mcp_server/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=108&crop=smart&auto=webp&s=10ed01a1382f33933099b924e2555416a77c4890', 'width': 108}, {'height': 122, 'url': '... |
Turn any toolkit into an MCP server in 3 easy steps 🐫🔧 | 1 | [removed] | 2025-05-14T13:17:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kmexee/turn_any_toolkit_into_an_mcp_server_in_3_easy/ | Fluffy_Sheepherder76 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmexee | false | null | t3_1kmexee | /r/LocalLLaMA/comments/1kmexee/turn_any_toolkit_into_an_mcp_server_in_3_easy/ | false | false | self | 1 | null |
GitHub - ByteDance-Seed/Seed1.5-VL: Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving state-of-the-art performance on 38 out of 60 public benchmarks. | 49 | Let's wait for the weights. | 2025-05-14T13:12:54 | https://github.com/ByteDance-Seed/Seed1.5-VL | foldl-li | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kmetlw | false | null | t3_1kmetlw | /r/LocalLLaMA/comments/1kmetlw/github_bytedanceseedseed15vl_seed15vl_a/ | false | false | 49 | {'enabled': False, 'images': [{'id': '0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E.png?width=108&crop=smart&auto=webp&s=3dc722529f18365a867476614805992d6ccf3cb6', 'width': 108}, {'height': 108, 'url': 'h... | |
Build DeepSeek architecture from scratch | 20 high quality video lectures | 113 |
[A few notes I made as part of this playlist](https://i.redd.it/of6lxo00sq0f1.gif)
Here are the 20 lectures covering everything from Multi-Head Latent Attention to Mixture of Experts.
It took me 2 months to finish recording these lectures.
One of the most challenging (and also rewarding) thing I have done thi... | 2025-05-14T12:36:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kme2c4/build_deepseek_architecture_from_scratch_20_high/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kme2c4 | false | {'oembed': {'author_name': 'Vizuara', 'author_url': 'https://www.youtube.com/@vizuara', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/QWNxQIq0hMo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pictu... | t3_1kme2c4 | /r/LocalLLaMA/comments/1kme2c4/build_deepseek_architecture_from_scratch_20_high/ | false | false | 113 | {'enabled': False, 'images': [{'id': 'KAbXE4K5sDdk4MosCKTIZy94mD_n03QyKwLpBwLHH7s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KAbXE4K5sDdk4MosCKTIZy94mD_n03QyKwLpBwLHH7s.jpeg?width=108&crop=smart&auto=webp&s=4afe035c944e7e0f78c85fdddb0139a5d0e81848', 'width': 108}, {'height': 162, 'url': '... | |
best small language model? around 2-10b parameters | 52 | What's the best small language model for chatting in English only? No need for any type of coding, math, or multilingual capabilities. I've seen Gemma and the smaller Qwen models, but are there any better alternatives that focus just on chatting/emotional intelligence?
Sorry if my question seems stupid, I'm still new to ... | 2025-05-14T12:33:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kmdzv0/best_small_language_model_around_210b_parameters/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmdzv0 | false | null | t3_1kmdzv0 | /r/LocalLLaMA/comments/1kmdzv0/best_small_language_model_around_210b_parameters/ | false | false | self | 52 | null |
4 hours to go! | 1 | [removed] | 2025-05-14T12:16:17 | MihirBarve | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmdnlt | false | null | t3_1kmdnlt | /r/LocalLLaMA/comments/1kmdnlt/4_hours_to_go/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '00opoqowoq0f1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/00opoqowoq0f1.jpeg?width=108&crop=smart&auto=webp&s=233124aee75ed9e773334ea9b15f44935b0b18d6', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/00opoqowoq0f1.jpeg?width=216&crop=smart&auto=... | |
recommendations for tools/templates to create MCP hosts, clients and servers | 2 | MCP servers is perhaps the best served, but there's currently so much out there of variable quality, I wanted to check in to see what you have found and which are recommended. Python language preferred. | 2025-05-14T12:12:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kmdksm/recommendations_for_toolstemplates_to_create_mcp/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmdksm | false | null | t3_1kmdksm | /r/LocalLLaMA/comments/1kmdksm/recommendations_for_toolstemplates_to_create_mcp/ | false | false | self | 2 | null |
chat.qwen.ai & chat.z.ai has the same UI | 1 | Both Qwen's and Z's chat interfaces have the same layout and the same menu settings, but they don't seem to mention each other? Or are they using some chat template that others are using as well? | 2025-05-14T12:08:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kmdhq3/chatqwenai_chatzai_has_the_same_ui/ | Sheeple9001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmdhq3 | false | null | t3_1kmdhq3 | /r/LocalLLaMA/comments/1kmdhq3/chatqwenai_chatzai_has_the_same_ui/ | false | false | self | 1 | null |
What does llama.cpp's http server's file-upload button do? | 1 | Does it simply concatenate the file and my direct prompt, treating the concatenation as the prompt?
I'm using Llama 3.2 3B Q4\_K\_S, but in case my above suspicion is true, that does not matter, as no model would yield reliable results.
What I want to do is ask questions about a file's contents.
In my 15 experiments, som... | 2025-05-14T11:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/1kmd9f9/what_does_llamacpps_http_servers_fileupload/ | kdjfskdf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmd9f9 | false | null | t3_1kmd9f9 | /r/LocalLLaMA/comments/1kmd9f9/what_does_llamacpps_http_servers_fileupload/ | false | false | self | 1 | null |
Browser with tor for onion sites | 1 | [removed] | 2025-05-14T11:55:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kmd8jv/browser_with_tor_for_onion_sites/ | chillax9041 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmd8jv | false | null | t3_1kmd8jv | /r/LocalLLaMA/comments/1kmd8jv/browser_with_tor_for_onion_sites/ | false | false | self | 1 | null |
Searching a most Generous(in limits) fully managed Retrieval-Augmented Generation (RAG) service provider | 3 | I need projects like SciPhi's R2R ([https://github.com/SciPhi-AI/R2R](https://github.com/SciPhi-AI/R2R)), but the cloud limits are too tight for what I need.
Are there any other options or projects out there that do similar things without those limits? I would really appreciate any suggestions or tips! Thanks! | 2025-05-14T11:54:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kmd8ca/searching_a_most_generousin_limits_fully_managed/ | dagm10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmd8ca | false | null | t3_1kmd8ca | /r/LocalLLaMA/comments/1kmd8ca/searching_a_most_generousin_limits_fully_managed/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k.png?width=108&crop=smart&auto=webp&s=35fd60419df55a4cd8ea2ae0fd63c949e37bc2ad', 'width': 108}, {'height': 108, 'url': 'h... |
Local AI automation pipelines | 2 | Just wondering, what do you use for AI automation pipelines that run locally? Something like [make.com](http://make.com) or vectorshift.ai?
I want to run a few routine tasks with an LLM, but do not want to run them on a public cloud. | 2025-05-14T11:53:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kmd7o5/local_ai_automation_pipelines/ | mancubus77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmd7o5 | false | null | t3_1kmd7o5 | /r/LocalLLaMA/comments/1kmd7o5/local_ai_automation_pipelines/ | false | false | self | 2 | null |
LLM - better chunking method | 15 | Problems with using an LLM to chunk:
1. Time/latency -> it takes time for the LLM to output all the chunks.
2. Hitting the output context window cap -> since you’re essentially re-creating entire documents, just in chunks, you’ll often hit the token capacity of the output window.
3. Cost -> since you're essentially output... | 2025-05-14T11:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kmcdyt/llm_better_chunking_method/ | Phoenix2990 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmcdyt | false | null | t3_1kmcdyt | /r/LocalLLaMA/comments/1kmcdyt/llm_better_chunking_method/ | false | false | self | 15 | null |
Multi-Instance GPU (MIG) for tensor parallel possible | 2 | I have an idea that might be very stupid; I wonder if it is possible at all.
I have 5x 3090/4090. I wonder if I can add one RTX 6000 Pro to the setup, then use Nvidia MIG to split the RTX 6000 Pro into 3 instances of 24GB each for 8-GPU tensor parallel.
I understand that splitting a GPU into 3 doesn't magically make it 3x. However, tensor p... | 2025-05-14T10:35:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kmbvfd/multiinstance_gpu_mig_for_tensor_parallel_possible/ | Such_Advantage_6949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmbvfd | false | null | t3_1kmbvfd | /r/LocalLLaMA/comments/1kmbvfd/multiinstance_gpu_mig_for_tensor_parallel_possible/ | false | false | self | 2 | null |
Qwen3 benchmarks from the technical report in two tables (thinking and non-thinking) for easier comparison | 1 | [removed] | 2025-05-14T09:47:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kmb5mc/qwen3_benchmarks_from_the_technical_report_in_two/ | BigPoppaK78 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmb5mc | false | null | t3_1kmb5mc | /r/LocalLLaMA/comments/1kmb5mc/qwen3_benchmarks_from_the_technical_report_in_two/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=108&crop=smart&auto=webp&s=07bfd7cb9b109d6e68bdda4375a51ec55b717092', 'width': 108}, {'height': 108, 'url': 'h... |
Announcing MAESTRO: A Local-First AI Research App! (Plus some benchmarks) | 183 | Hey r/LocalLLaMA!
I'm excited to introduce **MAESTRO** (Multi-Agent Execution System & Tool-driven Research Orchestrator), an AI-powered research application designed for deep research tasks, with a strong focus on local control and capabilities. You can set it up locally to conduct comprehensive research using your o... | 2025-05-14T09:35:43 | https://www.reddit.com/gallery/1kmaztr | hedonihilistic | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kmaztr | false | null | t3_1kmaztr | /r/LocalLLaMA/comments/1kmaztr/announcing_maestro_a_localfirst_ai_research_app/ | false | false | 183 | {'enabled': True, 'images': [{'id': 'bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ', 'resolutions': [{'height': 162, 'url': 'https://external-preview.redd.it/bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ.png?width=108&crop=smart&auto=webp&s=9b3fd385cff2b5b74808ada5502a353a8bc963a7', 'width': 108}, {'height': 324, 'url': 'h... | |
Benchmarking models with a custom QA dataset - what's the best workflow? | 2 | There are plenty of models available, and even for a single model, there are quite a few different settings to tinker with. I’d like to evaluate and benchmark them using my own question-and-answer dataset.
My example use case is to test different quantized versions of a vision model with specific questions about a sma... | 2025-05-14T09:26:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kmauvs/benchmarking_models_with_a_custom_qa_dataset/ | DevilaN82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmauvs | false | null | t3_1kmauvs | /r/LocalLLaMA/comments/1kmauvs/benchmarking_models_with_a_custom_qa_dataset/ | false | false | self | 2 | null |
Seeking Guidance: Integrating RealtimeTTS with dia-1.6B or OrpheusTTS for Arabic Conversational AI | 1 | [removed] | 2025-05-14T08:53:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kmaekb/seeking_guidance_integrating_realtimetts_with/ | No-Reindeer-9968 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmaekb | false | null | t3_1kmaekb | /r/LocalLLaMA/comments/1kmaekb/seeking_guidance_integrating_realtimetts_with/ | false | false | self | 1 | null |
All Qwen3 benchmarks from the technical report in two tables (thinking and non-thinking) | 1 | [removed] | 2025-05-14T08:46:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kmabk8/all_qwen3_benchmarks_from_the_technical_report_in/ | BigPoppaK78 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmabk8 | false | null | t3_1kmabk8 | /r/LocalLLaMA/comments/1kmabk8/all_qwen3_benchmarks_from_the_technical_report_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=108&crop=smart&auto=webp&s=07bfd7cb9b109d6e68bdda4375a51ec55b717092', 'width': 108}, {'height': 108, 'url': 'h... |
What local model and strategies should I use to generate reports? | 1 | Hello,
I have been looking for solutions for generating reports for finished projects at work. By this I mean that I have a couple dozen PDFs (actually a lot of PowerPoints, but I can convert them), and I want to create a report (<20 pages) following a clear structure, for which I can provide an example or template.
I ... | 2025-05-14T08:40:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kma8id/what_local_model_and_strategies_should_i_use_to/ | Ok_Appeal8653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kma8id | false | null | t3_1kma8id | /r/LocalLLaMA/comments/1kma8id/what_local_model_and_strategies_should_i_use_to/ | false | false | self | 1 | null |
Found a pretty good cline-compatible Qwen3 MoE for Apple Silicon | 22 | I regularly test new models appearing on ollama's directory for use on my Mac M2 Ultra. Sparse models load tokens faster on Silicon so MoEs are models I target. [mychen76/qwen3\_cline\_roocode:30b ](https://www.ollama.com/mychen76/qwen3_cline_roocode:30b)is a MoE of qwen3 and so far, it has performed very well. The sam... | 2025-05-14T08:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kma6tm/found_a_pretty_good_clinecompatible_qwen3_moe_for/ | FluffyGoatNerder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kma6tm | false | null | t3_1kma6tm | /r/LocalLLaMA/comments/1kma6tm/found_a_pretty_good_clinecompatible_qwen3_moe_for/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
[D] How \`thinking_budget\` effect in Qwen3? | 2 | After we set thinking\_budget, does Qwen3 try to consume \`thinking\_budget\` thinking tokens, or is it just a maximum limit?
\`thinking\_budget\` only exists in Qwen's official API documentation; does it exist in open source inference libraries?
Below is the text from the Qwen3 technical report.
\> Thinking Con... | 2025-05-14T08:33:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kma57b/d_how_thinking_budget_effect_in_qwen3/ | Logical_Divide_3595 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kma57b | false | null | t3_1kma57b | /r/LocalLLaMA/comments/1kma57b/d_how_thinking_budget_effect_in_qwen3/ | false | false | self | 2 | null |
[D] How `thinking_budget` effect in Qwen3? | 1 | [removed] | 2025-05-14T08:32:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kma4l6/d_how_thinking_budget_effect_in_qwen3/ | Logical_Divide_3595 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kma4l6 | false | null | t3_1kma4l6 | /r/LocalLLaMA/comments/1kma4l6/d_how_thinking_budget_effect_in_qwen3/ | false | false | self | 1 | null |
Created a chat UI for Local Use Cases, Need feedback | 1 | [removed] | 2025-05-14T08:25:47 | https://v.redd.it/27q63q5pjp0f1 | Desperate_Rub_1352 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kma1ex | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/27q63q5pjp0f1/DASHPlaylist.mpd?a=1749803163%2CZWY5NjQwZGU4YTJmYTAwMjFmYmRkNzU2Zjc0ZWExMzQ2NTFkMzI0YTc1YzI5ZGRjNzM0MWQ0ZDFlMzQzOGIwMA%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/27q63q5pjp0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kma1ex | /r/LocalLLaMA/comments/1kma1ex/created_a_chat_ui_for_local_use_cases_need/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7.png?width=108&crop=smart&format=pjpg&auto=webp&s=55a20a5fab386283ca78a8b9edc2290f2352c... | |
llama.cpp for idiots. An easy way to get models? | 0 | Persuaded by the number of people saying we should use llama.cpp instead of ollama I gave it a go. First I had to download it. I am on a CPU only machine so I went to [https://github.com/ggml-org/llama.cpp/releases](https://github.com/ggml-org/llama.cpp/releases) and downloaded and unzipped [https://github.com/ggml-org... | 2025-05-14T08:18:58 | https://www.reddit.com/r/LocalLLaMA/comments/1km9y2h/llamacpp_for_idiots_an_easy_way_to_get_models/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km9y2h | false | null | t3_1km9y2h | /r/LocalLLaMA/comments/1km9y2h/llamacpp_for_idiots_an_easy_way_to_get_models/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'h... |
LLM response doesn't end on mobile app | 0 | I am using an opensource app called PocketPal to use small LLMs locally on my phone. The first one I tried is Qwen3-4B-Q4\_K\_M. Because I want to include the thinking, I increased the output length, otherwise it cuts off halfway. Now it works well, at quite a usable speed, given it's running locally on a mobile phone.... | 2025-05-14T08:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1km9vra/llm_response_doesnt_end_on_mobile_app/ | __ThrowAway__123___ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km9vra | false | null | t3_1km9vra | /r/LocalLLaMA/comments/1km9vra/llm_response_doesnt_end_on_mobile_app/ | false | false | self | 0 | null |
Zenbook S16 or alternative with more Ram | 3 | Hey there!
Currently testing and fiddling a lot with local LLMs.
I need a new laptop which can also handle AV1 encode in hardware. And I want to test more with local LLMs, mainly using Continue in VS Code.
The catch I seem to run into is that there are no options in laptops with the Ryzen AI series that have affordable or ... | 2025-05-14T07:00:22 | https://www.reddit.com/r/LocalLLaMA/comments/1km8uuk/zenbook_s16_or_alternative_with_more_ram/ | PresentationSolid643 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km8uuk | false | null | t3_1km8uuk | /r/LocalLLaMA/comments/1km8uuk/zenbook_s16_or_alternative_with_more_ram/ | false | false | self | 3 | null |
What are some good models I should check out on my MBP with M3 Pro (18GB mem)? | 0 | I have 18GB of memory. I've been running Mistral's 7B model. It hallucinates pretty badly, to the point that it becomes unusable. What are some models you have found to run amazingly well on your M3 Pro chip? With so many new models launching, I find it really hard to keep up. | 2025-05-14T06:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1km8s9f/what_are_some_good_models_i_should_check_out_on/ | Professional_Field79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km8s9f | false | null | t3_1km8s9f | /r/LocalLLaMA/comments/1km8s9f/what_are_some_good_models_i_should_check_out_on/ | false | false | self | 0 | null |
Best open source transcription and diarization models out there for German? | 1 | [removed] | 2025-05-14T06:43:01 | https://www.reddit.com/r/LocalLLaMA/comments/1km8lnp/best_open_source_transcription_and_diarization/ | OstrichSerious6755 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km8lnp | false | null | t3_1km8lnp | /r/LocalLLaMA/comments/1km8lnp/best_open_source_transcription_and_diarization/ | false | false | self | 1 | null |
On-Device AgentCPM-GUI is Now Open-Source | 68 | Key Features:
\- 1st open-source GUI agent finely tuned for Chinese apps
\- RFT-enhanced reasoning abilities
\- Compact action-space design
\- High-quality GUI grounding
| 2025-05-14T06:17:31 | https://v.redd.it/9k8szctowo0f1 | Lynncc6 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1km889x | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9k8szctowo0f1/DASHPlaylist.mpd?a=1749795468%2CYWEwZDg2OGI2ODBhMTY3N2NiNWIzNzI3ZGRjOTc4NzdkMTQ5MDAzMzkyMzZjNDE0ZjNmZWYzZGMyOGM1ODY1OQ%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/9k8szctowo0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1km889x | /r/LocalLLaMA/comments/1km889x/ondevice_agentcpmgui_is_now_opensource/ | false | false | 68 | {'enabled': False, 'images': [{'id': 'aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7.png?width=108&crop=smart&format=pjpg&auto=webp&s=8baa98b2bcf5cc22f268ccac852429234b10d... | |
On-Device AgentCPM-GUI is Now Open-Source | 1 | [removed] | 2025-05-14T06:13:54 | https://www.reddit.com/r/LocalLLaMA/comments/1km86at/ondevice_agentcpmgui_is_now_opensource/ | Lynncc6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km86at | false | null | t3_1km86at | /r/LocalLLaMA/comments/1km86at/ondevice_agentcpmgui_is_now_opensource/ | false | false | self | 1 | null |
Embrace the jank (2x5090) | 126 | I just got a second 5090 to add to my 4x3090 setup, as they have come down in price and are now available in my country.
Only to notice that the Gigabyte model is way too long for this mining rig.
ROPs are good luckily; these seem like later batches.
Cable temps look good but I have the 5090 power limited to 400w and th... | 2025-05-14T06:04:35 | https://www.reddit.com/gallery/1km81fb | bullerwins | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1km81fb | false | null | t3_1km81fb | /r/LocalLLaMA/comments/1km81fb/embrace_the_jank_2x5090/ | false | false | 126 | {'enabled': True, 'images': [{'id': 'E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo.jpeg?width=108&crop=smart&auto=webp&s=69b9488f65adf52b6694a6fa1da67ea19a56bbec', 'width': 108}, {'height': 288, 'url': '... | |
LM studio on remote | 1 | [removed] | 2025-05-14T06:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/1km805w/lm_studio_on_remote/ | HappyFaithlessness70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km805w | false | null | t3_1km805w | /r/LocalLLaMA/comments/1km805w/lm_studio_on_remote/ | false | false | self | 1 | null |
Is there such a thing as a RTX 4070 Ti Super with modded RAM? | 1 | [removed] | 2025-05-14T05:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1km7sga/is_there_such_a_thing_as_a_rtx_4070_ti_super_with/ | jair_r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km7sga | false | null | t3_1km7sga | /r/LocalLLaMA/comments/1km7sga/is_there_such_a_thing_as_a_rtx_4070_ti_super_with/ | false | false | self | 1 | null |
Getting low similarity scores on Gemini and OpenAI embedding models compared to Open Source Models | 4 | I was running multilingual-e5-large-instruct locally using Ollama for embedding. For most of the relevant queries the embedding was returning higher similarity scores (>0.75). But when I embedded the chunks and the query again with text-embedding-004 and text-embedding-3-large, both of them returned much lower similarity ... | 2025-05-14T05:25:15 | https://www.reddit.com/r/LocalLLaMA/comments/1km7fm3/getting_low_similarity_scores_on_gemini_and/ | Ok_Jacket3710 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km7fm3 | false | null | t3_1km7fm3 | /r/LocalLLaMA/comments/1km7fm3/getting_low_similarity_scores_on_gemini_and/ | false | false | self | 4 | null |
Getting low similarity scores on Gemini and OpenAI embedding models compared to Open Source Models | 1 | [removed] | 2025-05-14T05:22:58 | https://www.reddit.com/r/LocalLLaMA/comments/1km7eaf/getting_low_similarity_scores_on_gemini_and/ | Inevitable-Ad-2562 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km7eaf | false | null | t3_1km7eaf | /r/LocalLLaMA/comments/1km7eaf/getting_low_similarity_scores_on_gemini_and/ | false | false | self | 1 | null |
US issues worldwide restriction on using Huawei AI chips | 206 | 2025-05-14T05:17:14 | https://asia.nikkei.com/Spotlight/Huawei-crackdown/US-issues-worldwide-restriction-on-using-Huawei-AI-chips | fallingdowndizzyvr | asia.nikkei.com | 1970-01-01T00:00:00 | 0 | {} | 1km7azf | false | null | t3_1km7azf | /r/LocalLLaMA/comments/1km7azf/us_issues_worldwide_restriction_on_using_huawei/ | false | false | 206 | {'enabled': False, 'images': [{'id': 'soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y.jpeg?width=108&crop=smart&auto=webp&s=35f7607b8e88bc9fc8f74e56e9cea7680ac318a3', 'width': 108}, {'height': 108, 'url': '... | ||
KoboldCpp Smart Launcher: GUI & CLI tools for tensor offload auto-tuning (developed with AI) | 34 | # KoboldCpp Smart Launcher: Optimize your LLM performance with tensor offloading
[https://github.com/Viceman256/KoboldCpp-Smart-Launcher-v1.0.0](https://github.com/Viceman256/KoboldCpp-Smart-Launcher-v1.0.0)
**TL;DR:** I created a launcher (GUI + CLI) for KoboldCpp that helps you automatically find the best tensor of... | 2025-05-14T05:08:05 | https://www.reddit.com/r/LocalLLaMA/comments/1km75qq/koboldcpp_smart_launcher_gui_cli_tools_for_tensor/ | viceman256 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km75qq | false | null | t3_1km75qq | /r/LocalLLaMA/comments/1km75qq/koboldcpp_smart_launcher_gui_cli_tools_for_tensor/ | false | false | self | 34 | null |
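For readers curious what such an auto-tuner does under the hood, here is a rough sketch of the core loop: launch KoboldCpp with a candidate offload setting, benchmark it, and keep the fastest. The flag names (`--model`, `--gpulayers`, `--benchmark`), the assumption that `koboldcpp` is on PATH, and the wall-clock timing are all illustrative, not the launcher's actual implementation; check `koboldcpp --help` and the repo above for the real thing.

```python
import subprocess
import time

MODEL = "model.gguf"  # placeholder path to your GGUF model
best = None  # (gpu_layers, seconds)

for layers in (10, 20, 30, 40):  # candidate offload configurations
    start = time.time()
    result = subprocess.run(
        ["koboldcpp", "--model", MODEL, "--gpulayers", str(layers), "--benchmark"],
        capture_output=True, text=True,
    )
    elapsed = time.time() - start
    if result.returncode == 0 and (best is None or elapsed < best[1]):
        best = (layers, elapsed)

if best:
    print(f"Fastest config: --gpulayers {best[0]} ({best[1]:.1f}s)")
```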
Anyone here building an AI product in German? | 1 | [removed] | 2025-05-14T04:36:41 | https://www.reddit.com/r/LocalLLaMA/comments/1km6npk/anyone_here_building_an_ai_product_in_german/ | MikeTheSolist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km6npk | false | null | t3_1km6npk | /r/LocalLLaMA/comments/1km6npk/anyone_here_building_an_ai_product_in_german/ | false | false | self | 1 | null |
Aya Vision: Advancing the Frontier of Multilingual Multimodality | 46 | Abstract
>Building multimodal language models is fundamentally challenging: it requires aligning vision and language modalities, curating high-quality instruction data, and avoiding the degradation of existing text-only capabilities once vision is introduced. These difficulties are further magnified in the multilingu... | 2025-05-14T03:41:24 | https://arxiv.org/pdf/2505.08751 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1km5p7a | false | null | t3_1km5p7a | /r/LocalLLaMA/comments/1km5p7a/aya_vision_advancing_the_frontier_of_multilingual/ | false | false | default | 46 | null |
How many data should I include to retain reasoning capability of Qwen3 model? | 1 | [removed] | 2025-05-14T02:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1km490t/how_many_data_should_i_include_to_retain/ | LectureBig9815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km490t | false | null | t3_1km490t | /r/LocalLLaMA/comments/1km490t/how_many_data_should_i_include_to_retain/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'learGapv71y-mEsO5d1s6yaakzOuHLJMeLWtqxQ5I0A', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/learGapv71y-mEsO5d1s6yaakzOuHLJMeLWtqxQ5I0A.png?width=108&crop=smart&auto=webp&s=49a7725b34238fa2193c55d3faac5a238cd09caf', 'width': 108}, {'height': 94, 'url': 'ht... | |
How to tell Aider to use Qwen3 with the /nothink option? | 3 | I understand that I can start aider and tell it to use models hosted locally by Ollama.
`Ex. aider --model ollama/llama3`
That being said, I'm not sure how to tell aider to use the /nothink (or /no\_think) option.
Any suggestions? | 2025-05-14T02:15:05 | https://www.reddit.com/r/LocalLLaMA/comments/1km42k8/how_to_tell_aider_to_use_qwen3_with_the_nothink/ | jpummill2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km42k8 | false | null | t3_1km42k8 | /r/LocalLLaMA/comments/1km42k8/how_to_tell_aider_to_use_qwen3_with_the_nothink/ | false | false | self | 3 | null |
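One approach that should work: bake the soft switch into a dedicated Ollama model tag via a Modelfile, since Qwen3's `/no_think` is plain text the model reads from the prompt. A sketch, assuming an 8B tag (use whichever Qwen3 size you actually run):

```
# Modelfile: bakes Qwen3's soft switch into a new model tag
FROM qwen3:8b
SYSTEM """/no_think"""
```

Build it with `ollama create qwen3-nothink -f Modelfile`, then start `aider --model ollama/qwen3-nothink`. Alternatively, ending individual messages with `/no_think` toggles thinking off per message; whether Aider passes that suffix through verbatim is worth verifying.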
May 2025 Model IQ and Mac Benchmarks | 0 |
| Model | IQ (5-shot MMLU) | Mem (4-bit) | M3 Max 64 GB | M4 Pro 24 GB | M4 Max 64 / 128 GB |
|---|---|---|---|---|---|
| LLaMA 2 7 B | 45.8 % | 3.5 GB | ~ 90 t/s | ~ 54 t/s | ~ 110 t/s |
| LLaMA 2 13 B | 55.4 % | 6.5 GB | ~ 25 t/s | ~ 15 t/s | ~ 30 t/s |
| Mistral 7 B | 62.5 % | 3.0 GB | ~ 60 t/s | ~ 36 t/s | ~ 65 t/s |
| Mixtral 8×7 B (MoE) | 71.7 % | 22.5 GB | ~ 60 t/s | ~ 36 t/s | ~ 72 t/s |
Mixtral 8×22... | 2025-05-14T02:14:31 | https://www.reddit.com/r/LocalLLaMA/comments/1km426d/may_2025_model_iq_and_mac_benchmarks/ | FroyoCommercial627 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km426d | false | null | t3_1km426d | /r/LocalLLaMA/comments/1km426d/may_2025_model_iq_and_mac_benchmarks/ | false | false | self | 0 | null |
Hurdle-free web search tool for LLM | 8 | Hello everyone!
Given a Windows PC that can run an LLM (Qwen3, for example), is there a robust and easy way to let the model search for information on the web?
The ideal solution would be a tool like LM Studio that lets me talk to a model and have it search things for me.
Any advice or (preferably) a workin... | 2025-05-14T01:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/1km3mfr/hurdlefree_web_search_tool_for_llm/ | Southern_Notice9262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km3mfr | false | null | t3_1km3mfr | /r/LocalLLaMA/comments/1km3mfr/hurdlefree_web_search_tool_for_llm/ | false | false | self | 8 | null |
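In case no ready-made tool fits, a common DIY pattern is to expose a search function to the model through tool calling (recent Ollama and LM Studio builds support this to varying degrees). A minimal sketch using the `duckduckgo-search` package; the result keys (`title`, `href`, `body`) should be verified against the release you install:

```python
from duckduckgo_search import DDGS  # pip install duckduckgo-search

def web_search(query: str, max_results: int = 5) -> str:
    """Run a web search and format the hits for the model's context."""
    with DDGS() as ddgs:
        hits = ddgs.text(query, max_results=max_results)
    return "\n".join(f"- {h['title']} ({h['href']}): {h['body']}" for h in hits)

print(web_search("Qwen3 release notes"))
```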
Get Llama 3.2 vision to only output the text instead of solving the question | 1 | I am trying to get Llama 3.2 vision to do OCR on a PNG that contains a math equation. However, I can't seem to get it to output just the OCR text; instead, it tries to solve the equation (poorly). Is there a way I can get it to just output the text? I've tried various prompts but it doesn't seem to work. | 2025-05-14T01:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/1km3eh9/get_llama_32_vision_to_only_output_the_text/ | Turbulent-Week1136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km3eh9 | false | null | t3_1km3eh9 | /r/LocalLLaMA/comments/1km3eh9/get_llama_32_vision_to_only_output_the_text/ | false | false | self | 1 | null |
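A prompt-side trick that sometimes helps, sketched with the `ollama` Python client. The model tag, file path, and wording are assumptions, and there is no guarantee the model complies on every image:

```python
import ollama  # pip install ollama

response = ollama.chat(
    model="llama3.2-vision",  # assumed tag; use whatever vision model you pulled
    messages=[{
        "role": "user",
        "content": (
            "Transcribe the text in this image exactly as written. "
            "Output only the transcription. Do NOT solve, simplify, "
            "or evaluate anything."
        ),
        "images": ["equation.png"],  # placeholder path
    }],
)
print(response["message"]["content"])
```

If prompting alone keeps failing, a dedicated OCR model is usually more reliable than a general VLM for verbatim transcription.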
Has anyone created a fine tune or LORA for AutoHotkey V1 code? | 11 | All models I've tried so far suck bad at generating valid AutoHotkey code.
Has anyone found/made a model or lora that actually works? | 2025-05-14T01:10:09 | https://www.reddit.com/r/LocalLLaMA/comments/1km2ss6/has_anyone_created_a_fine_tune_or_lora_for/ | UsingThis4Questions | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km2ss6 | false | null | t3_1km2ss6 | /r/LocalLLaMA/comments/1km2ss6/has_anyone_created_a_fine_tune_or_lora_for/ | false | false | self | 11 | null |
Gemini 2.5 exp death. | 35 | Now that the free 2.5 exp is dead, what alternatives are you guys using for coding? 😞 (Free alternatives) | 2025-05-14T00:57:56 | https://www.reddit.com/r/LocalLLaMA/comments/1km2jyz/gemini_25_exp_death/ | brocolongo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km2jyz | false | null | t3_1km2jyz | /r/LocalLLaMA/comments/1km2jyz/gemini_25_exp_death/ | false | false | self | 35 | null |
Gemini 2.5 Pro, what happens when you cheat | 0 | What happens when you pretrain on an overwhelming amount of instruct data full of markdown to appeal to most users? The tokens of \*Asterisks some text goes here\* are so deeply embedded in the model that even a smart model like Gemini 2.5 Pro cannot stop using them.
https://preview.redd.it/q5z86ndkbn0f1.png?width=1709&form... | 2025-05-14T00:57:16 | https://www.reddit.com/r/LocalLLaMA/comments/1km2jhn/gemini_25_pro_what_happens_when_you_cheat/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km2jhn | false | null | t3_1km2jhn | /r/LocalLLaMA/comments/1km2jhn/gemini_25_pro_what_happens_when_you_cheat/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM.png?width=108&crop=smart&auto=webp&s=f2b140ec32072a6ce19a56e120c6540e9f0021ff', 'width': 108}, {'height': 152, 'url': 'h... | |
What recent local llm blown your mind? | 1 | [removed] | 2025-05-14T00:10:27 | https://www.reddit.com/r/LocalLLaMA/comments/1km1lvg/what_recent_local_llm_blown_your_mind/ | EmmaMartian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km1lvg | false | null | t3_1km1lvg | /r/LocalLLaMA/comments/1km1lvg/what_recent_local_llm_blown_your_mind/ | false | false | self | 1 | null |
Transferring internal state across agents. | 1 | [removed] | 2025-05-13T23:37:22 | https://www.reddit.com/r/LocalLLaMA/comments/1km0wte/transferring_internal_state_across_agents/ | Emergency-Piccolo584 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km0wte | false | null | t3_1km0wte | /r/LocalLLaMA/comments/1km0wte/transferring_internal_state_across_agents/ | false | false | self | 1 | null |
What are the best LLMs for NSFW hardcore roleplays for local PC that fits my specs? | 1 | [removed] | 2025-05-13T23:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/1km0vp3/what_are_the_best_llms_for_nsfw_hardcore/ | Samsim6699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1km0vp3 | false | null | t3_1km0vp3 | /r/LocalLLaMA/comments/1km0vp3/what_are_the_best_llms_for_nsfw_hardcore/ | false | false | nsfw | 1 | null |
What is ASMR in one word by phi4-reasoning :) | 0 | $ ollama run phi4-reasoning
\>>> What is ASMR in one word
<think>User asks "What is ASMR in one word" message. The assistant is a language model developed by Microsoft
named "Phi". It says "You are Phi, a language model developed by Microsoft." But then the user asked "What is ASMR
in one word." So what is ASMR? I... | 2025-05-13T22:50:38 | https://www.reddit.com/r/LocalLLaMA/comments/1klzwet/what_is_asmr_in_one_word_by_phi4reasoning/ | TruckUseful4423 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1klzwet | false | null | t3_1klzwet | /r/LocalLLaMA/comments/1klzwet/what_is_asmr_in_one_word_by_phi4reasoning/ | false | false | self | 0 | null |
BitNet-r1 8B & 32B preview — sub-1 B-token BitNet fine-tunes | 1 | [removed] | 2025-05-13T22:28:19 | https://www.reddit.com/r/LocalLLaMA/comments/1klzejt/bitnetr1_8b_32b_preview_sub1_btoken_bitnet/ | codys12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1klzejt | false | null | t3_1klzejt | /r/LocalLLaMA/comments/1klzejt/bitnetr1_8b_32b_preview_sub1_btoken_bitnet/ | false | false | self | 1 | null |
Finetunes? | 1 | [removed] | 2025-05-13T21:39:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kly91h/finetunes/ | BlueEye1814 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kly91h | false | null | t3_1kly91h | /r/LocalLLaMA/comments/1kly91h/finetunes/ | false | false | self | 1 | null |
LeBron mewing | 1 | [removed] | 2025-05-13T21:31:35 | https://v.redd.it/bf4zi15vam0f1 | LimpBackground8734 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kly2kz | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/bf4zi15vam0f1/DASHPlaylist.mpd?a=1749763910%2CMTVkMjQ1YWRiOWVhNmQ3N2UxNjY5Y2E5OTQ5MGY4MDkwZjUyNjQ5NWQyNmM4OGRlNWI2Njk0YWRmZDIzMWVlZQ%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/bf4zi15vam0f1/DASH_720.mp4?source=fallback', 'has... | t3_1kly2kz | /r/LocalLLaMA/comments/1kly2kz/lebron_mewing/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'azQzM2oxNXZhbTBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/azQzM2oxNXZhbTBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?width=108&crop=smart&format=pjpg&auto=webp&s=87434e030b55dc8ac96422bed26afc1d218e... | |
BitNet Finetunes of R1 Distills | 291 | My group recently discovered that you can finetune directly to ternary ({-1, 0, 1}) BitNet if you add an extra RMS Norm to the input of linear layers. We are releasing the preview of two models: bitnet-r1-llama-8b and bitnet-r1-qwen-32b. These models are <3GB and <10GB respectively.
We also have a PR out in HF trans... | 2025-05-13T21:12:14 | https://x.com/0xCodyS/status/1922077684948996229 | codys12 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1klxlbx | false | null | t3_1klxlbx | /r/LocalLLaMA/comments/1klxlbx/bitnet_finetunes_of_r1_distills/ | false | false | default | 291 | {'enabled': False, 'images': [{'id': 'kmD24WpPRvTFzgsoxK59i2FBo115KNddilUNMpkziwI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/DkDKsAS_zzAadrbN0EABgGOgbPBi8t0wwT1ePj0VWZI.jpg?width=108&crop=smart&auto=webp&s=f7e8dd284821c06c24c1fb34c18947d53d363c4e', 'width': 108}], 'source': {'height': 20... |
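For readers wondering what the described change looks like in code, here is a minimal PyTorch sketch of a linear layer with the extra input RMSNorm and ternary weight quantization via a straight-through estimator. This illustrates the idea as stated in the post, not the group's actual implementation (their PR has the real code); `nn.RMSNorm` needs PyTorch 2.4 or newer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinear(nn.Linear):
    """Linear layer with ternary {-1, 0, 1} weights and an extra RMSNorm
    on the input, per the post's description. Sketch only."""

    def __init__(self, in_features: int, out_features: int, bias: bool = False):
        super().__init__(in_features, out_features, bias=bias)
        self.norm = nn.RMSNorm(in_features)  # the extra input norm

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm(x)
        w = self.weight
        scale = w.abs().mean().clamp(min=1e-5)
        w_q = (w / scale).round().clamp(-1, 1) * scale  # ternary quantization
        # Straight-through estimator: quantized forward, full-precision backward.
        w_ste = w + (w_q - w).detach()
        return F.linear(x, w_ste, self.bias)
```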
Two RTX 6000 Pro Blackwell..what's it get you? | 17 | What would you all do if you had 192 GB of VRAM available on Blackwell hardware?
Is there anything it would open up that the 3090 stackers can't currently do?
What could it still not do?
Not just LLMs; think image/video work and anything else AI-adjacent. | 2025-05-13T21:11:56 | https://www.reddit.com/r/LocalLLaMA/comments/1klxl20/two_rtx_6000_pro_blackwellwhats_it_get_you/ | SteveRD1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1klxl20 | false | null | t3_1klxl20 | /r/LocalLLaMA/comments/1klxl20/two_rtx_6000_pro_blackwellwhats_it_get_you/ | false | false | self | 17 | null |
Real-time webcam demo with SmolVLM using llama.cpp | 2,137 | 2025-05-13T20:59:50 | https://v.redd.it/81evi7ud4m0f1 | dionisioalcaraz | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1klx9q2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/81evi7ud4m0f1/DASHPlaylist.mpd?a=1749762005%2CYmFjNGEwYzZmZjJkYTg3ZmI4NjUxZTI1YmI4ZmQyYzAxZDM0ODEyMzM3ZDUzMjBmNDRjZDJmNWJjOThlMzFiNQ%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/81evi7ud4m0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1klx9q2 | /r/LocalLLaMA/comments/1klx9q2/realtime_webcam_demo_with_smolvlm_using_llamacpp/ | false | false | 2,137 | {'enabled': False, 'images': [{'id': 'OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li', 'resolutions': [{'height': 119, 'url': 'https://external-preview.redd.it/OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li.png?width=108&crop=smart&format=pjpg&auto=webp&s=ab76bb6ffe065c520deeffc0bad86debf7e5... | ||
Debug Agent2Agent (A2A) without code - Open Source | 5 | 🔥 Streamline your A2A development workflow in one minute!
Elkar is an open-source tool providing a dedicated UI for debugging agent2agent communications.
It helps developers:
* **Simulate & test tasks:** Easily send and configure A2A tasks
* **Inspect payloads:** View messages and artifacts exchanged between agents... | 2025-05-13T20:22:00 | https://v.redd.it/6kt1esjfyl0f1 | Educational_Bus5043 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1klwbto | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6kt1esjfyl0f1/DASHPlaylist.mpd?a=1749759772%2COGYwOTc4OWJhOTYzMThkMGE0MzlmZTE0YTkyODc5NGQ5MmFkYmE1Y2Q3ODVhNjZiZWIzMmQ4Y2I2MzZmZWZjMA%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/6kt1esjfyl0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1klwbto | /r/LocalLLaMA/comments/1klwbto/debug_agent2agent_a2a_without_code_open_source/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh.png?width=108&crop=smart&format=pjpg&auto=webp&s=2859be516ee235072273e3651e2cbc3439d17... |
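For anyone who has not seen the protocol on the wire, here is a rough sketch of an A2A task submission as a JSON-RPC 2.0 request, built from an early revision of the public spec. The method name, field names, and endpoint are assumptions that may differ in the current schema; verify against the agent card of the server you are debugging:

```python
import json
import urllib.request
import uuid

# Hypothetical A2A "tasks/send" request; field names follow an early spec draft.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # task id
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "ping"}],
        },
    },
}
request = urllib.request.Request(
    "http://localhost:10000/",  # placeholder agent endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(request).read().decode())
```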