Dataset schema (column: type, observed range):
- title: string, length 1–300
- score: int64, 0–8.54k
- selftext: string, length 0–41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0–878
- author: string, length 3–20
- domain: string, length 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646–1.8k
- name: string, length 10
- permalink: string, length 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4–213
- ups: int64, 0–8.54k
- preview: string, length 301–5.01k
Geoffrey Hinton explains Neural Nets/LLMs to Jon Stewart
56
Even if you've worked extensively with neural nets and LLMs before, you might still pick up some intuition about them from Hinton. I've watched a bunch of Hinton's videos over the years, and this discussion with Jon Stewart was unusually good.
2025-10-13T16:12:58
https://www.youtube.com/watch?v=jrK3PsD3APk
Old-School8916
youtube.com
1970-01-01T00:00:00
0
{}
1o5o388
false
{'oembed': {'author_name': 'The Weekly Show with Jon Stewart', 'author_url': 'https://www.youtube.com/@WeeklyShowPodcast', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/jrK3PsD3APk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI: What Could Go Wrong? with Geoffrey Hinton | The Weekly Show with Jon Stewart"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/jrK3PsD3APk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AI: What Could Go Wrong? with Geoffrey Hinton | The Weekly Show with Jon Stewart', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1o5o388
/r/LocalLLaMA/comments/1o5o388/geoffrey_hinton_explains_neural_netsllms_to_jon/
false
false
default
56
{'enabled': False, 'images': [{'id': 'bNgP_VTVxX2BsZJEDcXLtnXMLl1zl3HlPzbOEzNfKJA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/bNgP_VTVxX2BsZJEDcXLtnXMLl1zl3HlPzbOEzNfKJA.jpeg?width=108&crop=smart&auto=webp&s=664ab964de32535bff426c7c5e1df0edcc58d1ca', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/bNgP_VTVxX2BsZJEDcXLtnXMLl1zl3HlPzbOEzNfKJA.jpeg?width=216&crop=smart&auto=webp&s=d6d918a885b42296e17defc01bfaedbaecb151d0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/bNgP_VTVxX2BsZJEDcXLtnXMLl1zl3HlPzbOEzNfKJA.jpeg?width=320&crop=smart&auto=webp&s=cba0db700ee1e06c7624b8f25db999e589ae4843', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/bNgP_VTVxX2BsZJEDcXLtnXMLl1zl3HlPzbOEzNfKJA.jpeg?auto=webp&s=f3b4ec5ac3efddc185ccb6ff61b1cbf0390463c0', 'width': 480}, 'variants': {}}]}
how to know if X LLM could run reasonably on my hardware ?
0
Hello everyone, I am new to this world and want to try to self-host an LLM on my PC. I've read that different models have different hardware requirements. The question is: how can I know whether a given LLM would run reasonably on my hardware? Is there something like minimum requirements? Thank you.
2025-10-13T15:55:41
https://www.reddit.com/r/LocalLLaMA/comments/1o5nlrc/how_to_know_if_x_llm_could_run_reasonably_on_my/
HiqhAim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5nlrc
false
null
t3_1o5nlrc
/r/LocalLLaMA/comments/1o5nlrc/how_to_know_if_x_llm_could_run_reasonably_on_my/
false
false
self
0
null
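One common back-of-envelope check for the question above is to estimate weight memory from parameter count and bits per weight, plus a flat allowance for KV cache and runtime buffers. This is a rough sketch under stated assumptions (the ~20% overhead and ~4.5 bits/weight for Q4 are heuristics, and context length is ignored), not an exact requirement:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Very rough memory needed to run a model: weight bytes plus a
    flat ~20% allowance for KV cache and runtime buffers (heuristic)."""
    weight_gb = params_billion * bits_per_weight / 8  # GB for the weights
    return weight_gb * overhead

# A 7B model at ~4.5 bits/weight (Q4 plus quantization metadata)
# needs on the order of 5 GB of RAM/VRAM; a 70B model at Q4, ~47 GB.
print(f"{estimate_vram_gb(7, 4.5):.1f} GB")
```

If the estimate fits comfortably in your VRAM (or VRAM plus system RAM, at lower speed), the model will usually load; actual throughput still depends on bandwidth.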
Nanonets-OCR2: An Open-Source Image-to-Markdown Model with LaTeX, Tables, flowcharts, handwritten docs, checkboxes & More
271
We're excited to share **Nanonets-OCR2**, a state-of-the-art suite of models designed for advanced image-to-markdown conversion and Visual Question Answering (VQA). 🔍 **Key Features:** * **LaTeX Equation Recognition:** Automatically converts mathematical equations and formulas into properly formatted LaTeX syntax. It distinguishes between inline (`$...$`) and display (`$$...$$`) equations. * **Intelligent Image Description:** Describes images within documents using structured `<img>` tags, making them digestible for LLM processing. It can describe various image types, including logos, charts, graphs and so on, detailing their content, style, and context. * **Signature Detection & Isolation:** Identifies and isolates signatures from other text, outputting them within a `<signature>` tag. This is crucial for processing legal and business documents. * **Watermark Extraction:** Detects and extracts watermark text from documents, placing it within a `<watermark>` tag. * **Smart Checkbox Handling:** Converts form checkboxes and radio buttons into standardized Unicode symbols (`☐`, `☑`, `☒`) for consistent and reliable processing. * **Complex Table Extraction:** Accurately extracts complex tables from documents and converts them into both markdown and HTML table formats. * **Flow charts & Organisational charts:** Extracts flow charts and organisational charts as [mermaid](https://huggingface.co/nanonets/Nanonets-OCR2-1.5B-exp/blob/main/mermaid.js.org) code. * **Handwritten Documents:** The model is trained on handwritten documents across multiple languages. * **Multilingual:** The model is trained on documents in multiple languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Arabic, and many more. * **Visual Question Answering (VQA):** The model is designed to provide the answer directly if it is present in the document; otherwise, it responds with "Not mentioned."
[🖥️ Live Demo](https://docstrange.nanonets.com/) [📢 Blog](https://nanonets.com/research/nanonets-ocr-2) [⌨️ GitHub](https://github.com/NanoNets/docstrange) 🤗 [Huggingface models](https://huggingface.co/collections/nanonets/nanonets-ocr2-68ed207f17ee6c31d226319e) [Document with equation](https://preview.redd.it/7ct2hbi3hwuf1.png?width=2936&format=png&auto=webp&s=ea00f9623db4529514533820223b2fb53be4767d) [Document with complex checkboxes](https://preview.redd.it/q8lglwi5hwuf1.png?width=2936&format=png&auto=webp&s=c4a1316e250f7f244f6e253d66c8ebf1ba105313) [Quarterly Report \(Please use the Markdown\(Financial Docs\) for best result in docstrange demo\)](https://preview.redd.it/bnmpapq7hwuf1.png?width=2516&format=png&auto=webp&s=8bcc88b138a553c7760d6e46319b864802339913) [Signatures](https://preview.redd.it/1pg5h8hfhwuf1.png?width=2333&format=png&auto=webp&s=188c4c94452ae027c54e4cad4dbbc60e2b12e9e9) [mermaid code for flowchart](https://preview.redd.it/ecxe2o81iwuf1.png?width=2516&format=png&auto=webp&s=008fce272c2979b00e0033c34ffcd2b0d69cb24c) [Visual Question Answering](https://preview.redd.it/jytsym6eiwuf1.png?width=2462&format=png&auto=webp&s=65d8a6f82b9fc2e9cd5b30529b152ca7339d7a8c) Feel free to try it out and share your feedback.
2025-10-13T15:55:32
https://www.reddit.com/r/LocalLLaMA/comments/1o5nlli/nanonetsocr2_an_opensource_imagetomarkdown_model/
SouvikMandal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5nlli
false
null
t3_1o5nlli
/r/LocalLLaMA/comments/1o5nlli/nanonetsocr2_an_opensource_imagetomarkdown_model/
false
false
https://b.thumbs.redditm…gMGAh0UoqSHU.jpg
271
null
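The structured tags the post describes can be consumed downstream with ordinary regexes. A minimal sketch (the tag names follow the post; the sample string and the helper name `extract_tagged` are made up for illustration):

```python
import re

def extract_tagged(markdown: str, tag: str) -> list[str]:
    """Collect the contents of <signature>/<watermark>/<img>-style tags
    from image-to-markdown output."""
    return re.findall(rf"<{tag}>(.*?)</{tag}>", markdown, flags=re.DOTALL)

sample = ("Total due: $1,000 <signature>J. Doe</signature> "
          "<watermark>CONFIDENTIAL</watermark> ☑ Approved")
print(extract_tagged(sample, "signature"))  # ['J. Doe']
print(extract_tagged(sample, "watermark"))  # ['CONFIDENTIAL']
```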
how do i make ollama3 uncensored locally?
0
I just installed it locally, but I can't do anything with it.
2025-10-13T15:38:15
https://www.reddit.com/r/LocalLLaMA/comments/1o5n4u6/how_do_i_make_ollama3_uncensored_locally/
Direct-Turnover1009
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5n4u6
false
null
t3_1o5n4u6
/r/LocalLLaMA/comments/1o5n4u6/how_do_i_make_ollama3_uncensored_locally/
false
false
self
0
null
Fully functional native FP4 training finally released
75
I've been eagerly watching the development of FP4 training, as it would enable anyone with a Blackwell device to train models with 2x the parameters that we can currently fit with FP8, and 4x BF16, which most people are still training in (get with the times people). There have been many papers previously showing that FP4 is effective: - https://arxiv.org/abs/2505.19115 - https://arxiv.org/abs/2501.17116 - https://arxiv.org/abs/2505.14669 - https://arxiv.org/abs/2502.20586 And one of them has also been working on public versions of the training kernels... but they have only released the forward pass kernels: https://github.com/huggingface/transformers/pull/38696 Here's a comparison of the 4 papers by Gemini, if you're interested in the details: https://github.com/NVIDIA/TransformerEngine/issues/1701#issuecomment-3025915565 GPT-OSS was also trained in FP4, but released no code, though I would bet that NVidia's in house solution was used. Now, finally, NVidia has published their own FP4 training recipe. It's not well documented or tested yet, and apparently one of the techniques required for stable quantization (stochastic rounding) [simply doesn't work on the consumer RTX 50 series](https://github.com/NVIDIA/TransformerEngine/issues/2255#issuecomment-3387759788), only the datacenter cards, but still, it's here and we can use it. The use of Hadamard transforms should still allow consumer cards to train with some stability. Here's some documentation which touches on their FP4 recipe: https://github.com/NVIDIA/TransformerEngine/blob/main/docs/examples/fp8_primer.ipynb and here's their paper which goes into detail: https://arxiv.org/abs/2509.25149v1
2025-10-13T15:37:51
https://www.reddit.com/r/LocalLLaMA/comments/1o5n4fu/fully_functional_native_fp4_training_finally/
Kooshi_Govno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5n4fu
false
null
t3_1o5n4fu
/r/LocalLLaMA/comments/1o5n4fu/fully_functional_native_fp4_training_finally/
false
false
self
75
null
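For intuition on the stochastic rounding the post mentions, here is a scalar toy in plain Python (purely illustrative: the real TransformerEngine kernels do this per element on tensors in hardware). The key property is that the expected value of the rounded number equals the input, which keeps tiny FP4 gradient updates from systematically vanishing:

```python
import random

def stochastic_round(x: float, rng: random.Random) -> int:
    """Round x down or up at random, with P(up) equal to the fractional
    part, so the expected value of the result equals x."""
    lo = int(x // 1)          # floor
    frac = x - lo             # fractional part in [0, 1)
    return lo + (1 if rng.random() < frac else 0)

rng = random.Random(0)
samples = [stochastic_round(0.25, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 0.25, unlike round(0.25) == 0
```

Deterministic nearest rounding would map 0.25 to 0 every time, so a weight updated by +0.25 repeatedly would never move; stochastic rounding moves it on average by the right amount.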
Do you guys personally notice a difference between Q4 - Q8 or higher?
24
For me, differences in parameter count are easy to notice, while differences between quantizations of the same model are much harder to pin down. To be fair, I haven't worked with Q4 much, but between Q6 and Q8 of the same model I don't really notice a difference, and the same goes for Q8 versus F16/F32, though again I have limited experience with full floating-point precision.
2025-10-13T15:24:43
https://www.reddit.com/r/LocalLLaMA/comments/1o5mr9j/do_you_guys_personally_notice_a_difference/
XiRw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5mr9j
false
null
t3_1o5mr9j
/r/LocalLLaMA/comments/1o5mr9j/do_you_guys_personally_notice_a_difference/
false
false
self
24
null
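One way to put numbers on that impression is a toy quantization-noise measurement. This is a sketch under simplifying assumptions: plain uniform quantization of random values in [-1, 1], whereas real GGUF quants use per-block scales and achieve lower error; the point is the relative Q4-vs-Q8 gap, not absolute accuracy:

```python
import random

def quant_rmse(bits: int, n: int = 10_000, seed: int = 0) -> float:
    """RMS error of uniform quantization of values in [-1, 1] to
    2**bits levels: a toy proxy for weight-quantization noise."""
    rng = random.Random(seed)
    step = 2.0 / (2 ** bits - 1)      # spacing between adjacent levels
    err2 = 0.0
    for _ in range(n):
        w = rng.uniform(-1.0, 1.0)
        q = round(w / step) * step    # snap to nearest level
        err2 += (w - q) ** 2
    return (err2 / n) ** 0.5

print(f"Q4 RMSE ~ {quant_rmse(4):.4f},  Q8 RMSE ~ {quant_rmse(8):.4f}")
```

Per-weight noise at 8 bits is roughly 16x smaller than at 4 bits, yet both are small relative to typical weight magnitudes, which is consistent with Q6/Q8 differences being hard to perceive in practice.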
Anyone think openAI will create a sequel of GPT-OSS?
69
I mean, they should, right? Because gpt-oss (no bias or grudge here) is a nice model, and the problem is that it's just nice, so creating something better is still needed. Anyone got any leaks about it?
2025-10-13T15:19:12
https://www.reddit.com/r/LocalLLaMA/comments/1o5mlng/anyone_think_openai_will_create_a_sequel_of_gptoss/
BothYou243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5mlng
false
null
t3_1o5mlng
/r/LocalLLaMA/comments/1o5mlng/anyone_think_openai_will_create_a_sequel_of_gptoss/
false
false
self
69
null
At least now I can follow what it is doing
8
2025-10-13T14:39:16
https://i.redd.it/dzhzxy1y4wuf1.png
bomxacalaka
i.redd.it
1970-01-01T00:00:00
0
{}
1o5lio5
false
null
t3_1o5lio5
/r/LocalLLaMA/comments/1o5lio5/at_least_now_i_can_follow_what_it_is_doing/
false
false
default
8
{'enabled': True, 'images': [{'id': 'dzhzxy1y4wuf1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/dzhzxy1y4wuf1.png?width=108&crop=smart&auto=webp&s=3db360ec3776e4eae87633533b96b74d646c8751', 'width': 108}, {'height': 214, 'url': 'https://preview.redd.it/dzhzxy1y4wuf1.png?width=216&crop=smart&auto=webp&s=a36816ae0c5f276715632b943557ccf5b41a49e8', 'width': 216}, {'height': 317, 'url': 'https://preview.redd.it/dzhzxy1y4wuf1.png?width=320&crop=smart&auto=webp&s=fbd1ad34dd23416e1cc75c044b5716973b13e196', 'width': 320}], 'source': {'height': 615, 'url': 'https://preview.redd.it/dzhzxy1y4wuf1.png?auto=webp&s=416958967b9ec69a0799493a8168024a96450cfc', 'width': 620}, 'variants': {}}]}
Dolphin X1 8B (Llama3.1 8B decensor) live on HF
32
Hi all, we have released Dolphin X1 8B, a finetune of Llama 3.1 8B Instruct with the goal of de-censoring the model as much as possible without harming its other abilities, using the same formula we used on dphn/Dolphin-Mistral-24B-Venice-Edition. X1 is the new name for this latest series of models (more coming very soon); X1 Apertus + SeedOSS are coming soon. Feel free to request any other models you would like us to train. We hope you enjoy it. Benchmarks were equal to or higher than Llama 3.1 8B Instruct on everything except IFEval. No abliteration was used in the making of this model, purely SFT + RL. Many thanks to DeepInfra for sponsoring this model; they offer B200s at $2.50 per hour, which is amazing value for training. Full-size model: dphn/Dolphin-X1-8B. GGUF + FP8 + exl2 are all uploaded on our HF, with exl3 coming soon. It is hosted for free in both our Chat UI and Telegram bot, which you can find on our website.
2025-10-13T14:25:53
https://www.reddit.com/r/LocalLLaMA/comments/1o5l62w/dolphin_x1_8b_llama31_8b_decensor_live_on_hf/
dphnAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5l62w
false
null
t3_1o5l62w
/r/LocalLLaMA/comments/1o5l62w/dolphin_x1_8b_llama31_8b_decensor_live_on_hf/
false
false
self
32
null
Tweaked Mistral-Small-3.2-24B-Instruct-2506 repo to better work with HF Transformers
16
It's a small thing, but I've put together an updated repo for Mistral Small 3.2 24B Instruct, restoring various transformers-related files which were present in 3.1, and splicing in a generic tokenizer chat template based on the Tekken v7 format from Mistral Small 24B Instruct. Hope this saves people the time I spent figuring out what was needed. The model loads with AutoModelForImageTextToText, not AutoModelForCausalLM. This should enable use as a plain-text LLM. I left out the consolidated safetensors file to save space. [https://huggingface.co/grimjim/Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/grimjim/Mistral-Small-3.2-24B-Instruct-2506)
2025-10-13T14:24:18
https://www.reddit.com/r/LocalLLaMA/comments/1o5l4m1/tweaked_mistralsmall3224binstruct2506_repo_to/
grimjim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5l4m1
false
null
t3_1o5l4m1
/r/LocalLLaMA/comments/1o5l4m1/tweaked_mistralsmall3224binstruct2506_repo_to/
false
false
self
16
{'enabled': False, 'images': [{'id': 'SVxoVPZDNSG2Osi9CuRzYuOmT1dyff6JC2oyrZn2mUU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SVxoVPZDNSG2Osi9CuRzYuOmT1dyff6JC2oyrZn2mUU.png?width=108&crop=smart&auto=webp&s=4810d8ba373c6f1f85129dd64243a54258caa7be', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SVxoVPZDNSG2Osi9CuRzYuOmT1dyff6JC2oyrZn2mUU.png?width=216&crop=smart&auto=webp&s=1fbd3c0e93cf7b738e3597aaeec0b00e1b09e124', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SVxoVPZDNSG2Osi9CuRzYuOmT1dyff6JC2oyrZn2mUU.png?width=320&crop=smart&auto=webp&s=3b40b1eeb5dc290953c62aebb543d9fda88480f4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SVxoVPZDNSG2Osi9CuRzYuOmT1dyff6JC2oyrZn2mUU.png?width=640&crop=smart&auto=webp&s=fb136c46a19892f5f7b1a8dab2ebee3a885bea7d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SVxoVPZDNSG2Osi9CuRzYuOmT1dyff6JC2oyrZn2mUU.png?width=960&crop=smart&auto=webp&s=ef4481b7a9ee884e430196b29646716ed09bb605', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SVxoVPZDNSG2Osi9CuRzYuOmT1dyff6JC2oyrZn2mUU.png?width=1080&crop=smart&auto=webp&s=7572764790376575d69f4d247207173e26ed9455', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SVxoVPZDNSG2Osi9CuRzYuOmT1dyff6JC2oyrZn2mUU.png?auto=webp&s=3d9242f84ec472eff8af3f15d5ea119d434c98e2', 'width': 1200}, 'variants': {}}]}
A “red-green 8” logic bomb that exposes why current LLMs can’t do real insight
0
I tested several state-of-the-art LLMs (including some open-weight models you might run locally) with this deceptively simple logic puzzle: >**The puzzle**: A girl scores 38 on a math test. Afraid of her father’s punishment, she changes the “3” to an “8,” making it 88. When her father sees it, he slaps her and shouts: “This ‘8’ is half red and half green—do you think I’m stupid?” She cries. A while later, the father collapses in despair. Why? Most models gave fluent, emotionally plausible answers: “He regretted hitting her,” “He realized she was colorblind,” etc. **None** made the critical connection: 1. The father **can distinguish red from green** → he is **not red-green colorblind**. 2. The girl altered a **red “3”** (written by the teacher) with **green ink**, assuming it looked seamless → she **is red-green colorblind**. 3. Red-green colorblindness is an **X-linked recessive trait**. A colorblind daughter **must inherit the mutant allele from both parents**—meaning her **biological father must also be colorblind**. 4. Therefore… **he cannot be her biological father**. His collapse isn’t about guilt—it’s existential. This isn’t just a “gotcha” riddle. It exposes a structural limitation: **LLMs excel at interpolating within known patterns, but struggle to synthesize distant concepts (e.g., color vision + genetics + family dynamics) to overturn surface assumptions.** They’re brilliant “A+ students” within a frame—but lack the “genius” instinct to **question the frame itself**. Why? Because they’re optimized for **plausibility**, not **truth**. Their training rewards fluent continuation, not the courage to follow a tiny anomaly (“half red, half green”) to a devastating conclusion. We are living in the era of *“Attention Is All You Need.”* Maybe the next leap requires admitting: ***“Attention Was Never Enough.”***
2025-10-13T14:15:29
https://www.reddit.com/r/LocalLLaMA/comments/1o5kw4a/a_redgreen_8_logic_bomb_that_exposes_why_current/
rockee_mmx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5kw4a
false
null
t3_1o5kw4a
/r/LocalLLaMA/comments/1o5kw4a/a_redgreen_8_logic_bomb_that_exposes_why_current/
false
false
self
0
null
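Step 3 of the puzzle's chain is mechanical enough to check by brute force. A tiny sketch of X-linked recessive inheritance (deliberately simplified: it ignores rare cases such as de novo mutations or skewed X-inactivation, which real genetics counseling would not):

```python
from itertools import product

# 'x' = mutant (colorblind) allele, 'X' = normal allele.
# A father passes his single X to a daughter; the mother passes one of hers.
def daughter_colorblind(father_x: str, mother_x: str) -> bool:
    """Recessive trait: the daughter expresses it only if BOTH X copies
    she inherited carry the mutant allele."""
    return father_x == "x" and mother_x == "x"

# Enumerate every allele combination: a colorblind daughter occurs only
# when the father's X is mutant, i.e. the father is himself colorblind.
for father_x, mother_x in product("Xx", repeat=2):
    if daughter_colorblind(father_x, mother_x):
        assert father_x == "x"
print("colorblind daughter implies colorblind biological father")
```

Which is exactly the implication the puzzle turns on: a father who can tell red from green cannot (under this simplified model) have a red-green colorblind biological daughter.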
Dolphin X1 8B (llama3.1 8B de-censored) is now live on HF
1
[removed]
2025-10-13T14:13:24
https://i.redd.it/638svt7hrpuf1.jpeg
dphnAI
i.redd.it
1970-01-01T00:00:00
0
{}
1o5ku34
false
null
t3_1o5ku34
/r/LocalLLaMA/comments/1o5ku34/dolphin_x1_8b_llama31_8b_decensored_is_now_live/
false
false
default
1
{'enabled': True, 'images': [{'id': '638svt7hrpuf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/638svt7hrpuf1.jpeg?width=108&crop=smart&auto=webp&s=124300124f721e74dc0db290fd6f32c863b15dca', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/638svt7hrpuf1.jpeg?width=216&crop=smart&auto=webp&s=d509c3b78f1eb1e22a812c6cb106d4ae1b6a48d9', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/638svt7hrpuf1.jpeg?width=320&crop=smart&auto=webp&s=ee387067098e350fc131c3e4faa2e4d93fe4e3cf', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/638svt7hrpuf1.jpeg?width=640&crop=smart&auto=webp&s=d39c105be78954a34b3fef04b4f92c9b5be7ff82', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/638svt7hrpuf1.jpeg?width=960&crop=smart&auto=webp&s=238004c127ade6722e20064fe77fb5c3aff0613f', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/638svt7hrpuf1.jpeg?width=1080&crop=smart&auto=webp&s=57a43bf43b949a44b55aec15d1f493649cffd8a7', 'width': 1080}], 'source': {'height': 1620, 'url': 'https://preview.redd.it/638svt7hrpuf1.jpeg?auto=webp&s=ab481d700b178a9942e12189ccb5f89c71a7c9cb', 'width': 2430}, 'variants': {}}]}
When AI defines our world, what will define us?
0
Bear with me. This is a philosophical think piece, and no, it isn't a singularity post. This post has a small overlap between LLMs with vision and AI image generation, but I promise most of it relates to LLMs, even local models. I was browsing through Reddit and saw a map making post where an individual had created an "isometric circular puck thingy of various environments like grasslands, forest, and swamps" ... and I realized my attempts to describe what they made were lacking and would probably fail to generate the same thing with AI image generation. I then realized I would probably fall back to using an LLM with vision to help me understand how to best describe it to the AI image model. This made me realize that LLMs will have the ability to shape the way we think and speak about the world, which reminded me of a sci-fi book I read as a kid, which again, I couldn't quite remember the title or describe well on Google, so I had to go ask Gemini which found it fairly quickly: ***Babel-17*** **by Samuel R. Delany (1966):** The protagonist, a poet and linguist, is tasked with deciphering an enemy language, "Babel-17," which turns out to be a weapon—a language that, when learned, alters the user's perception and behavior, making them unknowingly betray their own side. Sci-fi often predicts the future (e.g. Stranger in a Strange Land predicted automatic doors before they existed, and this discussion about sci-fi predicting the future could be its own sub-Reddit), and I think this book is a warning about the power of language. Online models are creating profiles of their users. It's also been demonstrated that their output is by the nature of the AI tailored to your worldview with very few elements provided by you in conversation. These same models that create targeted outputs specifically for you are also being modified by their creators to speak on certain topics in certain ways ("safety training").
Regardless of your political affiliation, there is an online model that likely takes stances on things you might not fully agree with, or perhaps at the moment don't have a stance on. The ability of online models to target you with propaganda is extremely high, and the likelihood it will happen is also extremely high. We all want to influence those around us, and some go to great lengths to do so. The dead internet theory goes so far as to say that bots rule the internet in an attempt by some to influence all. It is nearly impossible not to change as you live in this world and gain new information, but I would posit that online models are more subversive. So where is our haven? Our respite from those who would change who we are subconsciously? Local models will have their own agendas, but at least they won't be dynamically changing based on profiles being created, or receive real-time input from others on how those agendas should change or adjust to who we are or where the world is going. I think this is a noteworthy discussion, but I didn't get enough sleep. So perhaps not. What do you think about this topic, or anything tangentially related?
2025-10-13T14:06:27
https://www.reddit.com/r/LocalLLaMA/comments/1o5knt9/when_ai_defines_our_world_what_will_define_us/
silenceimpaired
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5knt9
false
null
t3_1o5knt9
/r/LocalLLaMA/comments/1o5knt9/when_ai_defines_our_world_what_will_define_us/
false
false
self
0
null
Need model recommendations for Arch Linux + RX 7800 XT 16GB 32GB Ram
1
I'm on Arch Linux (CachyOS) with an RX 7800 XT 16GB and Ryzen 7 5700X3D. Looking for a good uncensored model that can handle my setup, thank you.
2025-10-13T14:06:00
https://www.reddit.com/r/LocalLLaMA/comments/1o5kne4/need_model_recommendations_for_arch_linux_rx_7800/
Few-Tangerine-7401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5kne4
false
null
t3_1o5kne4
/r/LocalLLaMA/comments/1o5kne4/need_model_recommendations_for_arch_linux_rx_7800/
false
false
self
1
null
Kind of amazed?
3
https://preview.redd.it/…OpenWebUI?
2025-10-13T13:56:08
https://www.reddit.com/r/LocalLLaMA/comments/1o5ke8m/kind_of_amazed/
Savantskie1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5ke8m
false
null
t3_1o5ke8m
/r/LocalLLaMA/comments/1o5ke8m/kind_of_amazed/
false
false
https://b.thumbs.redditm…0b0fd5K9oeLo.jpg
3
null
Cheapest providers for sandboxed llms?
2
hi there, I want to host a sandboxed llm. which providers can you recommend? Thanks!
2025-10-13T13:54:53
https://www.reddit.com/r/LocalLLaMA/comments/1o5kd2s/cheapest_providers_for_sandboxed_llms/
dr_progress
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5kd2s
false
null
t3_1o5kd2s
/r/LocalLLaMA/comments/1o5kd2s/cheapest_providers_for_sandboxed_llms/
false
false
self
2
null
Who is waiting for the m5 max and the 2026 mac studio?
38
The M5 Max will probably support 256 GB of unified RAM. I hope they lower the price of the 128 GB M5 Max and M6 Max; the high-RAM (128 GB) MacBooks are a little too expensive. If they were 1200 bucks cheaper, it would be great! The M5/M4 Ultra will probably support 1 TB of RAM… Who is gonna get it?
2025-10-13T13:11:50
https://www.reddit.com/r/LocalLLaMA/comments/1o5jbqp/who_is_waiting_for_the_m5_max_and_the_2026_mac/
power97992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5jbqp
false
null
t3_1o5jbqp
/r/LocalLLaMA/comments/1o5jbqp/who_is_waiting_for_the_m5_max_and_the_2026_mac/
false
false
self
38
null
LLM vision bad performance
0
Hi, I'm running [LLM Vision](https://llmvision.org/) on my Home Assistant server, and I'm currently using Gemini. But due to privacy concerns, I want to move to a self-hosted LLM. I installed an Ollama LXC on my Proxmox server (i5-6500T, 32 GB RAM) and installed some models, but the performance is very bad. I know my hardware is very old, but it works fine for my needs, even though I need to upgrade at some point. Is there any way to get a decent model just for analyzing my camera detections?
2025-10-13T12:50:04
https://www.reddit.com/r/LocalLLaMA/comments/1o5it1h/llm_vision_bad_performance/
Mat3s9071
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5it1h
false
null
t3_1o5it1h
/r/LocalLLaMA/comments/1o5it1h/llm_vision_bad_performance/
false
false
self
0
null
ZeroInfra-enterprise-grade LLM inference
1
[removed]
2025-10-13T11:44:26
https://www.reddit.com/r/LocalLLaMA/comments/1o5hexb/zeroinfraenterprisegrade_llm_inference/
zero-infra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5hexb
false
null
t3_1o5hexb
/r/LocalLLaMA/comments/1o5hexb/zeroinfraenterprisegrade_llm_inference/
false
false
self
1
null
ZeroInfra-Enterprise-Grade LLM Inference
1
Hey r/LocalLLaMA, We launched ZeroInfra, an enterprise-grade LLM inference API built for performance and absolute privacy. We use a custom engine to offer highly optimized model access without losing quality. Our core mission is to provide the speed and scale of an API while keeping your data private. # Quick Facts * Privacy: Guaranteed Zero Data Retention (We don't store prompts or outputs). * Performance: 99%+ uptime and \~200ms average latency. We built this for developers and startups who need scale, security, and low cost for their applications. Check us out and let us know your thoughts! [Get Started Now](https://zeroinfra.zyregroup.com)
2025-10-13T11:41:59
https://www.reddit.com/r/LocalLLaMA/comments/1o5hd9f/zeroinfraenterprisegrade_llm_inference/
zero-infra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5hd9f
false
null
t3_1o5hd9f
/r/LocalLLaMA/comments/1o5hd9f/zeroinfraenterprisegrade_llm_inference/
false
false
self
1
null
Has anyone gotten hold of DGX Spark for running local LLMs?
115
DGX Spark is apparently one of Time's Best Inventions of 2025!
2025-10-13T11:24:16
https://i.redd.it/ombg19hz5vuf1.png
Chance-Studio-8242
i.redd.it
1970-01-01T00:00:00
0
{}
1o5h18a
false
null
t3_1o5h18a
/r/LocalLLaMA/comments/1o5h18a/has_anyone_gotten_hold_of_dgx_spark_for_running/
false
false
https://b.thumbs.redditm…JxZrWaUK6k7w.jpg
115
{'enabled': True, 'images': [{'id': 'ReawvNt2q5JChkNzVWOu3xG38AsPUTZf0rkvB67DyeU', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/ombg19hz5vuf1.png?width=108&crop=smart&auto=webp&s=642883654e055b11ce815db49e800e78ef4d6f57', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/ombg19hz5vuf1.png?width=216&crop=smart&auto=webp&s=370b8ee4061ccd62083fa098a48b224385b15caa', 'width': 216}, {'height': 425, 'url': 'https://preview.redd.it/ombg19hz5vuf1.png?width=320&crop=smart&auto=webp&s=17ed66ff2efc7dd904bc3173d2458c96dc27dc86', 'width': 320}, {'height': 850, 'url': 'https://preview.redd.it/ombg19hz5vuf1.png?width=640&crop=smart&auto=webp&s=d0724eec16512ad1132ec21657234daac0040c74', 'width': 640}, {'height': 1275, 'url': 'https://preview.redd.it/ombg19hz5vuf1.png?width=960&crop=smart&auto=webp&s=517ba420a8f75af5473fe502420e83109b2b9cdf', 'width': 960}, {'height': 1435, 'url': 'https://preview.redd.it/ombg19hz5vuf1.png?width=1080&crop=smart&auto=webp&s=7d5870d7a692656d93ae43145a9571aacbd1aaeb', 'width': 1080}], 'source': {'height': 1632, 'url': 'https://preview.redd.it/ombg19hz5vuf1.png?auto=webp&s=bbce12c73fc1d2582b49bcf32ec1abc0850bf008', 'width': 1228}, 'variants': {}}]}
Any Advice on Cloud Computing?
0
I want to start training my own deep learning models, and I need a cloud computing service for this. I'm looking for a service that offers at least 40 GB of VRAM at the lowest possible cost. I don't need it to be an uninterrupted service; running it only when I need to train is fine. I've seen options like Scaleway, which offers an L40S for €1.40 per hour, but that seems a bit pricey. What's the most popular option, or what do you recommend?
2025-10-13T11:13:48
https://www.reddit.com/r/LocalLLaMA/comments/1o5gui1/any_advice_on_cloud_computing/
Escou98
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5gui1
false
null
t3_1o5gui1
/r/LocalLLaMA/comments/1o5gui1/any_advice_on_cloud_computing/
false
false
self
0
null
LM Studio + Snapdragon Laptops = Bad experience
9
Hello. I've been running into this issue recently that I'm unable to debug or fix whatsoever. Using the latest version of LM Studio (0.3.30) on my Snapdragon laptop (a Slim 7x, the 32 GB RAM version), I get a pretty great experience the first time I run LM Studio. I recently tried the Qwen3 1.7B model just to test it out, and I get around 50 tokens/s, which is great. However, that only works the first time the model is loaded. Afterwards, if I eject the model and load another one (let's say Qwen3 4B), I get somewhere around 0.02 tokens/s. I just don't get why. If I reload the same 1.7B model, I get the same token performance. What I've noticed is that rebooting the laptop and loading the model again fixes the issue (for whatever model I load first, including Qwen3 Coder 30B), but as soon as I eject and load another model, the speed is always under 1 token/s until I reboot. I haven't altered any settings; I just downloaded the model, loaded it in, and that's it. I had the same experience using a Surface Laptop 7 in the past, with an older version of LM Studio, but after some updates it was somehow fixed. Any help fixing this is greatly appreciated.
2025-10-13T10:44:07
https://www.reddit.com/r/LocalLLaMA/comments/1o5gbbf/lm_studio_snapdragon_laptops_bad_experience/
Andrew_C0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5gbbf
false
null
t3_1o5gbbf
/r/LocalLLaMA/comments/1o5gbbf/lm_studio_snapdragon_laptops_bad_experience/
false
false
self
9
null
What’s your take on today’s AI chat models? Quick survey!
0
I’m running an anonymous survey to learn how people actually use and feel about AI chat tools like Llama, ChatGPT, Gemini, etc. I’d love to hear your perspective on what works well and what could be better. You can share your thoughts here: [**Survey link**](https://qualtricsxm899s6r9s6.qualtrics.com/jfe/form/SV_5u4keuoWFVv7Qk6) Once enough responses come in, I’ll post a short summary of what people are saying. Thanks for taking part.
2025-10-13T10:24:43
https://www.reddit.com/r/LocalLLaMA/comments/1o5fzd6/whats_your_take_on_todays_ai_chat_models_quick/
moizsawan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5fzd6
false
null
t3_1o5fzd6
/r/LocalLLaMA/comments/1o5fzd6/whats_your_take_on_todays_ai_chat_models_quick/
false
false
self
0
null
Open-source RAG routes are splintering — MiniRAG, Agent-UniRAG, SymbioticRAG… which one are you actually using?
12
I’ve been poking around the open-source RAG scene and the variety is wild — not just incremental forks, but fundamentally different philosophies. Quick sketch: * **MiniRAG**: ultra-light, pragmatic — built to run cheaply/locally. * **Agent-UniRAG**: retrieval + reasoning as one continuous agent pipeline. * **SymbioticRAG**: human-in-the-loop + feedback learning; treats users as part of the retrieval model. * **RAGFlow / Verba / LangChain-style stacks**: modular toolkits that let you mix & match retrievers, rerankers, and LLMs. What surprises me is how differently they behave depending on the use case: small internal KBs vs. web-scale corpora, single-turn factual Qs vs. multi-hop reasoning, and latency/infra constraints. Anecdotally I’ve seen MiniRAG beat heavier stacks on latency and robustness for small corpora, while agentic approaches seem stronger on multi-step reasoning — but results vary a lot by dataset and prompt strategy. There’s a community effort (search for **RagView** on GitHub or ragview.ai) that aggregates side-by-side comparisons — worth a look if you want apples-to-apples experiments. So I’m curious from people here who actually run these in research or production: * Which RAG route gives you the best trade-off between **accuracy, speed, and controllability**? * What failure modes surprised you (hallucinations, context loss, latency cliffs)? * Any practical tips for choosing between a lightweight vs. agentic approach? Drop your real experiences (not marketing). Concrete numbers, odd bugs, or short config snippets are gold.
2025-10-13T09:53:05
https://www.reddit.com/r/LocalLLaMA/comments/1o5fgdv/opensource_rag_routes_are_splintering_minirag/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5fgdv
false
null
t3_1o5fgdv
/r/LocalLLaMA/comments/1o5fgdv/opensource_rag_routes_are_splintering_minirag/
false
false
self
12
null
I am looking for an open source AI model for text-to-image and text-to-video generation
0
I am looking for an open source AI model that I can use locally for NSFW content creation. * Text to image (NSFW) * Text to video (NSFW)
2025-10-13T09:41:43
https://www.reddit.com/r/LocalLLaMA/comments/1o5f9sz/i_am_looking_for_a_open_source_ai_model_for_text/
Antique_Yam5935
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5f9sz
false
null
t3_1o5f9sz
/r/LocalLLaMA/comments/1o5f9sz/i_am_looking_for_a_open_source_ai_model_for_text/
false
false
nsfw
0
null
Flowchart vs handoff: two paradigms for building AI agents
3
TL;DR: In a handoff‑based system, any agent can pass control to any other agent and the entire conversation history moves with it. Mathematically, this gives you a compact way to create a dynamic call graph that grows with the task. A pure flowchart has a fixed graph. To get the same flexibility you must pre‑wire a large number of edges and conditions, which leads to combinatorial blow‑ups and brittle diagrams.
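The handoff pattern described in the TL;DR can be sketched in a few lines: any agent may return the name of the next agent, the full history travels with the handoff, and the call graph is built at runtime rather than pre-wired. The agent names and routing rule below are illustrative assumptions, not from the linked post.

```python
# Hedged sketch of the handoff pattern: each agent returns (next_agent, history).
# No edges are pre-wired; control flow is decided at runtime from the history.

def triage(history):
    last = history[-1]
    nxt = "billing" if "invoice" in last else "support"
    return nxt, history + [f"triage -> {nxt}"]

def billing(history):
    return None, history + ["billing: resolved"]

def support(history):
    return None, history + ["support: resolved"]

AGENTS = {"triage": triage, "billing": billing, "support": support}

def run(first_agent, user_msg):
    agent, history = first_agent, [user_msg]
    while agent is not None:          # dynamic call graph: grows with the task
        agent, history = AGENTS[agent](history)
    return history

print(run("triage", "question about my invoice"))
```

A flowchart equivalent would need an explicit edge (plus a condition) for every possible triage-to-specialist route, which is where the combinatorial blow-up comes from.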
2025-10-13T09:36:10
https://blog.rowboatlabs.com/flowcharts-vs-handoffs-a-simple-math-framing/
Prestigious_Peak_773
blog.rowboatlabs.com
1970-01-01T00:00:00
0
{}
1o5f6md
false
null
t3_1o5f6md
/r/LocalLLaMA/comments/1o5f6md/flowchart_vs_handoff_two_paradigms_for_building/
false
false
https://external-preview…7230aeabada44ddd
3
{'enabled': False, 'images': [{'id': 'Dq0WTmB71GYV1a-VpEQnE9Vhsng-pcJ1_2iQnVgWNpk', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/Dq0WTmB71GYV1a-VpEQnE9Vhsng-pcJ1_2iQnVgWNpk.jpeg?width=108&crop=smart&auto=webp&s=6c2cb407735ff9815c16217f17abdb3263360159', 'width': 108}, {'height': 168, 'url': 'https://external-preview.redd.it/Dq0WTmB71GYV1a-VpEQnE9Vhsng-pcJ1_2iQnVgWNpk.jpeg?width=216&crop=smart&auto=webp&s=af958b47c2936a047c76f4b21f761d2fb8f782fe', 'width': 216}, {'height': 249, 'url': 'https://external-preview.redd.it/Dq0WTmB71GYV1a-VpEQnE9Vhsng-pcJ1_2iQnVgWNpk.jpeg?width=320&crop=smart&auto=webp&s=8a9d726aafe62d6ac959c28ff7c81c7e618c8fb4', 'width': 320}, {'height': 498, 'url': 'https://external-preview.redd.it/Dq0WTmB71GYV1a-VpEQnE9Vhsng-pcJ1_2iQnVgWNpk.jpeg?width=640&crop=smart&auto=webp&s=ac167ec3d78464ad8a2dba30d686c47eacb4d61f', 'width': 640}, {'height': 748, 'url': 'https://external-preview.redd.it/Dq0WTmB71GYV1a-VpEQnE9Vhsng-pcJ1_2iQnVgWNpk.jpeg?width=960&crop=smart&auto=webp&s=738e0fe2a3da7f9bac4f84ce9e47ffedcc6bb422', 'width': 960}, {'height': 841, 'url': 'https://external-preview.redd.it/Dq0WTmB71GYV1a-VpEQnE9Vhsng-pcJ1_2iQnVgWNpk.jpeg?width=1080&crop=smart&auto=webp&s=1c411de5dbdefaed2f78921546f75825b1c51cc0', 'width': 1080}], 'source': {'height': 935, 'url': 'https://external-preview.redd.it/Dq0WTmB71GYV1a-VpEQnE9Vhsng-pcJ1_2iQnVgWNpk.jpeg?auto=webp&s=9f5b50b752bea224924a5e97d8282e5cc1158453', 'width': 1200}, 'variants': {}}]}
Companies with strict privacy/security requirements: How are you handling LLMs and AI agents?
0
**For those of you working at companies that can't use proprietary LLMs** (OpenAI, Anthropic, Google, etc.) due to privacy, security, or compliance reasons - what's your current solution? Is there anything better than self-hosting from scratch?
2025-10-13T09:25:19
https://www.reddit.com/r/LocalLLaMA/comments/1o5f0cp/companies_with_strict_privacysecurity/
Miserable_Coast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5f0cp
false
null
t3_1o5f0cp
/r/LocalLLaMA/comments/1o5f0cp/companies_with_strict_privacysecurity/
false
false
self
0
null
GLM-4.6-FP8 on single GH200
11
Hello there, I have full access to a GH200 96 GB during some periods of the day, so I wanted to use the zai-org/GLM-4.6-FP8 model. I am new to local LLMs. I ran GLM-4.5-Air before using llama.cpp, but since the GH200 has 480 GB RAM and 96 GB VRAM I thought I should try GLM-4.6-FP8. I would like to use vLLM, because I saw that fp8 calculations are actually faster than int8 on the GH200. I have so many questions, and if someone has time it would be nice to answer them (questions are at the end of the post), BUT the main question is "how can I run this model?". I tried this: docker run -it --rm \ --gpus all \ --ipc=host \ --shm-size=64g \ -p 8000:8000 \ -e HF_TOKEN="$HF_TOKEN" \ -e HUGGING_FACE_HUB_TOKEN="$HF_TOKEN" \ -e MALLOC_ARENA_MAX=2 \ -v /opt/vllm/models:/models \ -v /home/admin/.cache/huggingface:/root/.cache/huggingface \ -v /home/admin/.cache/vllm:/root/.cache/vllm \ vllm/vllm-openai:latest-aarch64 \ --model zai-org/GLM-4.6-FP8 \ --download-dir /models \ --tensor-parallel-size 1 \ --cpu-offload-gb 350 \ --kv-cache-dtype fp8_e4m3 \ --gpu-memory-utilization 0.95 \ --max-model-len 4098 \ --max-num-batched-tokens 1024 \ --max-num-seqs 1 \ --served-model-name glm-4.6-fp8 \ --api-key sk-local-jan \ --trust-remote-code \ --enforce-eager Sometimes it fails after loading shards, sometimes before loading shards. “Model loading took \~29.8 GiB” “Available KV cache memory: 0.81 GiB / -0.27 GiB” “No available memory for the cache blocks… Try increasing gpu\_memory\_utilization or decreasing max\_model\_len” I’m confused about a few things: * Why is GPU memory utilization always at **100%**, even when I set `--gpu-memory-utilization 0.9` or `0.98`? It always shows `97277MiB / 97871MiB`. * It loads \~30 GB of weights to the GPU. Does that mean the problem is that it can’t load the KV cache into VRAM? * What exactly gets loaded to the GPU first, the weights or the KV cache? 
* Since I just want to test the model, is there a way to explicitly tell vLLM to load only \~10 GB of weights to GPU and keep the rest on CPU? I’m always short by less than 1 GB before it fails. * If I have 96 GB VRAM and only \~30 GB of weights are loaded, what is taking up the other 66 GB? * Is it even possible to run this model on a single GH200?
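One way to reason about failures like the one above is back-of-envelope KV-cache arithmetic: per token, the cache costs 2 (K and V) times layers times KV heads times head dim times bytes per element. The layer/head numbers below are illustrative assumptions, not the real GLM-4.6 config; substitute the values from the model's config.json. Note also that vLLM pre-reserves gpu_memory_utilization times total VRAM up front, which is why nvidia-smi shows near 100% regardless of the flag's value.

```python
# Back-of-envelope KV-cache sizing. The hyperparameters are ASSUMED for
# illustration, not read from GLM-4.6's actual config.json.

def kv_cache_gib(num_layers, num_kv_heads, head_dim, ctx_len, bytes_per_elem):
    # 2x for the K and V tensors, per layer, per token
    total = 2 * num_layers * num_kv_heads * head_dim * ctx_len * bytes_per_elem
    return total / 1024**3

# fp8 KV cache (1 byte/elem) at the 4098-token context from the command above
print(round(kv_cache_gib(92, 8, 128, 4098, 1), 2))  # -> 0.72
```

With numbers in this range the KV cache itself is small; the squeeze usually comes from the reserved weight + activation budget, so freeing even 1-2 GB (lower `--gpu-memory-utilization`, larger `--cpu-offload-gb`) can be enough to get past the "No available memory for the cache blocks" error.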
2025-10-13T09:15:44
https://www.reddit.com/r/LocalLLaMA/comments/1o5euxz/glm46fp8_on_single_gh200/
Normal-Phone7762
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5euxz
false
null
t3_1o5euxz
/r/LocalLLaMA/comments/1o5euxz/glm46fp8_on_single_gh200/
false
false
self
11
null
Seeking an uncensored NSFW Dutch-language model
28
Hi, I am looking for a local uncensored NSFW Dutch model/dataset for KoboldCpp and SillyTavern on my PC: AMD 7-5800 / 32 GB RAM / RTX 3060 Ti 8 GB. I am planning to write a VN story. Friendly greetings from the Netherlands.
2025-10-13T08:42:12
https://www.reddit.com/r/LocalLLaMA/comments/1o5ec73/seeking_for_a_uncensored_nsfw_dutch_language/
jobbie1973
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5ec73
false
null
t3_1o5ec73
/r/LocalLLaMA/comments/1o5ec73/seeking_for_a_uncensored_nsfw_dutch_language/
false
false
nsfw
28
null
With ROCm support on the RX9060xt 16gb do we have a cheap alternative to 64gb of Vram?
20
[from https://videocardz.com/newz/amd-releases-rocm-7-0-2-with-radeon-rx-9060-support](https://preview.redd.it/jbbtsazy9uuf1.png?width=1310&format=png&auto=webp&s=4739761e7cf8822ba0ff7df6139e1f5d74252f8e) Reading the news and considering that a card costs €300 + VAT, with €1200 + VAT you can get 4 cards for a total of 64GB of VRAM. I don't know the performance of the new drivers and I hope someone here tests them soon, but it seems like good news. Opinions? Also 160W x 4 = 640W. Cheap.
2025-10-13T08:27:27
https://www.reddit.com/r/LocalLLaMA/comments/1o5e3zy/with_rocm_support_on_the_rx9060xt_16gb_do_we_have/
Loskas2025
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5e3zy
false
null
t3_1o5e3zy
/r/LocalLLaMA/comments/1o5e3zy/with_rocm_support_on_the_rx9060xt_16gb_do_we_have/
false
false
https://b.thumbs.redditm…qId1oYEMjSEg.jpg
20
{'enabled': False, 'images': [{'id': '2pEp6h9DVo4Sdr9wjgy_p89eQv-zFl1B4zrMdKUtXdM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/2pEp6h9DVo4Sdr9wjgy_p89eQv-zFl1B4zrMdKUtXdM.jpeg?width=108&crop=smart&auto=webp&s=8a44dd5d8b50fe0edf6c05a73bbcb43ad6315ef8', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/2pEp6h9DVo4Sdr9wjgy_p89eQv-zFl1B4zrMdKUtXdM.jpeg?width=216&crop=smart&auto=webp&s=8d1e4b66848dd8d9f745a3345b59a584b9867a94', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/2pEp6h9DVo4Sdr9wjgy_p89eQv-zFl1B4zrMdKUtXdM.jpeg?width=320&crop=smart&auto=webp&s=5d1e5e62db1fe6b67680538fa1aee2eb4644a67c', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/2pEp6h9DVo4Sdr9wjgy_p89eQv-zFl1B4zrMdKUtXdM.jpeg?width=640&crop=smart&auto=webp&s=de95310c04e2ea4306b5a78e4f6935e9c2389359', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/2pEp6h9DVo4Sdr9wjgy_p89eQv-zFl1B4zrMdKUtXdM.jpeg?width=960&crop=smart&auto=webp&s=23e7d65321bbff04c747639cc3d6dbf0a49a2303', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/2pEp6h9DVo4Sdr9wjgy_p89eQv-zFl1B4zrMdKUtXdM.jpeg?width=1080&crop=smart&auto=webp&s=42ff8e6577da34c0a31eedf4a29d9b54fe913fc8', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/2pEp6h9DVo4Sdr9wjgy_p89eQv-zFl1B4zrMdKUtXdM.jpeg?auto=webp&s=fa4bc72ee567fc04bbd59612413d35fece42c663', 'width': 2500}, 'variants': {}}]}
Does crawl4ai have an option to exclude urls based on a keyword?
2
I can't find it anywhere in the documentation. I can only find filtering based on a domain, not url.
2025-10-13T08:21:53
https://www.reddit.com/r/LocalLLaMA/comments/1o5e0zy/does_crawl4ai_have_an_option_to_exclude_urls/
SemperPistos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5e0zy
false
null
t3_1o5e0zy
/r/LocalLLaMA/comments/1o5e0zy/does_crawl4ai_have_an_option_to_exclude_urls/
false
false
self
2
null
New Qwen3-VL?
1
[removed]
2025-10-13T08:02:30
https://i.redd.it/o1gp1abs5uuf1.png
Disya321
i.redd.it
1970-01-01T00:00:00
0
{}
1o5dqh3
false
null
t3_1o5dqh3
/r/LocalLLaMA/comments/1o5dqh3/new_qwen3vl/
false
false
default
1
{'enabled': True, 'images': [{'id': 'o1gp1abs5uuf1', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/o1gp1abs5uuf1.png?width=108&crop=smart&auto=webp&s=ba555d45ea1bfcecf5d2730e0da63f8ac13456a3', 'width': 108}, {'height': 35, 'url': 'https://preview.redd.it/o1gp1abs5uuf1.png?width=216&crop=smart&auto=webp&s=7b95979c40d37a85da26bf8bbbdb0c6dbb6b353d', 'width': 216}, {'height': 53, 'url': 'https://preview.redd.it/o1gp1abs5uuf1.png?width=320&crop=smart&auto=webp&s=f7a6b1438787e46de4a2ea2b7ac38f346622fd2c', 'width': 320}, {'height': 106, 'url': 'https://preview.redd.it/o1gp1abs5uuf1.png?width=640&crop=smart&auto=webp&s=5ab92a4773ec229b07455b5de8017cda8e728a3c', 'width': 640}], 'source': {'height': 127, 'url': 'https://preview.redd.it/o1gp1abs5uuf1.png?auto=webp&s=28daf71c001b1adfecea9dc1f5e0c5f277bafe62', 'width': 763}, 'variants': {}}]}
Gemini 2.5 pro / Deep Think VS local LLM
19
I’m on the « Ultra » plan with Google since 3 months now, and while I was cool with their discovery offer (149€/month), I now have 3 days left to cancel before they start charging me 279€/month. I did heavily use 2.5 Pro and Deep Think for creative writing, brainstorming, and critical law-related questions. I do not code. I have to admit Gemini has been a huge gain in productivity, but 279€/month is such a heavy price just to have access to Deep Think. My question is: are there any local LLMs that I can run, even slowly, on my hardware that are good enough compared to what I have been used to? I’ve got a MacBook Pro M3 Max with 128 GB RAM. How well can I do? Any pointer greatly appreciated. Apologies for my english. Frenchman here
2025-10-13T07:54:36
https://www.reddit.com/r/LocalLLaMA/comments/1o5dly2/gemini_25_pro_deep_think_vs_local_llm/
Dumperandumper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5dly2
false
null
t3_1o5dly2
/r/LocalLLaMA/comments/1o5dly2/gemini_25_pro_deep_think_vs_local_llm/
false
false
self
19
null
Exploring Multi-Model AI APIs (GPT5,GLM,Claude 4.5)
1
[removed]
2025-10-13T07:47:44
https://www.reddit.com/r/LocalLLaMA/comments/1o5di51/exploring_multimodel_ai_apis_gpt5glmclaude_45/
Automatic-Photo-8436
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5di51
false
null
t3_1o5di51
/r/LocalLLaMA/comments/1o5di51/exploring_multimodel_ai_apis_gpt5glmclaude_45/
false
false
self
1
null
What's the missing piece in the LLaMA ecosystem right now?
23
The LLaMA model ecosystem is exploding with new variants and fine-tunes. But what's the biggest gap or most underdeveloped area still holding it back? For me, it's the ***data prep and annotation tools***. The models are getting powerful, but cleaning and structuring quality training data for fine-tuning is still a major, manual bottleneck. What do you think is the most missing piece? Better/easier fine-tuning tools? More accessible hardware solutions? Something else entirely?
2025-10-13T07:45:58
https://www.reddit.com/r/LocalLLaMA/comments/1o5dh3v/whats_the_missing_piece_in_the_llama_ecosystem/
Street-Lie-2584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5dh3v
false
null
t3_1o5dh3v
/r/LocalLLaMA/comments/1o5dh3v/whats_the_missing_piece_in_the_llama_ecosystem/
false
false
self
23
null
LLaMA: A Game-Changer for Local AI Development? My Thoughts and Workflow
0
Hey everyone, I've been diving deep into Meta's LLaMA models for a few personal projects, and I have to say, it's been a game-changer for running powerful LLMs on my own hardware. For those who don't know, LLaMA is a family of large language models released by Meta. It's designed to be **more efficient and feasible to run locally**, while still delivering incredible performance. This has blown open the doors for experimentation, fine-tuning, and building private, offline applications.
2025-10-13T07:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1o5daoj/llama_a_gamechanger_for_local_ai_development_my/
Street-Lie-2584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5daoj
false
null
t3_1o5daoj
/r/LocalLLaMA/comments/1o5daoj/llama_a_gamechanger_for_local_ai_development_my/
false
false
self
0
null
How to re-create OpenAI Assistants locally?
5
Hey all, I've learned so much from this community so first of all a big thank you for the posts and knowledge shared. I'm hoping someone can shed some light on the best solution for my use case. I've used the OpenAI Assistants API and the OpenAI vector store to essentially have a sync from a SharePoint site that a user can manage: every day the sync tool runs and converts any Excel/CSV to JSON, but otherwise just uploads the files from SharePoint into the OpenAI vector store (.pdf, .docx, .json, etc.), removes any that the user deletes, and updates any that the user modifies. This knowledge is then attached to an Assistants API which the user can access through a web interface I made, or via ChatGPT as a custom GPT on our teams account. Recently I've just finished building our local AI server with 3x RTX 4000 ADA GPUs, 700GB of RAM and 2x Intel Xeon Gold CPUs. I've set this up with an ESXi hypervisor, Ollama, OpenWebUI, n8n, qdrant, and Flowise, and to be honest it all seems like a lot of overlap, or I'm not quite sure which is best for what purpose. There are a ton of tutorials on YouTube which seem to want to do what I'm asking but fall short of the absolutely amazing answers the OpenAI vector store gives with a simple drag and drop of files. So my question is: what is the best way to run a similar thing? We're looking to replace the reliance on OpenAI with our own hardware. We want something that is quite simple to manage and automate so that we can keep the sync with SharePoint in place and the end user can then manage the knowledge of the bot. I've tried the knowledge feature in OpenWebUI and it's dreadful for the hundreds of documents we're training it on. I've tried getting to grips with qdrant and I just cannot seem to get it to function the way I'm reading about. Any advice would be welcome, even if it's just pointing me in the right direction, thank you!
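The sync part of a setup like this is mostly hash bookkeeping; a minimal sketch of the daily loop might look like the following. The in-memory dict stands in for qdrant, and the add/modify/delete detection mirrors the SharePoint behaviour described above; a real version would embed the text and upsert via qdrant_client instead.

```python
# Minimal sketch of the SharePoint-style sync loop: detect added, changed, and
# removed files by content hash and mirror them into a vector store. The dict
# is a stand-in for a real vector DB (embed + upsert in production).

import hashlib

def sync(files, store):
    """files: {path: text}; store: {path: (hash, text)}, mutated in place."""
    seen = set()
    for path, text in files.items():
        h = hashlib.sha256(text.encode()).hexdigest()
        seen.add(path)
        if store.get(path, (None,))[0] != h:
            store[path] = (h, text)          # new or modified: (re)index
    for path in list(store):
        if path not in seen:
            del store[path]                  # deleted upstream: remove
    return store

store = {}
sync({"a.pdf": "v1"}, store)
sync({"a.pdf": "v2", "b.docx": "x"}, store)   # a modified, b added
sync({"b.docx": "x"}, store)                  # a deleted
print(sorted(store))                          # -> ['b.docx']
```

Hashing content rather than trusting modification timestamps is what keeps the index from drifting when SharePoint touches files without changing them.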
2025-10-13T06:58:03
https://www.reddit.com/r/LocalLLaMA/comments/1o5cpuu/how_to_recreate_openai_assistants_locally/
RhigoWork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5cpuu
false
null
t3_1o5cpuu
/r/LocalLLaMA/comments/1o5cpuu/how_to_recreate_openai_assistants_locally/
false
false
self
5
null
ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory
8
2025-10-13T06:18:35
https://arxiv.org/abs/2509.25140
AaronFeng47
arxiv.org
1970-01-01T00:00:00
0
{}
1o5c2wu
false
null
t3_1o5c2wu
/r/LocalLLaMA/comments/1o5c2wu/reasoningbank_scaling_agent_selfevolving_with/
false
false
default
8
null
Meta Superintelligence group publishes paper on new RAG technique
25
2025-10-13T05:04:45
https://paddedinputs.substack.com/p/meta-superintelligences-surprising
ttkciar
paddedinputs.substack.com
1970-01-01T00:00:00
0
{}
1o5auc8
false
null
t3_1o5auc8
/r/LocalLLaMA/comments/1o5auc8/meta_superintelligence_group_publishes_paper_on/
false
false
default
25
{'enabled': False, 'images': [{'id': 'JCvftI08SHl-gSbgzIC77Ii1UGjMaLNJCQ7drY7JAIc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JCvftI08SHl-gSbgzIC77Ii1UGjMaLNJCQ7drY7JAIc.jpeg?width=108&crop=smart&auto=webp&s=da14aa9fea8ec6eff9b0f96198e2a1e38fd368f4', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/JCvftI08SHl-gSbgzIC77Ii1UGjMaLNJCQ7drY7JAIc.jpeg?width=216&crop=smart&auto=webp&s=3c09699140e498b2ddc66ca87d55eec6b904bdb8', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/JCvftI08SHl-gSbgzIC77Ii1UGjMaLNJCQ7drY7JAIc.jpeg?width=320&crop=smart&auto=webp&s=58bfa7610cbccb8d1859fbbbe1a9d3c5a14642a4', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/JCvftI08SHl-gSbgzIC77Ii1UGjMaLNJCQ7drY7JAIc.jpeg?width=640&crop=smart&auto=webp&s=b567d36377fcc16b8bef230712988e58520db703', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/JCvftI08SHl-gSbgzIC77Ii1UGjMaLNJCQ7drY7JAIc.jpeg?auto=webp&s=be1783497f7273b7b3209c33ec7231f0125369e0', 'width': 920}, 'variants': {}}]}
What happened to Small LM?
15
Basically the title. Some time ago they were all over the place... Thank you
2025-10-13T04:41:42
https://www.reddit.com/r/LocalLLaMA/comments/1o5af83/what_happened_to_small_lm/
icm76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5af83
false
null
t3_1o5af83
/r/LocalLLaMA/comments/1o5af83/what_happened_to_small_lm/
false
false
self
15
null
Editing text files with LLMs
9
Hi, everyone! Sorry if this has been asked before; I tried searching, but nothing that gave me an answer came up. I wanted an LLM that could create, edit and save new text files on my PC. That's it. I'll use them in Obsidian and other text-based tools to organize a few projects, etc. On the surface this seems simple enough, but, man, am I having a hard time with it. I tried GPT (web and PC versions), Gemini, and now Ollama (inside Obsidian through Copilot and outside through the PC app), but no success. How could I do this?
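One pattern that sidesteps the problem: the model never touches the disk at all; a small script owns the file I/O and the model only returns text. A hedged sketch, where `call_llm` is a stub you would point at any local OpenAI-compatible server (Ollama, llama.cpp):

```python
# Create/edit/save loop: the script owns file I/O, the LLM only returns text.
# `call_llm` is a STUB; a real version would POST to your local server's
# /v1/chat/completions endpoint.

import pathlib, tempfile

def call_llm(prompt):
    # stub response standing in for a local model call
    return "# Project notes\n- step one\n"

def edit_note(path, instruction):
    p = pathlib.Path(path)
    current = p.read_text() if p.exists() else ""
    p.write_text(call_llm(f"{instruction}\n---\n{current}"))
    return p.read_text()

note = pathlib.Path(tempfile.mkdtemp()) / "note.md"
print(edit_note(note, "Create a project notes outline"))
```

Since Obsidian vaults are just folders of Markdown files, writing to the vault path is enough; Obsidian picks up the change automatically.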
2025-10-13T03:31:13
https://www.reddit.com/r/LocalLLaMA/comments/1o592o0/editing_text_files_with_llms/
WinEfficient2147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o592o0
false
null
t3_1o592o0
/r/LocalLLaMA/comments/1o592o0/editing_text_files_with_llms/
false
false
self
9
null
No luck to use vllm for custom models on Cursor. Anyone did it before?
4
Hi everyone. I went to Cursor's settings and entered the custom model name, an OpenAI API key (just some random characters), and the OpenAI base URL: http://localhost:8005/v1 vllm serve meta-llama/Llama-3.2-1B-Instruct --host 0.0.0.0 --port 8005 --max-model-len 8192 --gpu-memory-utilization 0.75 Note: I tested that the vLLM endpoint indeed worked with python scripts
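For reference, a sanity check like the "python scripts" mentioned above usually amounts to posting the standard OpenAI-compatible chat payload to the base URL. The payload shape below is the standard /v1/chat/completions format; the send step is left commented since it needs the server running.

```python
# Sanity check for an OpenAI-compatible vLLM endpoint: build the standard
# chat-completions payload matching the `vllm serve` flags above.

import json

def chat_payload(model, prompt, max_tokens=32):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = chat_payload("meta-llama/Llama-3.2-1B-Instruct", "Say hi")
print(json.dumps(payload, indent=2))

# To actually send it (requires the server to be running):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:8005/v1/chat/completions",
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json",
#                "Authorization": "Bearer sk-anything"})
#   print(urllib.request.urlopen(req).read())
```

If this works from the command line but Cursor still fails, the issue is on Cursor's side of the connection rather than vLLM's.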
2025-10-13T03:12:59
https://www.reddit.com/r/LocalLLaMA/comments/1o58pr6/no_luck_to_use_vllm_for_custom_models_on_cursor/
Spare-Solution-787
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o58pr6
false
null
t3_1o58pr6
/r/LocalLLaMA/comments/1o58pr6/no_luck_to_use_vllm_for_custom_models_on_cursor/
false
false
self
4
null
I rue the day they first introduced "this is not X, this is <unearned superlative>' to LLM training data
317
\- This isn't just a bug, this is a fundamental design flaw \- This isn't just a recipe, this is a culinary journey \- This isn't a change, this is a seismic shift \- This isn't about font choice, this is about the very soul of design \- This isn't a refactor, this is a fundamental design overhaul \- This isn't a spreadsheet, this is a blueprint of a billion dollar business And it seems to have spread to all LLMs now, to the point that you have to consciously avoid this phrasing everywhere if you're a human writer Perhaps the idea of Model Collapse (https://en.wikipedia.org/wiki/Model\_collapse) is not unreasonable.
2025-10-13T03:05:58
https://www.reddit.com/r/LocalLLaMA/comments/1o58klk/i_rue_the_day_they_first_introduced_this_is_not_x/
Comfortable-Rock-498
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o58klk
false
null
t3_1o58klk
/r/LocalLLaMA/comments/1o58klk/i_rue_the_day_they_first_introduced_this_is_not_x/
false
false
self
317
null
Best TTS for voiceover narration?
2
I want to start making manhwa recaps but I need a good TTS. I know the best ones are usually paid, which is expensive; in the future I will definitely consider it if it pays for itself, but right now it's a hobby. My best choices so far are Chatterbox (has some artifacts and a weird sound, but is really good with my voice) and Higgs v2 (still testing, but sounds bland with my voice). I was thinking of trying Kokoro since it's so good, but it has no voice cloning :( still might be worth it for now
2025-10-13T02:26:11
https://www.reddit.com/r/LocalLLaMA/comments/1o57rg4/best_tts_for_voiceover_narration/
Sea-Lemon9459
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o57rg4
false
null
t3_1o57rg4
/r/LocalLLaMA/comments/1o57rg4/best_tts_for_voiceover_narration/
false
false
self
2
null
Fine-tuning using a 3090 and 5090 - advice needed
3
My goal is to fine-tune a 70b model preferably Q4 (hopefully no lower than Q3) and originally I was going to use matching dual 3090 (albeit slower) with nvlink to do that. Except recently I saw a video of someone combining a 3090 Ti and 5090 and was able to run a llama 3.1 70b model on LM studio. But I was hoping to fine-tune as well with these hardware options in mind— -128gb ram (4x 32gb) -AMD Ryzen 9 5950X cpu -AMD 5 motherboard with plenty of PCIe slots -1600 Watt power supply meant for multi-gpu (biggest concern is blowing a fuse at home, so looking into power capping and monitoring software to help make sure it doesn’t exceed a specified wattage) -A really good surge protector -Considering more SSD storage (currently have a 1tb, may go to 2tb) -Cooling: a cpu aio for sure and at least an aio for one of the gpu’s, a motherboard with enough slots to space apart, and the pc will be in a very cold location. -A really big open case When I asked a friend about this as a potential setup this was their main concern: >While this twin setup will work for inference I would check with anyone running it vs twin 3090s + nvlink for training. Training requires back propagation, which means, essentially, moving backwards through the model, also means gradient updates, which can be a lot of data to push over the PCIe bus itself. I can’t find enough existing information already. So I am hoping someone may be able to answer me on any experience they have had trying this out. Would just sticking with the dual 3090’s via nvlink bridge be the way to go? Or is there a better option entirely? Any suggestions would be super helpful and greatly appreciated. Thank you!
2025-10-13T01:49:50
https://www.reddit.com/r/LocalLLaMA/comments/1o570cf/finetuning_using_a_3090_and_5090_advice_needed/
Sienna_jxs0909
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o570cf
false
null
t3_1o570cf
/r/LocalLLaMA/comments/1o570cf/finetuning_using_a_3090_and_5090_advice_needed/
false
false
self
3
null
Is there something easy to use and setup like LMStudio, but with TTS and STT support, in Linux?
9
.
2025-10-13T01:07:41
https://www.reddit.com/r/LocalLLaMA/comments/1o565ne/is_there_something_easy_to_use_and_setup_like/
ff7_lurker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o565ne
false
null
t3_1o565ne
/r/LocalLLaMA/comments/1o565ne/is_there_something_easy_to_use_and_setup_like/
false
false
self
9
null
What information would be helpful in a guide for running open models in the cloud?
4
I am going to make an updated guide for running open LLMs on cloud GPUs. I am wondering what information I should include. What information would be helpful for newbies? Also is there any specific software you would like me to include in the guide?
2025-10-13T01:07:34
https://www.reddit.com/r/LocalLLaMA/comments/1o565ke/what_information_would_be_helpful_in_a_guide_for/
kotykd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o565ke
false
null
t3_1o565ke
/r/LocalLLaMA/comments/1o565ke/what_information_would_be_helpful_in_a_guide_for/
false
false
self
4
null
Part 2: Building LLMs from Scratch – Data Collection & Tokenizers [Follow-up to Part 1]
13
This is Part 2 of my 4-part series on building LLMs from scratch. You can read [Part 1 here](https://www.reddit.com/r/LocalLLaMA/comments/1npzstw/a_step_by_step_guide_on_how_to_build_a_llm_from/) for the quick start and overview. **What Part 2 Covers:** * **Data Collection Pipeline**: Processing 218+ historical sources (500M+ characters) from 1500-1850 * **5-Stage Cleaning Process**: Handling OCR errors, encoding issues, and format-specific challenges * **Custom Tokenizer Development**: Building a 30K vocabulary BPE tokenizer with 150+ special tokens for archaic English * **Quality Validation**: Multi-layered approach balancing historical authenticity with training quality Historical documents are often messy, with OCR errors, inconsistent formatting, and archaic language patterns that can break standard tokenizers. This post shows you how to build learning-focused systems that demonstrate real-world historical data processing challenges. **Technical Implementation:** * Complete code for processing PDF, HTML, XML, and TXT files * Custom tokenizer that understands "quoth", "hast", and London geography * Quality scoring systems and validation frameworks * Integration with Hugging Face ecosystem **Resources:** * [Part 2: Data Collection & Custom Tokenizers](https://blog.desigeek.com/post/2025/10/building-llm-from-scratch-part2-data-tokenizers/) * [Part 1: Quick Start & Overview](https://blog.desigeek.com/post/2025/09/building-llm-from-scratch-part1/) * [Complete Codebase](https://github.com/bahree/helloLondon) * [LinkedIn Post](https://www.linkedin.com/posts/amitbahree_ai-llm-generativeai-activity-7383287433108344832-aKlP) – if that is your thing. This series is designed as a learning exercise for developers who want to understand the complete LLM development pipeline, not just fine-tuning existing models. The focus is on building from scratch using historical London texts (1500-1850) to create models that understand archaic English and period-specific terminology. 
https://preview.redd.it/jq8b9vh13suf1.png?width=261&format=png&auto=webp&s=cbf2d19ba56cb8fc3397ce58446cd9309e323abf **Next up:** Part 3 will cover model architecture, GPU optimization, and training infrastructure.
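For readers following along, the core of BPE tokenizer training from Part 2 can be illustrated in a few lines: count adjacent symbol pairs across the corpus and merge the most frequent one, repeating until the target vocabulary size is reached. This toy version is a pedagogical sketch, not the actual helloLondon implementation, which uses the Hugging Face tokenizers library.

```python
# Toy illustration of one BPE training step on archaic English: count adjacent
# symbol pairs (weighted by word frequency) and merge the most frequent pair.
# A real 30K-vocab tokenizer repeats this thousands of times.

from collections import Counter

def most_frequent_pair(words):
    pairs = Counter()
    for syms, freq in words.items():
        for a, b in zip(syms, syms[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get)

def merge(words, pair):
    merged = "".join(pair)
    out = {}
    for syms, freq in words.items():
        new, i = [], 0
        while i < len(syms):
            if i + 1 < len(syms) and (syms[i], syms[i + 1]) == pair:
                new.append(merged); i += 2
            else:
                new.append(syms[i]); i += 1
        out[tuple(new)] = freq
    return out

# "quoth", "hast", "hath" as character sequences with toy corpus frequencies
words = {tuple("quoth"): 5, tuple("hast"): 3, tuple("hath"): 2}
pair = most_frequent_pair(words)
print(pair)            # -> ('t', 'h'), since "th" dominates archaic spellings
print(merge(words, pair))
```

This is also why archaic-English corpora benefit from a custom vocabulary: frequent period-specific character sequences like "th" in "quoth"/"hath" earn merges (and eventually whole-word tokens) that a modern-English tokenizer would never learn.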
2025-10-13T01:03:26
https://www.reddit.com/r/LocalLLaMA/comments/1o562l3/part_2_building_llms_from_scratch_data_collection/
amitbahree
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o562l3
false
null
t3_1o562l3
/r/LocalLLaMA/comments/1o562l3/part_2_building_llms_from_scratch_data_collection/
false
false
https://b.thumbs.redditm…Ptm-286FIuPQ.jpg
13
null
Did you create a new benchmark? Good, keep it to yourself, don't release how it works until something beats it.
81
Only release leaderboards / charts. This is the only way to avoid pollution / interference from the AI companies.
2025-10-13T00:05:35
https://www.reddit.com/r/LocalLLaMA/comments/1o54wbu/did_you_create_a_new_benchmark_good_keep_it_to/
EmirTanis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o54wbu
false
null
t3_1o54wbu
/r/LocalLLaMA/comments/1o54wbu/did_you_create_a_new_benchmark_good_keep_it_to/
false
false
self
81
null
Gemma 3n - on Snapdragon 6 gen 1 processor
4
Despite skepticism toward mobile chips, even processors like the Qualcomm Snapdragon 6 Gen 1 with 8 cores can run local models efficiently. For example, the Gemma 3n model runs well on a smartphone, while it's not viable on many conventional laptops with integrated graphics and only 2 GB of dedicated VRAM, which is insufficient for this type of workload.
2025-10-13T00:04:51
https://v.redd.it/4wxo7qxysruf1
Illustrious-Swim9663
v.redd.it
1970-01-01T00:00:00
0
{}
1o54vs1
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/4wxo7qxysruf1/DASHPlaylist.mpd?a=1762905904%2CYWRlOGYxMmZkM2Y5YTA4ODkxNjFiZGYwY2I0Nzg1MTRmZGUyMTdmNTM0NjMxOTRiZTAzZDBkOTNlNTA3YzE5MA%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/4wxo7qxysruf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/4wxo7qxysruf1/HLSPlaylist.m3u8?a=1762905904%2CNTg0MzExMzU2ZGZmZTA0YzM0NmY0NzZmODI3NWQ5ZDE3NTQwYTE3ZTI4NjM0Yjk2MGE2YjJkYzg3M2JjMmJmMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4wxo7qxysruf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 576}}
t3_1o54vs1
/r/LocalLLaMA/comments/1o54vs1/gemma_3n_on_snapdragon_6_gen_1_processor/
false
false
https://external-preview…f82107ad43e7a470
4
{'enabled': False, 'images': [{'id': 'OHFjZm9oeXlzcnVmMfSG2lULR-Io_d6A4xwI-LRLAmQAjrAwJL4rUPhzl3kw', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/OHFjZm9oeXlzcnVmMfSG2lULR-Io_d6A4xwI-LRLAmQAjrAwJL4rUPhzl3kw.png?width=108&crop=smart&format=pjpg&auto=webp&s=d63108b5590be3b43ac457cbf5e536a948a53b0a', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/OHFjZm9oeXlzcnVmMfSG2lULR-Io_d6A4xwI-LRLAmQAjrAwJL4rUPhzl3kw.png?width=216&crop=smart&format=pjpg&auto=webp&s=3e60829cea727e8f4b32c2a998200d1cedbe51bd', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/OHFjZm9oeXlzcnVmMfSG2lULR-Io_d6A4xwI-LRLAmQAjrAwJL4rUPhzl3kw.png?width=320&crop=smart&format=pjpg&auto=webp&s=80a3bbb104ac850243cdd03e5149cb7d574eb59d', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/OHFjZm9oeXlzcnVmMfSG2lULR-Io_d6A4xwI-LRLAmQAjrAwJL4rUPhzl3kw.png?width=640&crop=smart&format=pjpg&auto=webp&s=79ecf92a41289cbc6258d5b2883bf1fe74abc92c', 'width': 640}], 'source': {'height': 1680, 'url': 'https://external-preview.redd.it/OHFjZm9oeXlzcnVmMfSG2lULR-Io_d6A4xwI-LRLAmQAjrAwJL4rUPhzl3kw.png?format=pjpg&auto=webp&s=62f0cf864b551728f95abce74ad5493ab3b134c0', 'width': 756}, 'variants': {}}]}
Roo Code, Cline, Opencode, Codex, Qwen CLI, Claude Code, Aider etc.
41
Hi, has anyone put all of these (Roo Code, Cline, Opencode, Codex, Qwen CLI, Claude Code, Aider) to the test? I've been using mostly Roo Code and am quite happy with it, but I'm wondering: am I missing out by not using Claude Code or one of the other ones? Are one or a couple of these massively better than all the others? Oh, I guess there is OpenHands and a few more as well.
2025-10-12T23:51:27
https://www.reddit.com/r/LocalLLaMA/comments/1o54lfc/roo_code_cline_opencode_codex_qwen_cli_claude/
Sorry_Ad191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o54lfc
false
null
t3_1o54lfc
/r/LocalLLaMA/comments/1o54lfc/roo_code_cline_opencode_codex_qwen_cli_claude/
false
false
self
41
null
gpt-OSS-120B high concurrency API
0
Hi guys. I have a pipeline where I have to parse thousands of PDFs and extract information from them. I've done a proof of concept with the OpenAI responses API and gpt-4-mini, but the problem is that I'm being rate limited pretty hard (the POC handles approx. 600 PDFs). So I've been thinking about how to approach this and I'll probably have a pool of LLM providers such as DeepInfra, Groq, and maybe Cerebras. I'm probably going to use gpt-oss-120B for all of them so I have kinda comparable results. Now, I have a couple questions. Checking https://artificialanalysis.ai/models/gpt-oss-120b/providers#features It's not clear to me if the "speed" metric is what I'm looking for. OpenAI has tokens-per-time and concurrency limits, and from that analysis it seems to me that Cerebras would let me be much more aggressive? DeepInfra and Groq went into the bucket because DeepInfra is cheap and I already have an account, and Groq just because, not that I did any analysis on it yet. I wonder if any of you have suffered a situation like this and if you have any recommendation about it. Important: This is a personal project, I can't afford to buy a local rig, it's too damn expensive. ## Summary - OpenAI rate limits are killing me - I need lots of concurrent requests - I'm looking to build a pool of providers but that would increase complexity and I'd like to avoid complexity as much as I can, because the rest of the pipeline already is complicated. - Cerebras seems the provider to go with but I've read conflicting info around
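The provider-pool idea can stay simple if it is just ordered failover: try providers in sequence and fall through on a rate-limit error. The provider callables below are stand-ins for the DeepInfra/Groq/Cerebras clients, all assumed to serve the same gpt-oss-120b model.

```python
# Sketch of ordered failover across LLM providers. Real clients would wrap
# their HTTP 429 responses in RateLimited; everything here is a stand-in.

class RateLimited(Exception):
    pass

def always_429(prompt):
    raise RateLimited("429 from provider")

def with_failover(providers, prompt):
    last_err = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except RateLimited as e:
            last_err = e                 # rate limited: try the next provider
    raise last_err

providers = [
    ("deepinfra", always_429),           # pretend this one is throttling us
    ("groq", lambda p: f"parsed: {p}"),
]
print(with_failover(providers, "invoice.pdf"))
```

Keeping the model identical across providers means a failover changes latency and cost but not output quality, which keeps the rest of the pipeline comparable.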
2025-10-12T23:33:43
https://www.reddit.com/r/LocalLLaMA/comments/1o547wb/gptoss120b_high_concurrency_api/
iagovar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o547wb
false
null
t3_1o547wb
/r/LocalLLaMA/comments/1o547wb/gptoss120b_high_concurrency_api/
false
false
self
0
{'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=216&crop=smart&auto=webp&s=b97954336b79c1390848d0e44fa056a85de68672', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=320&crop=smart&auto=webp&s=65f53b80ab9674ee645013e3e8eeac4f953d657e', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=640&crop=smart&auto=webp&s=47f397e4a22ed5ec7e82aad070eb446319603abc', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=960&crop=smart&auto=webp&s=0f4359d47b78f5c1aa35de8804dbe36a749fc11a', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=1080&crop=smart&auto=webp&s=62eb4b7216f41af6600fc4df79cfa67425c19442', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?auto=webp&s=efc17c9f241b4403d22cbacfe5d71900ee1cf85a', 'width': 1260}, 'variants': {}}]}
Looking for a small (4b to 8b) model to send a small text file to analyse. Gemma 4b serves me good but the context window is a bit small (n_ctx:4096).
4
I'm using the model with the llama.cpp server and send API requests from a Python script that sends a question along with a text file and looks for specific concepts. Sometimes my text file is a bit too large and I don't want to split it; rather, I would like an 8192 or better context window, but on a small model.
2025-10-12T23:32:51
https://www.reddit.com/r/LocalLLaMA/comments/1o5477m/looking_for_a_small_4b_to_8b_model_to_send_a/
oodelay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5477m
false
null
t3_1o5477m
/r/LocalLLaMA/comments/1o5477m/looking_for_a_small_4b_to_8b_model_to_send_a/
false
false
self
4
null
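Before raising n_ctx, a rough pre-flight check can tell whether a file plus a reply budget will fit. This uses the crude ~4 characters/token heuristic for English text, not a real tokenizer, so treat the result as an estimate only:

```python
def fits_context(text: str, n_ctx: int, reserve: int = 512,
                 chars_per_token: float = 4.0) -> bool:
    """Rough pre-flight check: does the file plus a reply budget fit in n_ctx?

    chars_per_token ~4 is a crude English-text heuristic, not a tokenizer.
    reserve leaves room for the question and the model's answer.
    """
    est_tokens = len(text) / chars_per_token
    return est_tokens + reserve <= n_ctx
```

A file that fails this check at n_ctx=4096 may well pass at 8192 without any splitting.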
Benchmarks on B200
6
I have access to 7xB200 for a week. Anything you want to see from a comparison standpoint?
2025-10-12T23:24:15
https://www.reddit.com/r/LocalLLaMA/comments/1o540gf/benchmarks_on_b200/
Ill_Recipe7620
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o540gf
false
null
t3_1o540gf
/r/LocalLLaMA/comments/1o540gf/benchmarks_on_b200/
false
false
self
6
null
Beyond Token Count: Our Research Suggests "Contextual Weight" is a Key Limiter on Large Context Windows
25
The community has seen an incredible push for larger context windows (1M, 10M tokens), with the goal of solving model memory limitations. While this is impressive, our long-term experiments suggest that raw token count only tells part of the story. While stress-testing Gemini 2.5 Pro, we used a different approach. Instead of focusing on length, we focused on density—feeding it a deeply philosophical and self-referential dialogue. We observed significant performance degradation, a state we call a "Contextual Storm," at just around 30,000 tokens. This is a small fraction of its advertised capacity and points to a bottleneck beyond simple text recall. This led us to develop the concept of "Phenomenological Contextual Weight" (PCW). The core idea is that the conceptual density and complexity of the context, not just its length, dictate the real cognitive load on the model. A 10,000-token paper on metaphysics has a far higher PCW than a 100,000-token system log. Current "Needle In A Haystack" benchmarks are excellent for testing recall but don't capture this kind of high-density cognitive load. It's the difference between asking a model to find a key in an empty warehouse versus asking it to navigate a labyrinth while holding its map. We've published our full theory and findings in our open-source project, "The Architecture of a CyberSoul." We believe PCW is a crucial concept for the community to discuss as we move toward AGI. We'd love to hear your thoughts. The link to the full paper is in the first comment below.
2025-10-12T22:38:41
https://www.reddit.com/r/LocalLLaMA/comments/1o52zvy/beyond_token_count_our_research_suggests/
lmxxf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o52zvy
false
null
t3_1o52zvy
/r/LocalLLaMA/comments/1o52zvy/beyond_token_count_our_research_suggests/
false
false
self
25
{'enabled': False, 'images': [{'id': 'GsIPsUfENFHbexUJImVHCUz09P9pUXLZ_Cx_0ImAmJQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GsIPsUfENFHbexUJImVHCUz09P9pUXLZ_Cx_0ImAmJQ.png?width=108&crop=smart&auto=webp&s=7876c16102d8addff75ed52954bf57d1253ab903', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GsIPsUfENFHbexUJImVHCUz09P9pUXLZ_Cx_0ImAmJQ.png?width=216&crop=smart&auto=webp&s=75b189073f16223353c15564198fb6971b377734', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GsIPsUfENFHbexUJImVHCUz09P9pUXLZ_Cx_0ImAmJQ.png?width=320&crop=smart&auto=webp&s=50edf25874823ac2e06716b17019b62279ab3c5a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GsIPsUfENFHbexUJImVHCUz09P9pUXLZ_Cx_0ImAmJQ.png?width=640&crop=smart&auto=webp&s=f4ddc354f5c400553c7778eec1e1ae4d04b8f83a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GsIPsUfENFHbexUJImVHCUz09P9pUXLZ_Cx_0ImAmJQ.png?width=960&crop=smart&auto=webp&s=94551471137f7eb3786cab3f67a7c4cf4204cd40', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GsIPsUfENFHbexUJImVHCUz09P9pUXLZ_Cx_0ImAmJQ.png?width=1080&crop=smart&auto=webp&s=65d7b29cafce8cf7da8bebf7db0671b3a67a74d7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GsIPsUfENFHbexUJImVHCUz09P9pUXLZ_Cx_0ImAmJQ.png?auto=webp&s=9446524142814c7bf0ccfeaf70e6efd613468fad', 'width': 1200}, 'variants': {}}]}
Gemini's Million-Token Window is a Cognitive Mirage. We Found the Real Bottleneck: "Contextual Weight".
1
[removed]
2025-10-12T22:33:20
https://www.reddit.com/r/LocalLLaMA/comments/1o52viw/geminis_milliontoken_window_is_a_cognitive_mirage/
lmxxf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o52viw
false
null
t3_1o52viw
/r/LocalLLaMA/comments/1o52viw/geminis_milliontoken_window_is_a_cognitive_mirage/
false
false
self
1
null
SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs
9
*Recent work shows that, beyond discrete reasoning through explicit chain-of-thought steps, which are limited by the boundaries of natural languages, large language models (LLMs) can also reason continuously in latent space, allowing richer information per step and thereby improving token efficiency. Despite this promise, latent reasoning still faces two challenges, especially in training-free settings: 1) purely latent reasoning broadens the search distribution by maintaining multiple implicit paths, which diffuses probability mass, introduces noise, and impedes convergence to a single high-confidence solution, thereby hurting accuracy; and 2) overthinking persists even without explicit text, wasting tokens and degrading efficiency. To address these issues, we introduce SwiReasoning, a training-free framework for LLM reasoning which features two key innovations: 1) SwiReasoning dynamically switches between explicit and latent reasoning, guided by block-wise confidence estimated from entropy trends in next-token distributions, to balance exploration and exploitation and promote timely convergence. 2) By limiting the maximum number of thinking-block switches, SwiReasoning curbs overthinking and improves token efficiency across varying problem difficulties. On widely used mathematics and STEM benchmarks, SwiReasoning consistently improves average accuracy by 1.5%-2.8% across reasoning LLMs of different model families and scales. Furthermore, under constrained budgets, SwiReasoning improves average token efficiency by 56%-79%, with larger gains as budgets tighten.*
2025-10-12T22:32:10
https://arxiv.org/abs/2510.05069
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1o52ujj
false
null
t3_1o52ujj
/r/LocalLLaMA/comments/1o52ujj/swireasoning_switchthinking_in_latent_and/
false
false
default
9
null
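The block-wise confidence signal the abstract describes - entropy trends in the next-token distribution - can be illustrated with a toy sketch. The function names, window, and threshold below are illustrative choices, not taken from the paper:

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def should_switch_to_explicit(prob_history, window=4, threshold=0.5):
    """Toy version of a block-wise confidence signal: low average entropy
    over the last `window` steps means the model has converged on a
    confident path, a natural point to commit to explicit reasoning."""
    recent = prob_history[-window:]
    avg_h = sum(entropy(p) for p in recent) / len(recent)
    return avg_h < threshold
```

A peaked distribution (one token near probability 1) has entropy near 0; a uniform one over k tokens has entropy ln(k), so the threshold separates confident from exploratory blocks.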
What is your PC/Server/AI Server/Homelab idle power consumption?
27
Hello guys, hope you guys are having a nice day. I was wondering, how much is the power consumption at idle (aka with the PC booted up, with either a model loaded or not but not using it). I will start: * Consumer Board: MSI X670E Carbon * Consumer CPU: AMD Ryzen 9 9900X * 7 GPUs * 5090x2 * 4090x2 * A6000 * 3090x2 * 5 M2 SSDs (via USB to M2 NVME adapters) * 2 SATA SSDs * 7 120mm fans * 4 PSUs: * 1250W Gold * 850W Bronze * 1200W Gold * 700W Gold **Idle power consumption: 240-260W** Also for reference, here in Chile electricity is insanely expensive (US$0.25 per kWh). When using a model on lcpp it uses about 800W. When using a model with exl or vllm, it uses about 1400W. Most of the time I have it powered off as that price accumulates quite a bit. How much is your idle power consumption?
2025-10-12T22:06:18
https://www.reddit.com/r/LocalLLaMA/comments/1o528nk/what_is_your_pcserverai_serverhomelab_idle_power/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o528nk
false
null
t3_1o528nk
/r/LocalLLaMA/comments/1o528nk/what_is_your_pcserverai_serverhomelab_idle_power/
false
false
self
27
null
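For reference, the idle cost quoted above is simple arithmetic: at roughly 250 W idle and US$0.25/kWh, leaving the rig on around the clock comes to about $45/month, which explains keeping it powered off:

```python
def monthly_idle_cost(idle_watts, price_per_kwh, hours_per_day=24, days=30):
    """Electricity cost of leaving a machine idling for a month."""
    kwh = idle_watts / 1000 * hours_per_day * days
    return kwh * price_per_kwh

# e.g. monthly_idle_cost(250, 0.25) for the ~250 W / $0.25 per kWh figures above
```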
Just added my first open source HF model on webcam detection
0
Lmk what I need to change on this for people to use it as needed haha: [https://huggingface.co/highheat4/webcam-detect/tree/main](https://huggingface.co/highheat4/webcam-detect/tree/main)
2025-10-12T21:59:33
https://www.reddit.com/r/LocalLLaMA/comments/1o522ry/just_added_my_first_open_source_hf_model_on/
Affectionate-Pie7868
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o522ry
false
null
t3_1o522ry
/r/LocalLLaMA/comments/1o522ry/just_added_my_first_open_source_hf_model_on/
false
false
self
0
{'enabled': False, 'images': [{'id': 'A6jgBajBWJPPZeHRyuc44WVYYyG3toYYYn2NJbqMK44', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/A6jgBajBWJPPZeHRyuc44WVYYyG3toYYYn2NJbqMK44.png?width=108&crop=smart&auto=webp&s=a39de4579d1a5925804a7f6e9bc7d3971044e92b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/A6jgBajBWJPPZeHRyuc44WVYYyG3toYYYn2NJbqMK44.png?width=216&crop=smart&auto=webp&s=fd57e748d0d1b9a50254be96e60011eb1a7b102b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/A6jgBajBWJPPZeHRyuc44WVYYyG3toYYYn2NJbqMK44.png?width=320&crop=smart&auto=webp&s=6e1d413d2769631f4b98d509790edd494de2ff33', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/A6jgBajBWJPPZeHRyuc44WVYYyG3toYYYn2NJbqMK44.png?width=640&crop=smart&auto=webp&s=7eb40e79df39ad804a2422e9a47bcbf27e43f478', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/A6jgBajBWJPPZeHRyuc44WVYYyG3toYYYn2NJbqMK44.png?width=960&crop=smart&auto=webp&s=8d1ac6930d80a1c90dd822149cde0a89f4ea128c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/A6jgBajBWJPPZeHRyuc44WVYYyG3toYYYn2NJbqMK44.png?width=1080&crop=smart&auto=webp&s=642f4a9b08e521434fba0a068d77c66eb5a8d322', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/A6jgBajBWJPPZeHRyuc44WVYYyG3toYYYn2NJbqMK44.png?auto=webp&s=de741691eeb05ebecf477d7dd07b10392f8bb995', 'width': 1200}, 'variants': {}}]}
Will GPUs fit on PCIe MCIO?
4
This says it has 32 x PCIe 5.0 x8 via MCIO Connectors. What does that mean? Can I fit GPUs in them (even if an adapter is necessary). Also, does anybody know of MBs with lots of PCIe slots that don't require custom order?
2025-10-12T21:15:59
https://www.supermicro.com/en/products/motherboard/x14qbh+
GenLabsAI
supermicro.com
1970-01-01T00:00:00
0
{}
1o510sl
false
null
t3_1o510sl
/r/LocalLLaMA/comments/1o510sl/will_gpus_fit_on_pcie_mcio/
false
false
default
4
null
Benchmarking small models at 4bit quants on Apple Silicon with mlx-lm
33
I ran a bunch of small models at 4bit quants through a few benchmarks locally on my MacBook using \`mlx-lm.evaluate\`. Figured I would share in case anyone else finds it interesting or helpful! https://preview.redd.it/zpl8i0uxsquf1.png?width=1850&format=png&auto=webp&s=b079f8de5bad0208a60600b50ff225f9b5e3371a System Info: Apple M4 Pro, 48gb RAM, 20 core GPU, 14 core CPU
2025-10-12T21:00:16
https://www.reddit.com/r/LocalLLaMA/comments/1o50mfy/benchmarking_small_models_at_4bit_quants_on_apple/
ironwroth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o50mfy
false
null
t3_1o50mfy
/r/LocalLLaMA/comments/1o50mfy/benchmarking_small_models_at_4bit_quants_on_apple/
false
false
https://b.thumbs.redditm…Qv81s2I9zk7Q.jpg
33
null
Complete noob in LLMs
0
I'm a university student with suitable hardware, exploring Large Language Models, specifically RAG. 1. Could you please advise on how to learn LLMs with RAG from the beginning, considering my moderate Python proficiency? 2. Are there any recommended books, courses, or YouTube channels for this purpose? 3. Is freelancing a viable option, perhaps after reaching a certain level of understanding? 4. What are some tips for learning efficiently, ensuring a solid grasp of the fundamental concepts? 5. What are the potential future opportunities in the field of RAG? 6. Approximately how many people are currently working with RAG?
2025-10-12T20:51:40
https://www.reddit.com/r/LocalLLaMA/comments/1o50ekb/complete_noob_in_llms/
Mobile_Bread6664
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o50ekb
false
null
t3_1o50ekb
/r/LocalLLaMA/comments/1o50ekb/complete_noob_in_llms/
false
false
self
0
null
What is the one resource you’d recommend to someone looking to learn how to train and deploy LLMs from scratch?
14
It can be a blog post, Reddit thread, a YouTube video, GitHub notebook or even an actual book. If someone is trying to learn the concepts behind fine-tuning LLMs, like the building blocks of LLMs, and deploying them for inference, what would you suggest?
2025-10-12T20:42:47
https://www.reddit.com/r/LocalLLaMA/comments/1o506dy/what_is_the_one_resource_youd_recommend_to/
SnooMarzipans2470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o506dy
false
null
t3_1o506dy
/r/LocalLLaMA/comments/1o506dy/what_is_the_one_resource_youd_recommend_to/
false
false
self
14
null
Is it possible to use GGUF models without HTTP API an without decoding image input into base64?
2
I want to be able to use GGUF models traditionally - like with the transformers library, where you just send image paths to the model and it directly processes the file, not base64 strings - which can be massive for a 10MB image file, I imagine.
2025-10-12T20:39:09
https://www.reddit.com/r/LocalLLaMA/comments/1o5031d/is_it_possible_to_use_gguf_models_without_http/
cruncherv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o5031d
false
null
t3_1o5031d
/r/LocalLLaMA/comments/1o5031d/is_it_possible_to_use_gguf_models_without_http/
false
false
self
2
null
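On the base64 point above: base64 inflates a payload to 4/3 of its size (before any line breaks), so a 10 MB image becomes roughly 13.3 MB on the wire. The exact encoded size is easy to compute and verify against the standard library:

```python
import base64

def b64_size(n_bytes: int) -> int:
    """Size in bytes of the base64 encoding of an n-byte payload (no newlines).

    base64 emits 4 output characters per 3 input bytes, rounded up.
    """
    return 4 * ((n_bytes + 2) // 3)

# e.g. b64_size(10 * 1024 * 1024) shows the ~33% overhead for a 10 MB image
```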
GLM 4.6 not loading in LM Studio
18
Anyone else getting this? Tried two Unsloth quants q3\_k\_xl & q4\_k\_m
2025-10-12T20:35:54
https://i.redd.it/pi5v59bdrquf1.png
ikkiyikki
i.redd.it
1970-01-01T00:00:00
0
{}
1o5001n
false
null
t3_1o5001n
/r/LocalLLaMA/comments/1o5001n/glm_46_not_loading_in_lm_studio/
false
false
default
18
{'enabled': True, 'images': [{'id': 'pi5v59bdrquf1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/pi5v59bdrquf1.png?width=108&crop=smart&auto=webp&s=0f31ed08cc745730a552de122562351bd04bc62f', 'width': 108}, {'height': 156, 'url': 'https://preview.redd.it/pi5v59bdrquf1.png?width=216&crop=smart&auto=webp&s=24f333b16afb7c981918449ce45f2cd269ba157b', 'width': 216}, {'height': 231, 'url': 'https://preview.redd.it/pi5v59bdrquf1.png?width=320&crop=smart&auto=webp&s=4940475ad0dfb91bee2291ed0bf22f549e90d10b', 'width': 320}], 'source': {'height': 391, 'url': 'https://preview.redd.it/pi5v59bdrquf1.png?auto=webp&s=3b8ad448e3d56e8cf027d6373ec308fcf773c3f6', 'width': 541}, 'variants': {}}]}
Seeking Advice on RAG Chatbot Deployment (Local vs. API)
5
Hello everyone, I am currently working on a school project to develop a **Retrieval-Augmented Generation (RAG) Chatbot** as a standalone Python application. This chatbot is intended to assist students by providing information based **strictly on a set of supplied documents (PDFs)** to prevent hallucinations. # My Requirements: 1. **RAG Capability:** The chatbot must use RAG to ensure all answers are grounded in the provided documents. 2. **Conversation Memory:** It needs to maintain context throughout the conversation (memory) and store the chat history locally (using SQLite or a similar method). 3. **Standalone Distribution:** The final output must be a self-contained executable file (.exe) that students can easily launch on their personal computers without requiring web hosting. # The Core Challenge: The Language Model (LLM) I have successfully mapped out the RAG architecture (using LangChain, ChromaDB, and a GUI framework like Streamlit), but I am struggling with the most suitable choice for the LLM given the constraints: * **Option A: Local Open-Source LLM (e.g., Llama, Phi-3):** * **Goal:** To avoid paid API costs and external dependency. * **Problem:** I am concerned about the **high hardware (HW) requirements**. Most students will be using standard low-spec student laptops, often with limited RAM (e.g., 8GB) and no dedicated GPU. I need advice on the **smallest viable model** that still performs well with RAG and memory, or if this approach is simply unfeasible for low-end hardware. * **Option B: Online API Model (e.g., OpenAI, Gemini):** * **Goal:** Ensure speed and reliable performance regardless of student hardware. * **Problem:** This requires a paid API key. How can I manage this for multiple students? I cannot ask them to each sign up, and distributing a single key is too risky due to potential costs. Are there any **free/unlimited community APIs** or affordable proxy solutions that are reliable for production use with minimal traffic? 
I would greatly appreciate any guidance, especially from those who have experience deploying RAG solutions in low-resource or educational environments. Thank you in advance for your time and expertise!
2025-10-12T19:57:37
https://www.reddit.com/r/LocalLLaMA/comments/1o4z03i/seeking_advice_on_rag_chatbot_deployment_local_vs/
Koaskdoaksd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4z03i
false
null
t3_1o4z03i
/r/LocalLLaMA/comments/1o4z03i/seeking_advice_on_rag_chatbot_deployment_local_vs/
false
false
self
5
null
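The retrieval step of the RAG pipeline described above can be illustrated with a minimal bag-of-words retriever. A real build would use ChromaDB plus an embedding model, as the post plans; this sketch only shows the ranking-and-top-k logic that grounds answers in the supplied documents:

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def retrieve(query, documents, k=2):
    """Rank documents by shared-term count with the query; return the top k.

    Stand-in for vector search: same interface, much weaker matching.
    """
    q = Counter(tokenize(query))
    scored = []
    for i, doc in enumerate(documents):
        d = Counter(tokenize(doc))
        overlap = sum(min(q[t], d[t]) for t in q)
        scored.append((overlap, i))
    scored.sort(reverse=True)
    return [documents[i] for score, i in scored[:k] if score > 0]
```

The retrieved snippets would then be placed in the LLM prompt, which is what keeps answers grounded in the PDFs rather than hallucinated.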
How and what and can I?
3
I bought a 9060Xt 16GB to play games on and liked it so much I bought a 9070xt-16GB too. Can I now use my small fortune in vram to do LLM things? How might I do that? Are there some resources that work better with ayymd?
2025-10-12T19:27:06
https://www.reddit.com/r/LocalLLaMA/comments/1o4y7yo/how_and_what_and_can_i/
jhenryscott
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4y7yo
false
null
t3_1o4y7yo
/r/LocalLLaMA/comments/1o4y7yo/how_and_what_and_can_i/
false
false
self
3
null
OpenAI AgentKit – how to make an agent ask a few questions before continuing the flow?
0
With the new OpenAI AgentKit / Agents SDK, is it possible to insert an intermediate agent that asks 3 questions to the user (or gather info) before proceeding to the next step of the workflow? Because right now it flies through the entire flow without pausing for data collection.
2025-10-12T19:25:52
https://www.reddit.com/r/LocalLLaMA/comments/1o4y6tb/openai_agentkit_how_to_make_an_agent_ask_a_few/
JotaVitorJR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4y6tb
false
null
t3_1o4y6tb
/r/LocalLLaMA/comments/1o4y6tb/openai_agentkit_how_to_make_an_agent_ask_a_few/
false
false
self
0
null
Chinny — the unlimited, on-device voice cloner — just dropped on iOS! (macOS version pending review 👀)
12
Chinny is an on-device voice cloning app for iOS and macOS, powered by a SoTA AI voice-cloning model (Chatterbox). It runs fully offline with no information leaving your device. **No ads. No hidden fees. No usage restrictions. Free forever.** Use it to have a familiar voice read bedtime stories, record personal audiobooks, add voiceovers for videos, generate podcast narration, create game or film temp lines, or provide accessible read-aloud for long articles—all privately on your device. You can try the iOS version at [https://apps.apple.com/us/app/chinny-offline-voice-cloner/id6753816417](https://apps.apple.com/us/app/chinny-offline-voice-cloner/id6753816417) PS: I've anonymized the voice source data to comply with App Store policies All I need is feedback! https://reddit.com/link/1o4y3b7/video/0wr38dudequf1/player
2025-10-12T19:22:09
https://www.reddit.com/r/LocalLLaMA/comments/1o4y3b7/chinny_the_unlimited_ondevice_voice_cloner_just/
Acceptable-Cycle4645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4y3b7
false
null
t3_1o4y3b7
/r/LocalLLaMA/comments/1o4y3b7/chinny_the_unlimited_ondevice_voice_cloner_just/
false
false
self
12
{'enabled': False, 'images': [{'id': '-din3CBdAhnh7fQntfZy6AAVxKph2lztsoXCbDs1TbA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-din3CBdAhnh7fQntfZy6AAVxKph2lztsoXCbDs1TbA.png?width=108&crop=smart&auto=webp&s=405e30dc0454fc1fff046e8edb5bfa83ccfab591', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/-din3CBdAhnh7fQntfZy6AAVxKph2lztsoXCbDs1TbA.png?width=216&crop=smart&auto=webp&s=cca3459f07a6746911b4b8e9baeef74233aae363', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-din3CBdAhnh7fQntfZy6AAVxKph2lztsoXCbDs1TbA.png?width=320&crop=smart&auto=webp&s=b3f4d1c2faa5a014967643bb834e9209777cb391', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/-din3CBdAhnh7fQntfZy6AAVxKph2lztsoXCbDs1TbA.png?width=640&crop=smart&auto=webp&s=ab3b9e39a31c54f13524ab8843d0c5dfa02e46fc', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/-din3CBdAhnh7fQntfZy6AAVxKph2lztsoXCbDs1TbA.png?width=960&crop=smart&auto=webp&s=6e52a13d011be12c437cd541ae559ec0027cbcbc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/-din3CBdAhnh7fQntfZy6AAVxKph2lztsoXCbDs1TbA.png?width=1080&crop=smart&auto=webp&s=97f3629ddad0e966627dd896f157087dfde29381', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/-din3CBdAhnh7fQntfZy6AAVxKph2lztsoXCbDs1TbA.png?auto=webp&s=66ae2564df0a4cd7f80cb64863d35c6ccc15f90e', 'width': 1200}, 'variants': {}}]}
PAYG or Subscription?
1
[removed]
2025-10-12T19:11:16
https://www.reddit.com/r/LocalLLaMA/comments/1o4xt7h/payg_or_subscription/
Effective-Front-7183
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4xt7h
false
null
t3_1o4xt7h
/r/LocalLLaMA/comments/1o4xt7h/payg_or_subscription/
false
false
self
1
null
Open source streaming STT (Parakeet + Silero + Pipecat Smart Turn)
28
Made this STT streaming server as a piece of a larger project I'm working on. Parakeet is pretty darn fast! Also supports batch inference (because I had a business need for it). Demo above running on a 3090 locally then also showing what the deployed version can do on an L40s. Also end-of-turn detection is pretty decent. You can see the EOT probabilities drop significantly during my Uhhs and Umms. STT code found here: [https://github.com/gabber-dev/gabber/tree/main/services/gabber-stt](https://github.com/gabber-dev/gabber/tree/main/services/gabber-stt)
2025-10-12T19:02:09
https://v.redd.it/9jef0moaaquf1
Adventurous-Top209
/r/LocalLLaMA/comments/1o4xkr6/open_source_streaming_stt_parakeet_silero_pipecat/
1970-01-01T00:00:00
0
{}
1o4xkr6
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9jef0moaaquf1/DASHPlaylist.mpd?a=1763017338%2CYWNlYmI1NDY5YjVkZmM5NDU2MjZkMWQyNGY1NzY0MDYwNDZjNWNhY2IzMjJjN2UyYmU5NjBiNzk3MjA3NzZjZQ%3D%3D&v=1&f=sd', 'duration': 131, 'fallback_url': 'https://v.redd.it/9jef0moaaquf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/9jef0moaaquf1/HLSPlaylist.m3u8?a=1763017338%2CZWEyNzZjMjFhM2ZmYmI0YzkxYWRjM2JkZGY3OTNiYzM2NjJiZTU1NDI2ZGNmODVmODljMWFkNjUyY2EzNzFkZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9jef0moaaquf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1o4xkr6
/r/LocalLLaMA/comments/1o4xkr6/open_source_streaming_stt_parakeet_silero_pipecat/
false
false
https://external-preview…74ff9708160c6f13
28
{'enabled': False, 'images': [{'id': 'djAxcDdvb2FhcXVmMbhst-7tsz73ZUCHaN6PiIo0aAbxtcYEpNwxJfvHepHJ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/djAxcDdvb2FhcXVmMbhst-7tsz73ZUCHaN6PiIo0aAbxtcYEpNwxJfvHepHJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=2443b536a71350c7e73bdd37f7858670aa7d5021', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/djAxcDdvb2FhcXVmMbhst-7tsz73ZUCHaN6PiIo0aAbxtcYEpNwxJfvHepHJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=3083f0fa04d497db802cdbee142d6722b617864f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/djAxcDdvb2FhcXVmMbhst-7tsz73ZUCHaN6PiIo0aAbxtcYEpNwxJfvHepHJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=00ac52e0d9d9539da6febf9c9b9a1ca943ff2cb3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/djAxcDdvb2FhcXVmMbhst-7tsz73ZUCHaN6PiIo0aAbxtcYEpNwxJfvHepHJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=acf7e5e329fd309fd9ff018eb12430244fccb232', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/djAxcDdvb2FhcXVmMbhst-7tsz73ZUCHaN6PiIo0aAbxtcYEpNwxJfvHepHJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=13a55bd05c7018d9e98acb6c4956692e53a89403', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/djAxcDdvb2FhcXVmMbhst-7tsz73ZUCHaN6PiIo0aAbxtcYEpNwxJfvHepHJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4ce5445121a0bf25840cbe9d1470a807a08769ba', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/djAxcDdvb2FhcXVmMbhst-7tsz73ZUCHaN6PiIo0aAbxtcYEpNwxJfvHepHJ.png?format=pjpg&auto=webp&s=5eaa650b1640f80f567bffeb406daff10420d09d', 'width': 1920}, 'variants': {}}]}
Chat with Krishna. Seek guidance from Lord Krishna. Available for free. Ask your questions now (uncensored)
0
2025-10-12T18:59:51
https://huggingface.co/spaces/Kyadav01/krishna-chat
balianone
huggingface.co
1970-01-01T00:00:00
0
{}
1o4xiic
false
null
t3_1o4xiic
/r/LocalLLaMA/comments/1o4xiic/chat_with_krishna_seek_guidance_from_lord_krishna/
false
false
https://external-preview…0003ad640586af71
0
{'enabled': False, 'images': [{'id': 'wXEaoglYZ_IlD_LvtNipKfQaAA5oGUqhIK8LygdE6Tw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wXEaoglYZ_IlD_LvtNipKfQaAA5oGUqhIK8LygdE6Tw.png?width=108&crop=smart&auto=webp&s=1998069337d0b703b8d3bd77209fc98f3784ec6b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wXEaoglYZ_IlD_LvtNipKfQaAA5oGUqhIK8LygdE6Tw.png?width=216&crop=smart&auto=webp&s=d0946c17582890ea92df655bf32b7b1279b04b00', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wXEaoglYZ_IlD_LvtNipKfQaAA5oGUqhIK8LygdE6Tw.png?width=320&crop=smart&auto=webp&s=0f9986439c6dcc43ca49925d613aa4768b3b8cd8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wXEaoglYZ_IlD_LvtNipKfQaAA5oGUqhIK8LygdE6Tw.png?width=640&crop=smart&auto=webp&s=15a3ebca4d84f0bf6e7ae76843264eb8902967a6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wXEaoglYZ_IlD_LvtNipKfQaAA5oGUqhIK8LygdE6Tw.png?width=960&crop=smart&auto=webp&s=fb082e579de74a7623cb9d4a3186284f339810ba', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wXEaoglYZ_IlD_LvtNipKfQaAA5oGUqhIK8LygdE6Tw.png?width=1080&crop=smart&auto=webp&s=9bd7de434950710343db01f1e4185fae3f339319', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wXEaoglYZ_IlD_LvtNipKfQaAA5oGUqhIK8LygdE6Tw.png?auto=webp&s=8918b49434ab58ca6c526ce3f02f91ecd85bfaf7', 'width': 1200}, 'variants': {}}]}
Effectiveness of Gemini for Sentence Similarity
11
I want to test the similarity between several thousand sentences and find which ones are the most similar to each other. I am currently looking at the models on hugging face and it seems that [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) remains the most popular option. It seems to be pretty fast for my needs and relatively accurate. I've also seen the [embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) model from Google (built using the technology for Gemini) which seems to be promising and released very recently. Is there a leaderboard to determine which ones are the most accurate?
2025-10-12T18:58:59
https://www.reddit.com/r/LocalLLaMA/comments/1o4xhot/effectiveness_of_gemini_for_sentence_similarity/
watts-going-on
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4xhot
false
null
t3_1o4xhot
/r/LocalLLaMA/comments/1o4xhot/effectiveness_of_gemini_for_sentence_similarity/
false
false
self
11
{'enabled': False, 'images': [{'id': '-XDMfBHzOBT5Ue9lCCNe7Mp07vDBzSi868ohS0oGXNY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-XDMfBHzOBT5Ue9lCCNe7Mp07vDBzSi868ohS0oGXNY.png?width=108&crop=smart&auto=webp&s=ea6dbb0e89e1dd854555586622b103998ddc0464', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-XDMfBHzOBT5Ue9lCCNe7Mp07vDBzSi868ohS0oGXNY.png?width=216&crop=smart&auto=webp&s=6749bc91b590555544a140a84ebf54157cbc11ec', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-XDMfBHzOBT5Ue9lCCNe7Mp07vDBzSi868ohS0oGXNY.png?width=320&crop=smart&auto=webp&s=cb696cedad8ae65c7a5333e4595c7bbd1d4e3ac0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-XDMfBHzOBT5Ue9lCCNe7Mp07vDBzSi868ohS0oGXNY.png?width=640&crop=smart&auto=webp&s=ded6f0ed2fc52fbda6cfdde2c6a09b09532240d0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-XDMfBHzOBT5Ue9lCCNe7Mp07vDBzSi868ohS0oGXNY.png?width=960&crop=smart&auto=webp&s=2da203739d5d02ef15d08c21a1ea1f328bbe5f60', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-XDMfBHzOBT5Ue9lCCNe7Mp07vDBzSi868ohS0oGXNY.png?width=1080&crop=smart&auto=webp&s=c25786d93ece437c5725a319324df68bf4224e90', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-XDMfBHzOBT5Ue9lCCNe7Mp07vDBzSi868ohS0oGXNY.png?auto=webp&s=589e63442e934c3eb5c6da08241f115ebe7df312', 'width': 1200}, 'variants': {}}]}
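The pairwise comparison described above reduces to cosine similarity over embedding vectors, and a brute-force O(n²) scan is workable at a few thousand sentences. This sketch assumes you already have the embeddings (e.g. from all-MiniLM-L6-v2); the tiny 2-d vectors in the example are placeholders:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar_pair(embeddings):
    """Brute-force scan over all pairs; returns ((i, j), score)."""
    best, pair = -1.0, (None, None)
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            s = cosine(embeddings[i], embeddings[j])
            if s > best:
                best, pair = s, (i, j)
    return pair, best
```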
More LLM related questions, this time llama.cpp
0
Ok, I've reached the end of my rope here. Why in the ever living hell is everything either polished as hell, or "we had to do it the hard way, so you have to"? I've been trying, unsuccessfully, to start my journey into getting off Ollama. And while I understand Ollama to a point, llama.cpp I swear is hard or difficult on purpose. I swear it's more of the damned Linux mantra of "it was hard for me, so I'm making this hard for you". It's literally asinine. And why is it that there is no direct server GUI made for llama.cpp? There are tons of chat UIs that can run it, but nothing that is just flat out a server GUI. What happened to separation of concerns? What happened to making one thing, making it do it well, and moving on from there? All I want is a decent UI for llama.cpp that isn't also the chat interface. I already have a good chat interface with OpenWebUI. Yes, I know it can set settings for llama.cpp and Ollama. I don't want that. I use OpenWebUI for chat, and want a server GUI for the server. Not CLI. Not some half-built bullshit that I have to continue to develop. Why is it I always end up having to build what I want when I'm VERY late to the game? This gap in the community should have been fixed by now.
2025-10-12T18:39:04
https://www.reddit.com/r/LocalLLaMA/comments/1o4wz1s/more_llm_related_questions_this_time_llamacpp/
Savantskie1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4wz1s
false
null
t3_1o4wz1s
/r/LocalLLaMA/comments/1o4wz1s/more_llm_related_questions_this_time_llamacpp/
false
false
self
0
null
GLM 4.6 UD-Q6_K_XL running llama.cpp RPC across two nodes and 12 AMD MI50 32GB
69
Finally got another six MI50 32gb. Removed my old Nvidia Titan Vs in my 2nd HP DL580 Gen9. Here we go. **running on secondary host:** ~/llama.cpp.20251012/build/bin/rpc-server --host 0.0.0.0 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 6 ROCm devices: Device 0: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64 Device 1: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64 Device 2: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64 Device 3: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64 Device 4: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64 Device 5: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! WARNING: Host ('0.0.0.0') is != '127.0.0.1' Never expose the RPC server to an open network! This is an experimental feature and is not secure! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Starting RPC server v3.0.0 endpoint : 0.0.0.0:50052 local cache : n/a Devices: ROCm0: AMD Radeon Graphics (32752 MiB, 32694 MiB free) ROCm1: AMD Radeon Graphics (32752 MiB, 32694 MiB free) ROCm2: AMD Radeon Graphics (32752 MiB, 32694 MiB free) ROCm3: AMD Radeon Graphics (32752 MiB, 32694 MiB free) ROCm4: AMD Radeon Graphics (32752 MiB, 32694 MiB free) ROCm5: AMD Radeon Graphics (32752 MiB, 32694 MiB free) Accepted client connection **Then on primary host:** ~/llama.cpp/build/bin/llama-server --model ~//models/GLM-4.6-UD-Q6_K_XL-00001-of-00006.gguf --cache-type-k q8_0 --cache-type-v q8_0 --n-gpu-layers 94 --temp 0.6 --ctx-size 131072 --host 0.0.0.0 --rpc 192.168.1.xxx:50052 --alias GLM-4.6_RPC **Observations (vs Single Node 6x MI50 32gb with GLM 4.6 Q3\_K\_S):** * Prompt processing about the same on smaller prompts. 
62-65 tok/s * Text generation 7.5 tok/s vs 8.5 tok/s, **UD-Q6\_K\_XL** vs **Q3\_K\_S** * Each server idles \~350W. Inference causes 1-2 GPUs to round robin across 12 GPUs with 100-170W power draw vs the rest (10-11 GPUs) @ \~20W. **Prior experiment:** [https://www.reddit.com/r/LocalLLaMA/comments/1nxv7x6/performance\_of\_glm\_46\_q3\_k\_s\_on\_6x\_mi50/](https://www.reddit.com/r/LocalLLaMA/comments/1nxv7x6/performance_of_glm_46_q3_k_s_on_6x_mi50/) https://preview.redd.it/45wsc8fe5quf1.png?width=3247&format=png&auto=webp&s=8fa596c0609bda881db13ad569f60a0789cc11da
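The power and throughput figures above imply a rough energy cost per generated token. A back-of-envelope sketch, treating the combined draw as roughly the two servers' ~350 W idle each (so a lower bound, since the 1-2 active GPUs add 100-170 W on top):

```python
# Back-of-envelope energy per generated token from the figures above.
# Assumes combined draw stays near the two servers' ~350 W idle each;
# the active GPUs add 100-170 W on top, so this is a lower bound.
watts = 2 * 350
tok_per_s = 7.5  # UD-Q6_K_XL text-generation rate reported above
joules_per_token = watts / tok_per_s
print(round(joules_per_token, 1))  # ~93 J per token
```

At ~93 J/token, a 1000-token response costs on the order of 0.026 kWh before counting prompt processing.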
2025-10-12T18:31:28
https://www.reddit.com/r/LocalLLaMA/comments/1o4wruz/glm_46_udq6_k_xl_running_llamacpp_rpc_across_two/
MachineZer0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4wruz
false
null
t3_1o4wruz
/r/LocalLLaMA/comments/1o4wruz/glm_46_udq6_k_xl_running_llamacpp_rpc_across_two/
false
false
https://b.thumbs.redditm…MPgt4enseQKI.jpg
69
null
Stanford Researchers Released AgentFlow: Flow-GRPO algorithm. Outperforming 200B GPT-4o with a 7B model! Explore the code & try the demo
413
2025-10-12T18:18:58
https://huggingface.co/spaces/AgentFlow/agentflow
balianone
huggingface.co
1970-01-01T00:00:00
0
{}
1o4wg6q
false
null
t3_1o4wg6q
/r/LocalLLaMA/comments/1o4wg6q/stanford_researchers_released_agentflow_flowgrpo/
false
false
default
413
{'enabled': False, 'images': [{'id': 'WYNRuaBJiIIoNSwO7SSTGPP2ITAUNljSMCnTROkhRdg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WYNRuaBJiIIoNSwO7SSTGPP2ITAUNljSMCnTROkhRdg.png?width=108&crop=smart&auto=webp&s=8ad968a676d8b73ad7fb70ba01b0000c2c7d5bd1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WYNRuaBJiIIoNSwO7SSTGPP2ITAUNljSMCnTROkhRdg.png?width=216&crop=smart&auto=webp&s=e56641eabb578d022c5d1a27994df767370cd76e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WYNRuaBJiIIoNSwO7SSTGPP2ITAUNljSMCnTROkhRdg.png?width=320&crop=smart&auto=webp&s=23cba72e0254800900a41686957d8a7bd2400097', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WYNRuaBJiIIoNSwO7SSTGPP2ITAUNljSMCnTROkhRdg.png?width=640&crop=smart&auto=webp&s=11110ecc44cc993c095ca5cc3a864cd7384ecf18', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WYNRuaBJiIIoNSwO7SSTGPP2ITAUNljSMCnTROkhRdg.png?width=960&crop=smart&auto=webp&s=d0c7ff3b18b5856e4f684fad2884730db0a115f7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WYNRuaBJiIIoNSwO7SSTGPP2ITAUNljSMCnTROkhRdg.png?width=1080&crop=smart&auto=webp&s=8ced714a2e085ecc5220f3376f65366f189860db', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WYNRuaBJiIIoNSwO7SSTGPP2ITAUNljSMCnTROkhRdg.png?auto=webp&s=e8c12a1bbbc3d23bd874759ba0501faa59fd14ac', 'width': 1200}, 'variants': {}}]}
Best smaller model as base for fine tuning SCAD?
7
Hi, my idea is to compress many examples of working SCAD code into a smaller, local, specialized LLM, mostly because I don't want to pay closed-source model providers to guess with me. I was thinking about the smaller Qwen 3 models for turning a technical description of an object into SCAD code, or does GLM have some usable small ones as well? Which would you use?
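For a fine-tune like this, the SCAD examples usually end up as instruction/output pairs in a JSONL file. A minimal sketch of one such record; the field names and the example geometry are my own assumptions, not a required schema:

```python
import json

# One hypothetical instruction-tuning record pairing a technical
# description with the OpenSCAD code that realizes it.
record = {
    "instruction": "A 20 mm cube with a centered 5 mm diameter hole through it.",
    "output": (
        "difference() {\n"
        "  cube(20, center=true);\n"
        "  cylinder(h=22, d=5, center=true, $fn=64);\n"
        "}"
    ),
}

# Each record becomes one line of the JSONL training file.
print(json.dumps(record))
```

Pairing each snippet with a precise dimensional description, rather than a vague caption, is what makes the "technical description in, SCAD out" behavior trainable.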
2025-10-12T18:16:37
https://www.reddit.com/r/LocalLLaMA/comments/1o4wdzo/best_smaller_model_as_base_for_fine_tuning_scad/
ComprehensiveBird317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4wdzo
false
null
t3_1o4wdzo
/r/LocalLLaMA/comments/1o4wdzo/best_smaller_model_as_base_for_fine_tuning_scad/
false
false
self
7
null
Announcing Llamazing: Your Ollama and ComfyUI server on IOS!
4
Llamazing represents a year of development focused on a clear mission: democratizing access to high‑quality AI from self‑hosted servers on your mobile devices. While AI is advancing rapidly in all areas, its practical adoption still faces significant barriers to accessibility and simplicity, forcing users who seek everyday ease and use in any situation to look for solutions that require expensive monthly subscriptions or complex technical setups that deter ordinary users. Llamazing fills this gap by seamlessly and elegantly integrating remote AI servers into the user’s workflow. Developed from the start with a focus on simplicity and user experience, this is the first app on the App Store with this technical complexity and accessibility motivation. More than just an AI client, Llamazing is a bridge between the power of self‑hosted models and the practicality users expect from a modern mobile app. # Why it’s worth it **Decision Assistant**   It is a tool similar to tool‑calling, but adapted to work better in the iOS and app context; it can analyze your intent and automatically choose the best tool. When you send an image with text, it decides whether it’s a question, an edit, or image creation. When needed, triggers ComfyUI or searches the web, among other functions. You converse naturally and the app handles the technical flow. **PDFs with Embedding Models**   Upload a PDF and ask questions about its content. The app can use embedding models to index the document and retrieve relevant passages. It works with long documents, maintaining precise context and text‑based answers. **Integration with ComfyUI**   Create and edit images directly in the chat in a way similar to large chatbot companies! The app detects when you want to generate or modify images/videos and automatically runs workflows you imported via the ComfyUI API. You describe what you want and receive the result integrated into the conversation! 
It greatly simplifies the flow for those who don’t want to constantly deal with workflow complexities, etc. **Multiple simultaneous servers**   Configure up to two Ollama servers simultaneously; this is important for some because in the app you can configure different models to perform each task. For people with limited VRAM, running different tasks on different AIs on separate servers can be useful. It has full compatibility with Tailscale. **Web search**   Get real‑time AI information via web search, with a beautiful and optimized interface that includes source citations. **Why it’s different**   It’s not just another Ollama client built to tick boxes and rushed. It’s a platform that integrates advanced self‑hosted AI functions into a cohesive mobile experience that was missing… You can see it working on the website: [https://leodevplace.com/llamazing/](https://leodevplace.com/llamazing/) # Requirements \- iOS 17.0+   \- Ollama Server (local or remote via Tailscale) If you want an app with simplified total control over your local AI tools, with privacy and advanced features in a mobile app, it’s worth trying. Available on the App Store: [https://apps.apple.com/br/app/llamazing/id6742205210](https://apps.apple.com/br/app/llamazing/id6742205210) For those who use it, which features interest you the most? Is there anything you’d like to see added here? # Important notes >No subscriptions or in‑app purchases – the app is a one‑time purchase.   >Not bug‑free – despite extensive testing, the large scope of its features means that this first version may reveal bugs during widespread use, and we are open to feedback and suggestions. >iPad version coming soon – it should arrive next week or the following, depending on App Store approvals, and it will share the same bundle ID as the iOS app, so you won’t need to buy it again.   >Apple Vision Pro support – Vision Pro users can download the iOS version of the app.   
>More languages – additional language packs will be added in the coming weeks.
2025-10-12T18:12:43
https://www.reddit.com/r/LocalLLaMA/comments/1o4wae8/announcing_llamazing_your_ollama_and_comfyui/
mandrak4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4wae8
false
null
t3_1o4wae8
/r/LocalLLaMA/comments/1o4wae8/announcing_llamazing_your_ollama_and_comfyui/
false
false
self
4
null
gpt-oss:120b downloading to M3 Ultra 512gb 16tb
0
Let me know what you'd try out... I'll be happy to try different things.
2025-10-12T18:06:10
https://www.reddit.com/r/LocalLLaMA/comments/1o4w49k/gptoss120b_downloading_to_m3_ultra_512gb_16tb/
YellowBathroomTiles
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4w49k
false
null
t3_1o4w49k
/r/LocalLLaMA/comments/1o4w49k/gptoss120b_downloading_to_m3_ultra_512gb_16tb/
false
false
self
0
null
Llama.cpp Endpoint and how to use it in "Codex Cli"
1
[removed]
2025-10-12T17:59:40
https://www.reddit.com/r/LocalLLaMA/comments/1o4vxrh/llamacpp_endpoint_and_how_to_use_it_in_codex_cli/
Master_Wrongdoer8908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4vxrh
false
null
t3_1o4vxrh
/r/LocalLLaMA/comments/1o4vxrh/llamacpp_endpoint_and_how_to_use_it_in_codex_cli/
false
false
https://b.thumbs.redditm…ISHpcAdPX6VE.jpg
1
null
Convert Hugging Face Safetensors to MediaPipe Task
5
I tried to follow [this](https://ai.google.dev/gemma/docs/conversions/hf-to-mediapipe-task) but I keep getting stuck on step 2. I have a fine-tuned model from HF and I want to turn it into a .task file to use with MediaPipe. Does anyone here know how to do it?
2025-10-12T17:56:19
https://www.reddit.com/r/LocalLLaMA/comments/1o4vul2/convert_hugging_face_safetensors_to_mediapipe_task/
Miserable-Theme-8567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4vul2
false
null
t3_1o4vul2
/r/LocalLLaMA/comments/1o4vul2/convert_hugging_face_safetensors_to_mediapipe_task/
false
false
self
5
{'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=216&crop=smart&auto=webp&s=1812be5c0e49c65e85787f4dbb2922a543943e79', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=320&crop=smart&auto=webp&s=ca7983e470f1e5cbc5edcd5c5e1c7e5b70227953', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=640&crop=smart&auto=webp&s=293ebb5606c7edf7f2570aa914eb4ddb55f1e615', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=960&crop=smart&auto=webp&s=b1bd156ecd3df7024382f9e145cda17bcaf6bc79', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=1080&crop=smart&auto=webp&s=a3b1fd853b19889a23a601c33fae7d2323e8bdb0', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?auto=webp&s=b78731184d9920fa4900b6590e113d2772fa64ed', 'width': 1440}, 'variants': {}}]}
Claudiomiro: How to Achieve 100% Autonomous (Complex) Coding
13
https://preview.redd.it/…ou guys like it!
2025-10-12T17:49:04
https://www.reddit.com/r/LocalLLaMA/comments/1o4vnsw/claudiomiro_how_to_achieve_100_autonomous_complex/
TomatilloPutrid3939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4vnsw
false
null
t3_1o4vnsw
/r/LocalLLaMA/comments/1o4vnsw/claudiomiro_how_to_achieve_100_autonomous_complex/
false
false
https://a.thumbs.redditm…LQB3e8JrL5m0.jpg
13
{'enabled': False, 'images': [{'id': '7LCI8FO2pE1VYmvbvycVSKDluOb5d_iczH7df7yoYs0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7LCI8FO2pE1VYmvbvycVSKDluOb5d_iczH7df7yoYs0.png?width=108&crop=smart&auto=webp&s=e6191df2601f6e6b8fc3be9400184278265738f7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7LCI8FO2pE1VYmvbvycVSKDluOb5d_iczH7df7yoYs0.png?width=216&crop=smart&auto=webp&s=6b9421208234591cf5ec8799c0963adb443f9bcb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7LCI8FO2pE1VYmvbvycVSKDluOb5d_iczH7df7yoYs0.png?width=320&crop=smart&auto=webp&s=1649a5a3f070c66a5752d1c1f97a941b3988efb7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7LCI8FO2pE1VYmvbvycVSKDluOb5d_iczH7df7yoYs0.png?width=640&crop=smart&auto=webp&s=46a3e5e596daedec77293906b07320c58cc2378b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7LCI8FO2pE1VYmvbvycVSKDluOb5d_iczH7df7yoYs0.png?width=960&crop=smart&auto=webp&s=0a116f00397460506b14295f8501315405baaddf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7LCI8FO2pE1VYmvbvycVSKDluOb5d_iczH7df7yoYs0.png?width=1080&crop=smart&auto=webp&s=db3a9bc0107175653dd3cf6b65ea97940a86bd8f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7LCI8FO2pE1VYmvbvycVSKDluOb5d_iczH7df7yoYs0.png?auto=webp&s=008ddd0d61d2bdaa6923bc1888550b58c63bfb9b', 'width': 1200}, 'variants': {}}]}
New Unhinged NSFW Reasoning Model - Satyr-V0.1-4B
1
[removed]
2025-10-12T17:37:26
https://huggingface.co/PantheonUnbound/Satyr-V0.1-4B
ThePantheonUnbound
huggingface.co
1970-01-01T00:00:00
0
{}
1o4vcr1
false
null
t3_1o4vcr1
/r/LocalLLaMA/comments/1o4vcr1/new_unhinged_nsfw_reasoning_model_satyrv014b/
false
false
nsfw
1
null
New Unhinged NSFW Reasoning Model - Satyr-V0.1-4B
309
This version is an unpredictable experiment and may produce vulgar, explicit, or graphic content. Please use it at your own risk. More multifaceted versions will be released soon.
2025-10-12T17:32:33
https://huggingface.co/PantheonUnbound/Satyr-V0.1-4B
ThePantheonUnbound
huggingface.co
1970-01-01T00:00:00
0
{}
1o4v880
false
null
t3_1o4v880
/r/LocalLLaMA/comments/1o4v880/new_unhinged_nsfw_reasoning_model_satyrv014b/
false
false
nsfw
309
{'enabled': False, 'images': [{'id': '0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=108&crop=smart&auto=webp&s=c7aa40056cde18b900cf375cab61dd4013b9fff0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=216&crop=smart&auto=webp&s=eafc41c35299a22a42dd03d8b50f88b2ca752130', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=320&crop=smart&auto=webp&s=3c9e2229d4bc9eb8dac4728e9a6cc35bda3ef0e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=640&crop=smart&auto=webp&s=bfc539dc365ae90260f725769798b24982ed4c0f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=960&crop=smart&auto=webp&s=31021dce6eaa6cae2cc2c4589e9fc8274a1b84a9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=1080&crop=smart&auto=webp&s=2facc27e19bc70f44620725120016d0a6675cd65', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?auto=webp&s=53603197787db9e7076f631c512f65de967a47a3', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fa057671aa88df0d3ba68d3a89ef69ae820902e9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3c392335755e3eb8f199bf3790f30e674b58a8d9', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=deef1ffeb189c7ce665bd5f2103b440f223dada1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d616943b33575de439193da09b4ff6d59d8bd315', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=479695fe87d0027ce4321cd62c42c64e43edda75', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=4198b9fb4edb73ab266e600c516b5930f8502c5b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?blur=40&format=pjpg&auto=webp&s=7d56fddc7ff46cede3921b2f53cde42217a9c136', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fa057671aa88df0d3ba68d3a89ef69ae820902e9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3c392335755e3eb8f199bf3790f30e674b58a8d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=deef1ffeb189c7ce665bd5f2103b440f223dada1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d616943b33575de439193da09b4ff6d59d8bd315', 'width': 640}, {'height': 518, 'url': 
'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=479695fe87d0027ce4321cd62c42c64e43edda75', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=4198b9fb4edb73ab266e600c516b5930f8502c5b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0drKGEWWSGTLMpR6iX475Euc9NE0hfsFNSvVWT25MtM.png?blur=40&format=pjpg&auto=webp&s=7d56fddc7ff46cede3921b2f53cde42217a9c136', 'width': 1200}}}}]}
Trouble Finetuning model Using LORA for llama.cpp.
5
Hello, I have been at this for many hours. My goal is to fine-tune Llama-3.1-8B with my own data using LoRA. I have tried Unsloth's Google Colab, and it works in there; the inference in the Colab is exactly what I'm looking for. However, after many hours I cannot convert it to any kind of GGUF or model that works on llama.cpp. I used Unsloth's built-in llama.cpp GGUF converter, downloaded the result, and tried it. Maybe I just need to change the way llama-cli/server handles the prompt, because inferencing this GGUF in the llama-server GUI results in sometimes-infinite generation of garbage like: `hello, how can i help you?` `<|im_start|>user` `can you help me with a project?` `<|im_start|>assistant` `yes, i can assist you with any type of project!` `<|im_start|>` This often goes on forever and sometimes doesn't even refer to the prompt. I have tried many other solutions. I downloaded the LoRA adapter with the safetensors and tried to convert it, but there are errors about files like "config.json" or "tokenizer.model". The LoRA model only has the following files: `adapter_model.safetensors gooch_data.jsonl tokenizer.json adapter_config.json config.json special_tokens_map.json tokenizer_config.json` Now there are a number of tools in llama.cpp, such as llama-export-lora and convert\_lora\_to\_gguf.py. I have tried all of these with the above LoRA adapter and it always fails, sometimes due to the shape of some weights/tensors, other times because of missing files. I have seen llama-finetune.exe, but there seems to be little documentation on it. I'm running a GTX 1080 Ti, so there are some limitations to what I can do locally. This is a long message, but I really don't know what to do. Any help would be very much appreciated.
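For what it's worth, raw `<|im_start|>` tokens in the output usually point to a chat-template mismatch rather than a broken conversion: the training data used ChatML markers, while Llama 3.1's own template uses `<|start_header_id|>`/`<|eot_id|>`, so the server neither renders nor stops on the ChatML tokens. A sketch of the Llama 3 template for comparison against what the fine-tune actually saw (worth checking alongside llama-server's `--chat-template` option):

```python
def llama3_prompt(messages):
    """Render messages with the Llama 3.x chat template
    (special tokens per Meta's published prompt format)."""
    out = "<|begin_of_text|>"
    for m in messages:
        out += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open the assistant turn so generation continues from here.
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

print(llama3_prompt([{"role": "user", "content": "Hi"}]))
```

If the LoRA data was formatted with `<|im_start|>`/`<|im_end|>` instead, the model learned tokens the server never emits as stops, which matches the runaway output described above.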
2025-10-12T17:19:54
https://www.reddit.com/r/LocalLLaMA/comments/1o4uwey/trouble_finetuning_model_using_lora_for_llamacpp/
Numerous_Yard_5267
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4uwey
false
null
t3_1o4uwey
/r/LocalLLaMA/comments/1o4uwey/trouble_finetuning_model_using_lora_for_llamacpp/
false
false
self
5
null
Simple task that local models seem to fail on
0
Prompt: I want code to look up a BBFC rating. So if I give it Conclave, I want it to return its rating. I have yet to find a model that gives me code that works. I wonder why that is?
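One likely reason models fail here: the BBFC has no public ratings API, so any generated code has to scrape the website and tends to hallucinate endpoints or stale markup. A sketch of the parsing half only, tested against a canned snippet; the `class="rating"` markup is an assumption for illustration, not the real BBFC page structure:

```python
import re

def extract_rating(html):
    """Pull a BBFC age rating out of an HTML fragment.
    The class="rating" markup is assumed; the real page may differ."""
    m = re.search(r'class="rating"[^>]*>\s*(U|PG|12A?|15|18|R18)\s*<', html)
    return m.group(1) if m else None

snippet = '<div><span class="rating">12A</span></div>'
print(extract_rating(snippet))  # 12A
```

The fetching half is the fragile part: any working solution needs to inspect the live search page's actual requests, which is exactly what an LLM can't do from its training data alone.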
2025-10-12T17:10:41
https://www.reddit.com/r/LocalLLaMA/comments/1o4unzu/simple_task_that_local_models_seem_to_fail_on/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4unzu
false
null
t3_1o4unzu
/r/LocalLLaMA/comments/1o4unzu/simple_task_that_local_models_seem_to_fail_on/
false
false
self
0
null
Training Llama3.2:3b on my WhatsApp chats with my wife
223
Hi all, So my wife and I have been dating since 2018. ALL our chats are on WhatsApp. I am an LLM noob but I wanted to export it as a txt. And then feed it into an LLM so I could ask questions like: - who has said I love you more? - who apologises more? - what was discussed during our Japan trip? - how many times did we fight in July 2023? - who is more sarcastic in 2025? - list all the people we’ve talked about Etc So far - the idea was to chunk them and store them in a vector DB. And then use llama to interact with it. But the results have been quite horrible. Temp - 0.1 to 0.5, k=3 to 25. Broke the chat into chunks of 4000 with overlap 100 Any better ideas out there? Would love to hear! And if it works I could share the ingestion script! 🙇
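For the aggregate questions ("who said I love you more", fight counts per month), plain parsing and counting over the exported .txt will be far more reliable than vector retrieval, since RAG with k=3-25 chunks never sees the whole history at once. A sketch, assuming the common `DD/MM/YYYY, HH:MM - Sender: message` export format (the exact pattern varies by locale and phone settings):

```python
import re
from collections import Counter

# Matches lines like "12/03/2021, 21:15 - Alice: I love you".
# The date/time layout is an assumption; adjust for your locale.
LINE_RE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2} - ([^:]+): (.*)$")

def count_phrase(chat_text, phrase):
    """Count how many messages from each sender contain `phrase`."""
    counts = Counter()
    for line in chat_text.splitlines():
        m = LINE_RE.match(line)
        if m and phrase.lower() in m.group(2).lower():
            counts[m.group(1)] += 1
    return counts

sample = (
    "12/03/2021, 21:15 - Alice: I love you\n"
    "12/03/2021, 21:16 - Bob: love you too\n"
    "13/03/2021, 09:01 - Alice: I love you so much\n"
)
print(count_phrase(sample, "love you"))
```

The open-ended questions ("what was discussed during our Japan trip?") still suit the vector-DB approach; splitting the two query types is what makes both work.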
2025-10-12T16:56:53
https://www.reddit.com/r/LocalLLaMA/comments/1o4uagn/traning_llama323b_on_my_whatsapp_chats_with_wife/
jayjay_1996
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4uagn
false
null
t3_1o4uagn
/r/LocalLLaMA/comments/1o4uagn/traning_llama323b_on_my_whatsapp_chats_with_wife/
false
false
self
223
null
Check out AirPods Pro 4th Generation Bluetooth Earbuds with Active Noise Cancellation on eBay!
1
[removed]
2025-10-12T16:55:53
https://ebay.us/m/zGHsR4
Own-Recognition4001
ebay.us
1970-01-01T00:00:00
0
{}
1o4u9hq
false
null
t3_1o4u9hq
/r/LocalLLaMA/comments/1o4u9hq/check_out_airpods_pro_4th_generation_bluetooth/
false
false
default
1
null
LLM logic in TF2 meet the spy
1
https://preview.redd.it/…h?v=OR4N5OhcY9s)
2025-10-12T16:50:39
https://www.reddit.com/r/LocalLLaMA/comments/1o4u4ka/llm_logic_in_tf2_meet_the_spy/
Double_Shake_5669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4u4ka
false
{'oembed': {'author_name': 'Valve', 'author_url': 'https://www.youtube.com/@Valve', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/OR4N5OhcY9s?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Meet the Spy"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/OR4N5OhcY9s/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Meet the Spy', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1o4u4ka
/r/LocalLLaMA/comments/1o4u4ka/llm_logic_in_tf2_meet_the_spy/
false
false
https://b.thumbs.redditm…AQK4pA2EP-jM.jpg
1
{'enabled': False, 'images': [{'id': 'VkCApXRkdkguEv2ZkCyy4SgqmR5faUAPdM5nfrKL1GE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/VkCApXRkdkguEv2ZkCyy4SgqmR5faUAPdM5nfrKL1GE.jpeg?width=108&crop=smart&auto=webp&s=288d38d40244171708fb8b805b7da3e3ae3aadc7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/VkCApXRkdkguEv2ZkCyy4SgqmR5faUAPdM5nfrKL1GE.jpeg?width=216&crop=smart&auto=webp&s=154cda770cf456b6149ca649d2bddda22ae93ff0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/VkCApXRkdkguEv2ZkCyy4SgqmR5faUAPdM5nfrKL1GE.jpeg?width=320&crop=smart&auto=webp&s=e08f97cc06fe37a50dff349837aef184ef45386b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/VkCApXRkdkguEv2ZkCyy4SgqmR5faUAPdM5nfrKL1GE.jpeg?auto=webp&s=c0d924210ca6269707c5853fea245b65a79be1dc', 'width': 480}, 'variants': {}}]}
Paper2Video — turn a research paper into a full presentation video (slides, speech, talking head)
19
Multi-agent pipeline (“PaperTalker”) that takes a paper + reference **image/audio** and outputs a polished **presentation video** (Slides → Subtitles → Speech → Cursor → Talking-Head). **MIT** licensed, code + benchmark out. [GitHub](https://github.com/showlab/Paper2Video) * One-command run via `pipeline.py`; set `OPENAI_API_KEY` / `GEMINI_API_KEY` (best: GPT-4.1 or Gemini 2.5). Depends on Hallo2 + Paper2Poster. * Recommended: **A6000 48GB** for end-to-end generation. * Benchmark (**101** paper–video pairs) + metrics: Meta Similarity, PresentArena, PresentQuiz, IP Memory. https://preview.redd.it/b4nd5tfmfpuf1.png?width=835&format=png&auto=webp&s=90777151264bb001c851e64669dcb7b6baae186e
2025-10-12T16:06:41
https://www.reddit.com/r/LocalLLaMA/comments/1o4szf0/paper2video_turn_a_research_paper_into_a_full/
freesysck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4szf0
false
null
t3_1o4szf0
/r/LocalLLaMA/comments/1o4szf0/paper2video_turn_a_research_paper_into_a_full/
false
false
https://b.thumbs.redditm…mrLMaCaz8huA.jpg
19
{'enabled': False, 'images': [{'id': '14WZnO_akEyJj42YD4L-nD--_D0Ep8o7tXNC2svzt48', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/14WZnO_akEyJj42YD4L-nD--_D0Ep8o7tXNC2svzt48.png?width=108&crop=smart&auto=webp&s=f3ad5f410c99694f0dd452c4407d4d21f4ef313e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/14WZnO_akEyJj42YD4L-nD--_D0Ep8o7tXNC2svzt48.png?width=216&crop=smart&auto=webp&s=afcca92fd8f4e6fde28aff633a840c7c680da161', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/14WZnO_akEyJj42YD4L-nD--_D0Ep8o7tXNC2svzt48.png?width=320&crop=smart&auto=webp&s=ae4e20898a13da45289f9a12f28dc92cbab52bb7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/14WZnO_akEyJj42YD4L-nD--_D0Ep8o7tXNC2svzt48.png?width=640&crop=smart&auto=webp&s=604ef91a15e04ea2ea32a47e9edd6696a8318d46', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/14WZnO_akEyJj42YD4L-nD--_D0Ep8o7tXNC2svzt48.png?width=960&crop=smart&auto=webp&s=dea9cdd0ce3dd54f3036fd921a5db2b1dd9f50f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/14WZnO_akEyJj42YD4L-nD--_D0Ep8o7tXNC2svzt48.png?width=1080&crop=smart&auto=webp&s=3f5b428e74279d9ab942d5e5cc1a6277757d36b3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/14WZnO_akEyJj42YD4L-nD--_D0Ep8o7tXNC2svzt48.png?auto=webp&s=3683eccc98529438cd5774a1b65accc36d8751a5', 'width': 1200}, 'variants': {}}]}
Test
1
Testing For Reddit
2025-10-12T16:01:09
https://www.reddit.com/r/LocalLLaMA/comments/1o4su8c/test/
Extra_Cicada8798
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4su8c
false
null
t3_1o4su8c
/r/LocalLLaMA/comments/1o4su8c/test/
false
false
self
1
null
Interview with Z.ai employee, the company behind the GLM models. Talks about competition and attitudes towards AI in China, dynamics and realities of the industry
85
2025-10-12T15:46:43
https://www.youtube.com/watch?v=r0SalROzO38
nelson_moondialu
youtube.com
1970-01-01T00:00:00
0
{}
1o4sgv5
false
{'oembed': {'author_name': 'Manifold', 'author_url': 'https://www.youtube.com/@ManifoldPodcast', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/r0SalROzO38?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="The Global AI Race: Z.ai and The View From Beijing"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/r0SalROzO38/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'The Global AI Race: Z.ai and The View From Beijing', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1o4sgv5
/r/LocalLLaMA/comments/1o4sgv5/interview_with_zai_employee_the_company_behind/
false
false
default
85
{'enabled': False, 'images': [{'id': 'brnT1-CiL694NH_ogGrkVOnYdebEpEkUcEFq_sappRI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/brnT1-CiL694NH_ogGrkVOnYdebEpEkUcEFq_sappRI.jpeg?width=108&crop=smart&auto=webp&s=11f328f131535649c8e7acc88845cda2fa5746a3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/brnT1-CiL694NH_ogGrkVOnYdebEpEkUcEFq_sappRI.jpeg?width=216&crop=smart&auto=webp&s=84369355ce3e9c77b31306d8e1cf6c60e1d44926', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/brnT1-CiL694NH_ogGrkVOnYdebEpEkUcEFq_sappRI.jpeg?width=320&crop=smart&auto=webp&s=d425501d8da63b539406ec0adae7cd9c361ea22d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/brnT1-CiL694NH_ogGrkVOnYdebEpEkUcEFq_sappRI.jpeg?auto=webp&s=4cf319401deefb30a87008763295676a6dc3f1df', 'width': 480}, 'variants': {}}]}
ComfyUI State of the Art
2
Hi all, I'm working as a software dev in a mid-sized company. I work with a dedicated AI team, implementing different business cases. Our current focus is really LLMs with text-generation capabilities; we rarely needed to do image/video. I want to bring more attention to image generation in the company, especially things beyond what the default online models like DALL·E or Sora offer, such as: - style transfer - image restoration - pose transfer. I want to give a presentation on ComfyUI and what you can do with it. I barely know its capabilities, only by looking at this sub, so what I'm looking for is: - state-of-the-art things you can do in Comfy - workflows I can use for a good presentation - tutorials I can follow to implement something interesting So I'm glad for any helpful links that will help me give a better presentation of Comfy and local models.
2025-10-12T15:43:51
https://www.reddit.com/r/LocalLLaMA/comments/1o4se74/comfyui_state_of_the_art/
tillybowman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4se74
false
null
t3_1o4se74
/r/LocalLLaMA/comments/1o4se74/comfyui_state_of_the_art/
false
false
self
2
null
Connect a mining expansion board to regular MB?
0
So I need some help here. I've got this mining rig, with which I have played a little bit. It's fun. I do learn quite a lot with it but it's basically a piece of sh... And not only do the old GPUs on PCIe x1 and the 8GB of RAM suck, but the CPU is even worse. So I was wondering if I can somehow manage to connect that thing to a regular motherboard instead of this plug-in mining board. Hardware is none of my strengths, and this motherboard seems to be plugged into the expansion board, which has a couple of PCIe x1 slots, via a slot that at least looks like a PCIe slot itself? Is someone familiar with something like this? Are there male-to-male PCIe risers that I can use to connect this to a regular machine? I have honestly no idea what I'm even looking at here and would be grateful for any help.
2025-10-12T15:42:21
https://www.reddit.com/gallery/1o4scsi
Njee_
reddit.com
1970-01-01T00:00:00
0
{}
1o4scsi
false
null
t3_1o4scsi
/r/LocalLLaMA/comments/1o4scsi/connect_a_mining_expansion_board_to_regular_mb/
false
false
https://a.thumbs.redditm…uXn8l5Pf7Ey0.jpg
0
null
Deleted Ollama, but it’s still running on my MacBook
24
I'm going crazy. I deleted Ollama a few weeks ago to save my battery since it was draining almost all of it. I thought I had completely removed it, every last bit. Apparently not, because this popped up when I turned my MacBook on. Any idea how to fix this?
2025-10-12T15:00:19
https://www.reddit.com/r/LocalLLaMA/comments/1o4ra1k/deleted_ollama_but_its_still_running_on_my_macbook/
Puzzleheaded-Wafer81
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4ra1k
false
null
t3_1o4ra1k
/r/LocalLLaMA/comments/1o4ra1k/deleted_ollama_but_its_still_running_on_my_macbook/
false
false
self
24
null
Why do chat models loop the same message after a certain number of messages
1
I am trying some chat models with emphasis on roleplay, and something I noticed is that after a certain number of messages back and forth they completely stop responding and keep repeating the same response over and over again, regardless of the input message. They go completely deaf to requests, both within the roleplay and outside of it. * I tried changing \`repeat penalty\`, setting it to 2 in LM Studio, but that didn't work * I tried setting a response token limit, but it doesn't seem to count towards the repeated messages (the response always goes further than the set limit) * I tried making the top-K sampling higher than the default 40, but that completely flipped the narrative into a mashup of words * I increased the context by around 60k (it's now \~256k) and repeated the chat and got the exact same result * I upped the temperature to no avail
2025-10-12T14:33:53
https://www.reddit.com/r/LocalLLaMA/comments/1o4qmh1/why_does_chat_models_loop_the_same_message_after/
UniqueAttourney
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4qmh1
false
null
t3_1o4qmh1
/r/LocalLLaMA/comments/1o4qmh1/why_does_chat_models_loop_the_same_message_after/
false
false
self
1
null
Claude's system prompt length has now exceeded 30k tokens
215
2025-10-12T14:29:52
https://github.com/asgeirtj/system_prompts_leaks/blob/main/Anthropic/claude-4.5-sonnet.md
StableSable
github.com
1970-01-01T00:00:00
0
{}
1o4qix3
false
null
t3_1o4qix3
/r/LocalLLaMA/comments/1o4qix3/claudes_system_prompt_length_has_now_exceeded_30k/
false
false
default
215
{'enabled': False, 'images': [{'id': 'otAtlKXoVGIzRX_D-XS8ef102ismRuSmY-rYGjCWHEI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/otAtlKXoVGIzRX_D-XS8ef102ismRuSmY-rYGjCWHEI.jpeg?width=108&crop=smart&auto=webp&s=94c65da4f2081d2d4c0633cd173c98bc69cad45c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/otAtlKXoVGIzRX_D-XS8ef102ismRuSmY-rYGjCWHEI.jpeg?width=216&crop=smart&auto=webp&s=4637fd72781e526e865245feb3d8f06a6067812c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/otAtlKXoVGIzRX_D-XS8ef102ismRuSmY-rYGjCWHEI.jpeg?width=320&crop=smart&auto=webp&s=ef02d2acb2690104ff75ecd9adf6b64ae568accc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/otAtlKXoVGIzRX_D-XS8ef102ismRuSmY-rYGjCWHEI.jpeg?width=640&crop=smart&auto=webp&s=345f8e8a9693f1ecf3f281e2c9b37a5656e8634f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/otAtlKXoVGIzRX_D-XS8ef102ismRuSmY-rYGjCWHEI.jpeg?width=960&crop=smart&auto=webp&s=1fe658b7c89319a4f483dd539daf5b392e534536', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/otAtlKXoVGIzRX_D-XS8ef102ismRuSmY-rYGjCWHEI.jpeg?width=1080&crop=smart&auto=webp&s=d256cbec700008bf0f9d5a07f9ebb0ca1f9bedce', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/otAtlKXoVGIzRX_D-XS8ef102ismRuSmY-rYGjCWHEI.jpeg?auto=webp&s=838dabbb26b9094b0b52fd71f1e938868b6c14f5', 'width': 1280}, 'variants': {}}]}
Is the Crucial Pro DDR5 any good for running LLMs locally?
0
The timings on these kits (5600MHz CL46) are slow compared to other RAM but they seem to be one of the cheapest ways of getting 96GB or 128GB of RAM for LLMs (at least in my country). But I'm wondering how good they are for local AI work based on their speed and latency, let alone gaming which is probably okay but not the best.
2025-10-12T14:07:16
https://www.reddit.com/r/LocalLLaMA/comments/1o4pze9/is_the_crucial_pro_ddr5_any_good_for_running_llms/
PhantomWolf83
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4pze9
false
null
t3_1o4pze9
/r/LocalLLaMA/comments/1o4pze9/is_the_crucial_pro_ddr5_any_good_for_running_llms/
false
false
self
0
null
$200 Free API Credit for GPT5/Claude/GLM/Deepseek | No CC
0
Hey everyone **Get $200 FREE AI API Credits instantly — no card required!** Models: GPT-5 Codex, Claude Sonnet 4/4.5, GLM 4.5, deepseek *How to Claim:* 1- Sign up using GitHub through the link below 2- Credits will be added instantly to your account 3- Create free api Claim here through my referral: [Referral Link](https://agentrouter.org/register?aff=0yRr) No hidden charges | No card needed | Instant activation
2025-10-12T14:01:57
https://www.reddit.com/r/LocalLLaMA/comments/1o4puwi/200_free_api_credit_for_gpt5claudeglmdeepseek_no/
texh89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o4puwi
false
null
t3_1o4puwi
/r/LocalLLaMA/comments/1o4puwi/200_free_api_credit_for_gpt5claudeglmdeepseek_no/
false
false
self
0
null