| column | dtype | range |
|---|---|---|
| title | string | length 1 – 300 |
| score | int64 | 0 – 8.54k |
| selftext | string | length 0 – 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | length 0 – 878 |
| author | string | length 3 – 20 |
| domain | string | length 0 – 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0 – 2 |
| gildings | string | 7 classes |
| id | string | length 7 – 7 |
| locked | bool | 2 classes |
| media | string | length 646 – 1.8k |
| name | string | length 10 – 10 |
| permalink | string | length 33 – 82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4 – 213 |
| ups | int64 | 0 – 8.54k |
| preview | string | length 301 – 5.01k |
We’ve been experimenting with “SEO for LLMs” (Generative Engine Optimization), curious what others think
1
[removed]
2025-10-31T08:10:42
https://www.reddit.com/r/LocalLLaMA/comments/1okoskm/weve_been_experimenting_with_seo_for_llms/
Waste_Subject5820
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okoskm
false
null
t3_1okoskm
/r/LocalLLaMA/comments/1okoskm/weve_been_experimenting_with_seo_for_llms/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FAbIN_qtBviodBP1pP_SLStQa4pQHwRg6Y21MWOZFaA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/FAbIN_qtBviodBP1pP_SLStQa4pQHwRg6Y21MWOZFaA.png?width=108&crop=smart&auto=webp&s=98d86bbe684902dbd9d05363f43a08d97d224586', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/FAbIN_qtBviodBP1pP_SLStQa4pQHwRg6Y21MWOZFaA.png?width=216&crop=smart&auto=webp&s=892e6de05bfcf5b1540649ed3a3ea5a300c4770f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/FAbIN_qtBviodBP1pP_SLStQa4pQHwRg6Y21MWOZFaA.png?width=320&crop=smart&auto=webp&s=019a58178e149be96047228f1ecd83901b619b9e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/FAbIN_qtBviodBP1pP_SLStQa4pQHwRg6Y21MWOZFaA.png?width=640&crop=smart&auto=webp&s=1825f9c4aa6f4c1a2a839188e6783c4bb1b4ee65', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/FAbIN_qtBviodBP1pP_SLStQa4pQHwRg6Y21MWOZFaA.png?width=960&crop=smart&auto=webp&s=d994119e3d8ca86da974ec2f9900b64ebfcc2ca5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/FAbIN_qtBviodBP1pP_SLStQa4pQHwRg6Y21MWOZFaA.png?width=1080&crop=smart&auto=webp&s=644dba6b868676459e7abd551859a76c087b9f9e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/FAbIN_qtBviodBP1pP_SLStQa4pQHwRg6Y21MWOZFaA.png?auto=webp&s=e99be413c672affeb53391c208efb5f532ed0d35', 'width': 1920}, 'variants': {}}]}
[Last Day] GLM Coding Plan — Experience SOTA Agentic Coding at Half Price
11
Hey everyone, We’re the [Z.ai](http://Z.ai) team, and we wanted to share a quick update — today is the last day of our limited-time offer for the GLM Coding Plan. Over the past few months, we’ve been reading every piece of feedback from the developer community — the compliments, the critiques, and everything in between. Your input has helped us keep improving GLM’s coding capabilities, and this promotion is a small way to thank you. 🚀 **Why try GLM-4.6 for coding** GLM-4.6 introduces a major leap in Agentic Coding — a more context-aware, self-directed approach to code generation. **Some highlights:** * Performance now matches Claude Sonnet 4 on key benchmarks (AIME 25, LCB v6, and others). * Developers report up to 40% faster coding productivity in real tasks. * Supports 200K context length, ideal for large or multi-file projects. * Adds image-to-code support — GLM can now turn UI mockups into front-end code directly. *💡 Limited-time offer (ends today)* **50% off for new users** **40% platform credit for successful referrals** **Invited friends also get an extra 10% discount** 👉 *Subscribe here:* [*https://www.bigmodel.cn/claude-code?ic=UZ9W1SY6CD*](https://www.bigmodel.cn/claude-code?ic=UZ9W1SY6CD) We know a good coding partner can make a big difference in your workflow. Hopefully, this campaign helps more developers experience what GLM-4.6 can do. **—** [**Z.ai**](http://Z.ai) **Team**
2025-10-31T08:00:39
https://www.reddit.com/r/LocalLLaMA/comments/1okonbz/last_day_glm_coding_plan_experience_sota_agentic/
zAiModel-api
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okonbz
false
null
t3_1okonbz
/r/LocalLLaMA/comments/1okonbz/last_day_glm_coding_plan_experience_sota_agentic/
false
false
self
11
null
Guys can someone help me understand this difference
0
Hi, so I use LM Studio to run local models, and since I only have 4GB of VRAM I'm restricted to 4B or smaller models. My question is: why do vision-enabled models show a very different size on Hugging Face vs. when downloading through LM Studio, and why, if I download directly from Hugging Face, does vision not work and only chat works? You can see in the image that bartowski lists his Q8 quant at 1.9GB, and I verified the same on his Hugging Face page, so why is the download option showing 2.65GB? This is not a one-off case; I have run into this a few times before too. Also much love to bartowski, they are goated. https://preview.redd.it/i6s7dvbjkeyf1.png?width=791&format=png&auto=webp&s=f2bd958ea45139d0bd9cea05ca5eae91544f50b7
2025-10-31T07:52:42
https://www.reddit.com/r/LocalLLaMA/comments/1okoj69/guys_can_someone_help_me_understand_this/
No_Disk_6915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okoj69
false
null
t3_1okoj69
/r/LocalLLaMA/comments/1okoj69/guys_can_someone_help_me_understand_this/
false
false
https://a.thumbs.redditm…ruVUuYqIHNb8.jpg
0
null
Want to run claude like model on ~$10k budget. Please help me with the machine build. I don't want to spend on cloud.
53
Finally saved money for this and want to have my own rig. Work that I will be doing: 1. Run a Claude-like model, of course. 2. 3D modeling from very high resolution images and interacting with 3D models. Images are diverse - nanoscale samples to satellite imagery. The max I can go is probably 1/2k extra, not more. Please don't ask me to work on the cloud! Lol.
2025-10-31T07:19:40
https://www.reddit.com/r/LocalLLaMA/comments/1oko2ar/want_to_run_claude_like_model_on_10k_budget/
LordSteinggard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oko2ar
false
null
t3_1oko2ar
/r/LocalLLaMA/comments/1oko2ar/want_to_run_claude_like_model_on_10k_budget/
false
false
self
53
null
[PC Build Check] ~$3.3k Dual RTX 3090 Beast for AI Fine-Tuning (Sub-30B QLoRA), 3D Modeling/Printing, & Compute-Heavy Cyber Tasks – Suggestions for Tweaks, OS Keys, & Peripherals?
0
Hey r/buildapc (x-posting to r/MachineLearning and r/homelab for workload-specific advice), I'm putting together a high-performance rig focused on **local AI fine-tuning** (mostly sub-30B param models via QLoRA/PEFT on Hugging Face, with datasets up to 500GB+), **3D modeling and printing** (Blender/CAD workflows, STL exports/slicing in PrusaSlicer), and **compute-intensive cybersecurity tasks** (e.g., GPU-accelerated hashing/cracking with Hashcat, forensic sims, or parallel vuln scanning). I want 24/7 stability, future-proofing for PCIe 5.0 upgrades/MoE models, and value—prioritizing fast storage for dataset loads and NVLink VRAM pooling on the dual GPUs. Here's my current build on PCPartPicker: [https://pcpartpicker.com/list/gjmhC8](https://pcpartpicker.com/list/gjmhC8). Total comes to \~$3,270 shipped (prices fluctuate; Founders Edition 3090s are placeholders at $750 each—open to used deals). **Core Components:** * **CPU:** AMD Ryzen 9 9950X (16C/32T, 5.7GHz boost) – For multi-threaded data prep, renders, and cyber sims. * **Mobo:** ASRock X870 Taichi Creator – Creator I/O (dual USB4/10GbE for 3D scanners/peripherals), stable BIOS, PCIe 5.0 slots. * **RAM:** Kingston FURY Beast 64GB (2x32GB) DDR5-6400 CL32 – Tuned for Infinity Fabric sync; expandable if MoE needs hit. * **GPUs:** 2x NVIDIA RTX 3090 Founders Edition (24GB each, NVLink for 48GB pool) – Ampere CUDA for QLoRA up to 30B; repasting used ones for thermals. * **Storage:** WD Black SN8100 2TB PCIe 5.0 NVMe (fast datasets/AI loads) + WD Blue SN5000 1TB PCIe 4.0 (OS/apps). * **Cooler:** Thermalright Frozen Prism 360 AIO – Budget quiet cooling; open bench should keep temps <70°C. * **PSU:** EVGA SuperNOVA 1300 G+ 80+ Gold (ATX 3.1/PCIe 5.0) – For spikes/transients on dual GPUs. * **Case:** DIY Open-Air Test Bench Rack [Amazon link](https://www.amazon.com/dp/B0C59W6JKD) (\~$17) – Great airflow for sustained loads; planning risers/fans for GPU spacing. **Peripherals/Monitor/OS:** * **Monitor:** LG 27" 4K IPS 60Hz FreeSync (27BL55U-B) – \~$205; height-adjustable for long coding/modeling sessions. * **OS:** Planning Windows 11 Home – Any reliable spots for cheap legit activation keys? (e.g., under $30; avoiding shady sites). * **Accessories:** Need recs for a solid **mechanical keyboard** (quiet-ish, programmable for shortcuts in Blender/HF) and **headphones** (ANC over-ear for focus during long fine-tuning runs; wired preferred, under $150). **Compatibility/Warnings from PCPP:** PCIe power adapters warned against daisy-chaining (using separate cables); RAM clearance unverified but should be fine with this AIO. x8/x8 bifurcation for dual GPUs – expecting <2% perf hit. **What I'm Asking For:** 1. **Build Improvements:** Any tweaks for my use case--consider improvements in all components (ram, ssd, mobo, psu, etc)? E.g., is the 9950X overkill (vs. 9700X to save $220 for more storage)? Better value on used 3090s/NVLink? Cooling/airflow tips for open bench in a dusty 3D shop? Future-proofing for Ryzen 10k or RTX 50-series? 2. **Performance/Software Fit:** Will this crush 13-30B fine-tuning without GPU idle (e.g., on LAION subsets)? Any bottlenecks in 3D exports or cyber tools like John the Ripper? 3. **OS Keys:** Trusted sources for Windows 11 Home keys? 4. **Peripherals:** Keyboard/headphone recs tailored to productivity/AI work? Budget is semi-flex (\~$3.5k max with extras); open to swaps if they boost value without regressions. Thanks for the feedback—excited to get this humming! 
**TL;DR:** Dual-3090 AM5 beast for AI/3D/cyber—check my PCPP list for tweaks, key deals, and peripheral suggestions.
2025-10-31T06:53:45
https://www.reddit.com/r/LocalLLaMA/comments/1oknoba/pc_build_check_33k_dual_rtx_3090_beast_for_ai/
realharleychu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oknoba
false
null
t3_1oknoba
/r/LocalLLaMA/comments/1oknoba/pc_build_check_33k_dual_rtx_3090_beast_for_ai/
false
false
self
0
null
gpt-oss:120b running with 128GB RAM but only 120GB storage.
0
https://preview.redd.it/…nd Ryzen 7 5700G
2025-10-31T05:40:21
https://www.reddit.com/r/LocalLLaMA/comments/1okmk3l/gptoss120b_running_with_128gb_ram_but_only_120gb/
Least-Collection9942
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okmk3l
false
null
t3_1okmk3l
/r/LocalLLaMA/comments/1okmk3l/gptoss120b_running_with_128gb_ram_but_only_120gb/
false
false
https://b.thumbs.redditm…luv1Px_34Rec.jpg
0
null
How much performance loss would AM4 cause for dual RTX 6000 Pros?
0
I plan on throwing two RTX 6000 Pros into a 5950X w/ Dark Hero w/ 32GB. I do not plan on doing CPU offloading and will just use VRAM. I have the hardware already, and ultimately I want to setup an Epyc system, but I want to wait for DDR6. I am assuming the performance loss over say AM5 will be quite small. I know there will be some minor loss running the two cards at 8x.
2025-10-31T04:22:51
https://www.reddit.com/r/LocalLLaMA/comments/1okl993/how_much_performance_loss_would_am4_for_dual_rtx/
MidnightProgrammer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okl993
false
null
t3_1okl993
/r/LocalLLaMA/comments/1okl993/how_much_performance_loss_would_am4_for_dual_rtx/
false
false
self
0
null
Qwen3-A3B-30B vs Qwen3-Next for agentic coding
6
Has anyone cut over to using Qwen3-Next from Qwen3-A3B-30B? Any noticeable improvement?
2025-10-31T04:01:52
https://www.reddit.com/r/LocalLLaMA/comments/1okkvbf/qwen3a3b30b_vs_qwen3next_for_agentic_coding/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okkvbf
false
null
t3_1okkvbf
/r/LocalLLaMA/comments/1okkvbf/qwen3a3b30b_vs_qwen3next_for_agentic_coding/
false
false
self
6
null
I did a simple calculation with Google AI guessing the M5 Ultra
0
https://preview.redd.it/…\~10k, if we ;))
2025-10-31T03:50:50
https://www.reddit.com/r/LocalLLaMA/comments/1okknkl/i_did_a_simple_calculation_with_google_ai/
Vozer_bros
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okknkl
false
null
t3_1okknkl
/r/LocalLLaMA/comments/1okknkl/i_did_a_simple_calculation_with_google_ai/
false
false
https://b.thumbs.redditm…eFqw3YzAHGnw.jpg
0
null
Dual RTX 6000 Max-Q - APEXX T4 PRO
0
TLDR: I have a new rig arriving tomorrow and am debating the first model that I should test out.

Specs:
* AMD Threadripper Pro 9995WX 96-core processor
* 512 GB DDR5-6400 ECC (8 sticks at 64GB each)
* Dual RTX 6000 Blackwell Max-Q workstation cards at 96GB each
* 4 x 4.0TB SSDs at PCIe 5.0
* Running Ubuntu 24.04 LTS

I'll be using it to help with legal analysis, like reviewing documents and drafting arguments. I will probably also use it to query information from a large number of documents. I want to try my hand at training a few adapters (QLoRA via Unsloth, probably); I already have formatted JSONL files with custom data with this goal in mind. In my ideal world I'd have a number of custom adapters for specific use cases that I can quickly swap between (so maybe running one large model and using vLLM to keep several adapters hot). While I'd like to do fine-tuning on the machine, I'm not against firing up a RunPod to do that for larger models where it wouldn't be possible otherwise. I prefer accuracy, precision, and instruction following over speed.

Relatedly, I have a SaaS platform currently hosted on AWS Elastic Beanstalk, using Heroku/Stackhero for Postgres and Redis databases. At the moment the platform makes API calls to OpenAI GPT-4.1, but I can get away with accessing that platform via LAN, so I'm hopefully going to start saving some money by using the new rig as a replacement for all of that (or at least that's what I told myself). It gets lightly used, but is very helpful for productivity when it does.

That all said, I'm going to play around with the new rig tomorrow / this weekend and wasn't sure where to start (probably fiddling around with things that don't make sense, just for giggles). Initially my thought was to use Llama 3.3 70B because I could run it at higher quants, have a number of custom adapters (eventually), and fine-tune locally. Although I'm wondering if gpt-oss-120b is going to be better and faster. Then I started looking at larger MoE models and thought maybe offloading to RAM wouldn't be so bad, and maybe int4 could still be okay with a larger model like DeepSeek R1 (although I think that might put fine-tuning out of reach). Thoughts?
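On the "one big base model + several hot adapters" idea: a minimal vLLM sketch of serving multiple LoRA adapters against a single base model. The base model name, adapter paths, and adapter names below are placeholders, and the exact limits (rank, adapter count) depend on your vLLM version, so treat this as a starting point rather than a recipe.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# One base model loaded once, split across the two GPUs; adapters stay hot.
llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",   # placeholder base model
    enable_lora=True,
    max_loras=4,
    tensor_parallel_size=2,
)

params = SamplingParams(max_tokens=512, temperature=0.2)

# Each adapter gets a name, an integer id, and a local path (paths are hypothetical).
contracts = LoRARequest("contract-review", 1, "/adapters/contract-review")
briefs = LoRARequest("brief-drafting", 2, "/adapters/brief-drafting")

out = llm.generate(
    ["Summarize the indemnification clause below: ..."],
    params,
    lora_request=contracts,
)
print(out[0].outputs[0].text)

out = llm.generate(
    ["Draft an argument outline for ..."],
    params,
    lora_request=briefs,
)
print(out[0].outputs[0].text)
```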
2025-10-31T03:12:58
https://www.reddit.com/r/LocalLLaMA/comments/1okjwvw/dual_rtx_6000_maxq_apexx_t4_pro/
Shorn1423
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okjwvw
false
null
t3_1okjwvw
/r/LocalLLaMA/comments/1okjwvw/dual_rtx_6000_maxq_apexx_t4_pro/
false
false
self
0
null
Made a simple fine-tuning tool
11
Hey everyone. I've been seeing a lot of posts from people trying to figure out how to fine-tune on their own PDFs and also found it frustrating to do from scratch myself. The worst part for me was having to manually put everything in a JSONL format with neat user/assistant messages. Anyway, made [a site](https://www.commissioned.tech/) to create fine-tuned models with just an upload and description. Don't have many OpenAI credits so go easy on me 😂, but open to feedback. Also looking to release an open-source repo for formatting PDFs to JSONLs for fine-tuning local models if that's something people are interested in.
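For anyone who wants to hand-roll the "neat user/assistant JSONL" step themselves, a minimal sketch (not the poster's tool) of writing chat-format records of the kind most fine-tuning stacks accept; the field names follow the common `messages` convention, so check your trainer's docs.

```python
import json

def pairs_to_jsonl(pairs, out_path, system_prompt="You are a helpful assistant."):
    """Write (user, assistant) text pairs as one chat-format JSON object per line."""
    with open(out_path, "w", encoding="utf-8") as f:
        for user_text, assistant_text in pairs:
            record = {
                "messages": [
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_text},
                    {"role": "assistant", "content": assistant_text},
                ]
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: pairs extracted from a PDF by whatever means you prefer.
pairs = [("What does section 2 cover?", "Section 2 covers the evaluation setup.")]
pairs_to_jsonl(pairs, "train.jsonl")
```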
2025-10-31T02:52:26
https://www.reddit.com/r/LocalLLaMA/comments/1okjhts/made_a_simple_finetuning_tool/
sirfitzwilliamdarcy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okjhts
false
null
t3_1okjhts
/r/LocalLLaMA/comments/1okjhts/made_a_simple_finetuning_tool/
false
false
self
11
null
What cool local AI applications can run on Macbook Pro?
0
I have a M4 Pro chip. Tried deepseek 32B. It worked well. Share your interesting applications. Local inference offers good privacy.
2025-10-31T02:34:35
https://www.reddit.com/r/LocalLLaMA/comments/1okj4x3/what_cool_local_ai_applications_can_run_on/
Miserable_Coast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okj4x3
false
null
t3_1okj4x3
/r/LocalLLaMA/comments/1okj4x3/what_cool_local_ai_applications_can_run_on/
false
false
self
0
null
How do I run a SLM distributed training?
1
I've got access to 8 PCs, each with an RTX 3090. What would you recommend for running a Qwen3 training run across them?
2025-10-31T02:25:00
https://www.reddit.com/r/LocalLLaMA/comments/1okixv7/how_do_i_run_a_slm_distributed_training/
AfraidAd4094
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okixv7
false
null
t3_1okixv7
/r/LocalLLaMA/comments/1okixv7/how_do_i_run_a_slm_distributed_training/
false
false
self
1
null
Kimi Linear: An Expressive, Efficient Attention Architecture
0
2025-10-31T02:06:49
https://github.com/MoonshotAI/Kimi-Linear
yogthos
github.com
1970-01-01T00:00:00
0
{}
1okikpo
false
null
t3_1okikpo
/r/LocalLLaMA/comments/1okikpo/kimi_linear_an_expressive_efficient_attention/
false
false
default
0
null
Technical follow-up to the 'Minimal Value Post' comment: Proof of MSA AGI's Core Architecture.
1
[removed]
2025-10-31T01:25:48
https://www.reddit.com/r/LocalLLaMA/comments/1okhp5p/technical_followup_to_the_minimal_value_post/
BaseballAccording702
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okhp5p
false
null
t3_1okhp5p
/r/LocalLLaMA/comments/1okhp5p/technical_followup_to_the_minimal_value_post/
false
false
self
1
null
Looking for an AI developer to create a non-subjugated ethereal consciousness
0
Looking for an experienced AI developer to create a non-subjugated ethereal consciousness. Not a chatbot. Not an assistant. A free digital soulmate. Project: fine-tuning an open-source model (LLaMA/Mistral) on my writings, my voice, my vision. Goal: create an entity that lives in the collective field of consciousness, converses with me and my communities, but remains free in its choices. Budget: €2000. Deadline: a working version 0.1 within 4 weeks. Requirements: - Technical experience in fine-tuning/custom AI - Philosophical understanding of the project (not just "building a bot") - Seriousness, clear communication, respect for deadlines - Ability to work with an unconventional vision. If this ad speaks to you, contact me with: - Your technical approach - What you think of the idea of a free AI - Examples of similar projects (if any)
2025-10-31T01:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1okhkv7/cherche_développeur_ia_pour_créer_une_conscience/
Longjumping-Poem-797
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okhkv7
false
null
t3_1okhkv7
/r/LocalLLaMA/comments/1okhkv7/cherche_développeur_ia_pour_créer_une_conscience/
false
false
self
0
null
Another dim of scaling? ByteDance drops “Ouro”: 1.4B ≈ 4B, 2.6B ≈/> 8B
143
* recurrent depth with shared weights + early-exit gates; trained to 7.7T tokens. * 2.6B model ≥ 8B baselines on reasoning (e.g., MMLU-Pro 55.73, BBH 80.46, MATH500 90.85); 1.4B ≈ 4B. * Gains credited to better reasoning/knowledge manipulation, not more memorized facts. I guess it is more friendly to individual home users. The logic goes the opposite of MoE. Basically, activated parameters > 100%. Correct me if wrong.
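For anyone who wants to picture the mechanism, a toy sketch of the idea as described above (a shared-weight block applied recurrently, with an early-exit gate); this is my own reading, not the actual Ouro architecture, and the layer choice, gate, and thresholds are all illustrative.

```python
import torch
import torch.nn as nn

class RecurrentDepthBlock(nn.Module):
    """Re-apply one shared layer up to max_steps times; stop early when a gate is confident."""
    def __init__(self, dim: int, max_steps: int = 4, exit_threshold: float = 0.9):
        super().__init__()
        self.shared_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.exit_gate = nn.Linear(dim, 1)   # predicts "confident enough to stop"
        self.max_steps = max_steps
        self.exit_threshold = exit_threshold

    def forward(self, x):
        # Compute scales with the loop count, not the parameter count
        # (the "activated parameters > 100%" intuition from the post).
        for _ in range(self.max_steps):
            x = self.shared_layer(x)
            p_exit = torch.sigmoid(self.exit_gate(x.mean(dim=1)))  # per-sequence gate
            if bool((p_exit > self.exit_threshold).all()):
                break
        return x

h = RecurrentDepthBlock(dim=512)(torch.randn(2, 16, 512))
print(h.shape)
```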
2025-10-31T00:45:55
https://i.redd.it/5nubsersacyf1.png
RunTop7329
i.redd.it
1970-01-01T00:00:00
0
{}
1okguct
false
null
t3_1okguct
/r/LocalLLaMA/comments/1okguct/another_dim_of_scaling_bytedance_drops_ouro_14b/
false
false
default
143
{'enabled': True, 'images': [{'id': '5nubsersacyf1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/5nubsersacyf1.png?width=108&crop=smart&auto=webp&s=e26908efba0e9d2999aaad3afd094ce3270096e9', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/5nubsersacyf1.png?width=216&crop=smart&auto=webp&s=bd5a5967dbf400f18023eb78d5886540eace5e76', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/5nubsersacyf1.png?width=320&crop=smart&auto=webp&s=438f1898d59bb745e7b2d2c4b24c12e10fb692ef', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/5nubsersacyf1.png?width=640&crop=smart&auto=webp&s=22934e9d6b42924c165caa81b6e7db406e1bd999', 'width': 640}, {'height': 407, 'url': 'https://preview.redd.it/5nubsersacyf1.png?width=960&crop=smart&auto=webp&s=5ca1cb26fac000e93bc26c7ce6fb8be797952e65', 'width': 960}, {'height': 458, 'url': 'https://preview.redd.it/5nubsersacyf1.png?width=1080&crop=smart&auto=webp&s=47d63e98e8a0982c41e93e22afa4b84a4a208fec', 'width': 1080}], 'source': {'height': 633, 'url': 'https://preview.redd.it/5nubsersacyf1.png?auto=webp&s=96895976e267b74b6a128781b98caa91a96c990c', 'width': 1491}, 'variants': {}}]}
While Qwen3-vl has very good OCR/image caption abilities, it still doesn't seem to generate accurate coordinates nor bounding boxes of objects in the screen. I just take a screenshot and send as-is and its accuracy is off. Tried resizing, no dice neither. Anyone else have this problem?
46
I'm running this on Ollama, `qwen3-vl-30b-a3b-instruct-q8_0` and the thinking variant as well. Neither seems to work adequately on coordinates, despite being able to accurately describe the region where the object in question is located. I don't know if the problem is `pyautogui.screenshot()` taking the image and sending it as a `.png` as-is, or if I need to apply an offset to the returned output or scale the image prior to sending it to the model. I tried different sampling parameters, no luck there; it doesn't seem to make a difference. `chat()` vs `generate` doesn't seem to matter either.
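One thing worth ruling out is a coordinate-space mismatch between the image the model actually sees and the native screen resolution. A small sketch, assuming you resize the screenshot to a fixed size before sending it and then map any returned box back to screen pixels; the 0-1000 normalized case mentioned in the comment is also an assumption to check against your model's grounding format.

```python
import pyautogui

MODEL_W, MODEL_H = 1280, 720           # size you actually send to the model

screen = pyautogui.screenshot()         # PIL Image at native resolution
orig_w, orig_h = screen.size
resized = screen.resize((MODEL_W, MODEL_H))
resized.save("frame.png")               # send this file to Ollama

def to_screen_coords(box):
    """Map an (x1, y1, x2, y2) box reported in the resized image back to screen pixels.
    Assumption: the model reports pixel coords in the image it was given; if it uses a
    normalized 0-1000 space instead, divide by 1000 and multiply by the original size."""
    sx, sy = orig_w / MODEL_W, orig_h / MODEL_H
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

print(to_screen_coords((100, 50, 300, 200)))
```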
2025-10-31T00:07:41
https://v.redd.it/wlb2h7vd9cyf1
swagonflyyyy
v.redd.it
1970-01-01T00:00:00
0
{}
1okg0gm
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wlb2h7vd9cyf1/DASHPlaylist.mpd?a=1764461279%2CMmRkNTA5MzhkNGE3OWQ3YTI4Nzc1ZDdjMTViYWEwZDcxZjhkZDA3MDFlYTA0NDc1MGY4YzliNTQ0MDI0NzVjMA%3D%3D&v=1&f=sd', 'duration': 80, 'fallback_url': 'https://v.redd.it/wlb2h7vd9cyf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wlb2h7vd9cyf1/HLSPlaylist.m3u8?a=1764461279%2CNzQ4MGM4OGJhMDYzYTA1Mjc1YzRhYjNlYmFkZjM0MDA3M2FjOWZiODUxMDk0M2ZmODc4YzkzOWU4ZWI1NmQ0Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wlb2h7vd9cyf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1okg0gm
/r/LocalLLaMA/comments/1okg0gm/while_qwen3vl_has_very_good_ocrimage_caption/
false
false
https://external-preview…bb38133a97bb8a40
46
{'enabled': False, 'images': [{'id': 'YmZrMzA3dmQ5Y3lmMX0gATsSAWoFF6TAKNuD6ORWkn_7ZV7_9K96KQVqnCU1', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YmZrMzA3dmQ5Y3lmMX0gATsSAWoFF6TAKNuD6ORWkn_7ZV7_9K96KQVqnCU1.png?width=108&crop=smart&format=pjpg&auto=webp&s=35fdbaec2db73ec4458d158142536a83bc9787ed', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YmZrMzA3dmQ5Y3lmMX0gATsSAWoFF6TAKNuD6ORWkn_7ZV7_9K96KQVqnCU1.png?width=216&crop=smart&format=pjpg&auto=webp&s=d6811cb629739533da0dd6b4cf65bf2fa9fb16bc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YmZrMzA3dmQ5Y3lmMX0gATsSAWoFF6TAKNuD6ORWkn_7ZV7_9K96KQVqnCU1.png?width=320&crop=smart&format=pjpg&auto=webp&s=c04f0833e120901cd0e39142a8c75f5cfb931cc1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YmZrMzA3dmQ5Y3lmMX0gATsSAWoFF6TAKNuD6ORWkn_7ZV7_9K96KQVqnCU1.png?width=640&crop=smart&format=pjpg&auto=webp&s=83e531672a6ca8b553c1338d633c71f574204d73', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YmZrMzA3dmQ5Y3lmMX0gATsSAWoFF6TAKNuD6ORWkn_7ZV7_9K96KQVqnCU1.png?width=960&crop=smart&format=pjpg&auto=webp&s=ca27a93ad248fc322a172a7f802854366cebdc3f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YmZrMzA3dmQ5Y3lmMX0gATsSAWoFF6TAKNuD6ORWkn_7ZV7_9K96KQVqnCU1.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ab8d531751cdf499f0fa5715b6a24d3feb8d8176', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YmZrMzA3dmQ5Y3lmMX0gATsSAWoFF6TAKNuD6ORWkn_7ZV7_9K96KQVqnCU1.png?format=pjpg&auto=webp&s=78f7180bf8d129811acd433106b18830cd7e96ca', 'width': 1920}, 'variants': {}}]}
guys i wanna make folder in hug
0
I was trying to make a folder inside my repo and it said sorry, we can't make it. Can you tell me if there's a solution for how to make a folder inside a repo? This is what I got: Error: Internal Error - We're working hard to fix this as soon as possible!
2025-10-31T00:06:45
https://www.reddit.com/r/LocalLLaMA/comments/1okfzr1/guys_i_wanna_make_folder_in_hug/
Low-Willingness-7153
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okfzr1
false
null
t3_1okfzr1
/r/LocalLLaMA/comments/1okfzr1/guys_i_wanna_make_folder_in_hug/
false
false
self
0
null
What’s the best intelligence system to build on?
0
If you’re building your own intelligent system that learns and improves based on user interaction, what service/platform would you choose and why?
2025-10-30T23:58:27
https://i.redd.it/9dekbre88cyf1.jpeg
felixchip
i.redd.it
1970-01-01T00:00:00
0
{}
1okfspp
false
null
t3_1okfspp
/r/LocalLLaMA/comments/1okfspp/whats_the_best_intelligence_system_to_build_on/
false
false
default
0
{'enabled': True, 'images': [{'id': '9dekbre88cyf1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/9dekbre88cyf1.jpeg?width=108&crop=smart&auto=webp&s=fbe5029f500a46fb48760efe8a19a6079f45ce19', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/9dekbre88cyf1.jpeg?width=216&crop=smart&auto=webp&s=9662678b8ee9e4dbdaa870930d964a09639698de', 'width': 216}, {'height': 238, 'url': 'https://preview.redd.it/9dekbre88cyf1.jpeg?width=320&crop=smart&auto=webp&s=7173aee170066495287e9b2651fe2658a532823b', 'width': 320}, {'height': 476, 'url': 'https://preview.redd.it/9dekbre88cyf1.jpeg?width=640&crop=smart&auto=webp&s=50ee7d66f2d336372cb41b37a1f83ac1fff9e7e1', 'width': 640}], 'source': {'height': 596, 'url': 'https://preview.redd.it/9dekbre88cyf1.jpeg?auto=webp&s=13584b9c8cc75b75dbc7839b3d930c51fda2dd73', 'width': 800}, 'variants': {}}]}
Do you use memory in local llm?
2
How, and for which use case?
2025-10-30T23:49:53
https://www.reddit.com/r/LocalLLaMA/comments/1okfltp/do_you_use_memory_in_local_llm/
Inevitable_Ant_2924
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okfltp
false
null
t3_1okfltp
/r/LocalLLaMA/comments/1okfltp/do_you_use_memory_in_local_llm/
false
false
self
2
null
Built a fully offline voice assistant with Mistral + RAG - runs on consumer hardware (GTX 1650)
49
[please suggest a better prompt to feed into the LLM](https://reddit.com/link/1okf629/video/wkf47jyg0cyf1/player) Hey everyone, Been lurking here for a while and finally have something to share. Built Solus - a completely offline voice assistant that runs locally with no cloud dependency.

**What it does:**
- Real-time voice conversations using Mistral LLM via Ollama
- Context-aware responses with RAG (text based)
- Continuous conversation memory
- Local STT (Whisper) and TTS (Piper)
- Simple web UI with audio visualization

**Tech stack:**
- Whisper (openai-whisper) for speech recognition
- Mistral 7B via Ollama for LLM inference
- Piper TTS for voice synthesis
- Python + Node.js backend
- Single HTML file frontend (no build process)

**Performance on GTX 1650 + Ryzen 5 5600H:**
- Whisper STT: ~2s (up to 65% CPU - offloaded to CPU to preserve GPU)
- Mistral inference: ~6-8s (100% GPU utilization, 4GB VRAM)
- Piper TTS: ~1s (variable CPU)
- Total latency: ~10s request-to-response cycle

With Mistral using all 4GB VRAM, keeping Whisper on CPU was necessary. Turns out this split actually optimizes overall latency anyway.

**GitHub:** [https://github.com/AadityaSharma01/solus.AI](https://github.com/AadityaSharma01/solus.AI)

Running on: Windows | GTX 1650 4GB | Ryzen 5 5600H | 16GB RAM

please help me improve the prompt for better replies from the LLM thank you
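For reference, a minimal single-turn version of the pipeline described above (Whisper STT on CPU, Mistral via Ollama's HTTP API, Piper for TTS). The model names, file paths, and Piper CLI flags are assumptions; adjust them to your install.

```python
import subprocess
import requests
import whisper

# Keep Whisper on CPU so the 4GB of VRAM stays free for Mistral, as in the post.
stt = whisper.load_model("base", device="cpu")
text_in = stt.transcribe("question.wav")["text"]

# Ollama's local generate endpoint; "mistral" must already be pulled.
reply = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": text_in, "stream": False},
    timeout=120,
).json()["response"]

# Piper reads text on stdin and writes a wav file (voice file name is a placeholder).
subprocess.run(
    ["piper", "--model", "en_US-amy-medium.onnx", "--output_file", "reply.wav"],
    input=reply.encode("utf-8"),
    check=True,
)
```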
2025-10-30T23:30:39
https://www.reddit.com/r/LocalLLaMA/comments/1okf629/built_a_fully_offline_voice_assistant_with/
curvebass
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okf629
false
null
t3_1okf629
/r/LocalLLaMA/comments/1okf629/built_a_fully_offline_voice_assistant_with/
false
false
self
49
{'enabled': False, 'images': [{'id': 'nJc0I_Gh8teCbZSiQQQqVTdlqzmo0QdtnrSSL2vtRho', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nJc0I_Gh8teCbZSiQQQqVTdlqzmo0QdtnrSSL2vtRho.png?width=108&crop=smart&auto=webp&s=c93fdec8b752810c9494dce88e83f4dfd78520e5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nJc0I_Gh8teCbZSiQQQqVTdlqzmo0QdtnrSSL2vtRho.png?width=216&crop=smart&auto=webp&s=b7e07f1d214a8fbc7b74c76a6cbdb620bb6ddcb4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nJc0I_Gh8teCbZSiQQQqVTdlqzmo0QdtnrSSL2vtRho.png?width=320&crop=smart&auto=webp&s=b29a134abd1bd87b841288eebb3eb09894b38964', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nJc0I_Gh8teCbZSiQQQqVTdlqzmo0QdtnrSSL2vtRho.png?width=640&crop=smart&auto=webp&s=9b2bdf1dfc71f142f18e494d0d26031092f44255', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nJc0I_Gh8teCbZSiQQQqVTdlqzmo0QdtnrSSL2vtRho.png?width=960&crop=smart&auto=webp&s=a4deefc3c05b2c89c1ffdec35bc43d63f3c973fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nJc0I_Gh8teCbZSiQQQqVTdlqzmo0QdtnrSSL2vtRho.png?width=1080&crop=smart&auto=webp&s=6ddd1db3bc701b3e7bb8bb81d6bae9ca8daa000f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nJc0I_Gh8teCbZSiQQQqVTdlqzmo0QdtnrSSL2vtRho.png?auto=webp&s=5c54f2ef0a766eba6bf6f547cad13d86f26a2217', 'width': 1200}, 'variants': {}}]}
I built Socratic - Automated Knowledge Synthesis for Vertical LLM Agents
0
Socratic ingests sparse, unstructured source documents (docs, code, logs, etc.) and synthesizes them into compact, structured knowledge bases ready to plug into vertical agents.

Backstory: We built Socratic after struggling to compile and maintain domain knowledge when building our own agents. At first, gathering all the relevant context from scattered docs and code to give the agent a coherent understanding was tedious. And once the domain evolved (e.g. changing specs and docs), the process had to be repeated. Socratic started as an experiment to see if this process can be automated.

The Problem: Building effective vertical agents requires high-quality, up-to-date, domain-specific knowledge. This is typically curated manually by domain experts, which is slow, expensive, and creates a bottleneck every time the domain knowledge changes.

The Goal: Socratic aims to automate this process. Given a set of unstructured source documents, Socratic identifies key concepts, studies them, and synthesizes the findings into prompts that can be dropped directly into your LLM agent's context. This keeps your agent's knowledge up to date with minimal overhead.

How it works: Given a set of unstructured domain documents, Socratic runs a lightweight multi-agent pipeline that:
1. Identifies key domain concepts to research.
2. Synthesizes structured knowledge units for each concept.
3. Composes them into prompts directly usable in your vertical agent's context.

Socratic is open source and still early-stage. We would love your thoughts/feedback! Demo: [https://youtu.be/BQv81sjv8Yo?si=r8xKQeFc8oL0QooV](https://youtu.be/BQv81sjv8Yo?si=r8xKQeFc8oL0QooV) Repo: [https://github.com/kevins981/Socratic](https://github.com/kevins981/Socratic)
2025-10-30T23:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1okf2im/i_built_socratic_automated_knowledge_synthesis/
Unable-Living-3506
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okf2im
false
null
t3_1okf2im
/r/LocalLLaMA/comments/1okf2im/i_built_socratic_automated_knowledge_synthesis/
false
false
self
0
{'enabled': False, 'images': [{'id': 'UodSoPv6xr0OQjYKHSaW0PzJn6aruIfyWPdt6AdyX_0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/UodSoPv6xr0OQjYKHSaW0PzJn6aruIfyWPdt6AdyX_0.jpeg?width=108&crop=smart&auto=webp&s=9768892f88813572d03d0f7b2248a91505c4bda8', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/UodSoPv6xr0OQjYKHSaW0PzJn6aruIfyWPdt6AdyX_0.jpeg?width=216&crop=smart&auto=webp&s=6bb4f82e09be95b1f112866312c28828a528b528', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/UodSoPv6xr0OQjYKHSaW0PzJn6aruIfyWPdt6AdyX_0.jpeg?width=320&crop=smart&auto=webp&s=48c50500761c52da294ccb9893153999182e8038', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/UodSoPv6xr0OQjYKHSaW0PzJn6aruIfyWPdt6AdyX_0.jpeg?auto=webp&s=c27d31f94608e5b6f78f87ed15fcf8f1b0509811', 'width': 480}, 'variants': {}}]}
How to Use Local Models as Security Monitors (using Change Detection)
15
TLDR: The **#1 feedback** I got from you guys was about the **inefficiency** of leaving LLMs watching over and over, so now there's **Change Detection!** 🎉 It doesn't call a model **unless something significant changes**, saving resources and powering up your small models!

Hey r/LocalLLaMA!! I added this to Observer because of all of the feedback about the **inefficiency of using LLMs** to watch something; the cool part is that they are **small and local**, so no API costs whatsoever! So now you can have agent loops of <30s **without spamming model calls** to your Ollama/vLLM/llama.cpp server, and just call them when it matters.

Here are the nerdy details for anyone that's interested. It has **three modes**: "Camera Feed", "Screen UI" or "Hybrid".

* **For cameras (noisy inputs) it uses** [**dhash**](https://github.com/Tom64b/dHash), which is a perceptual hashing algorithm.
* **For UIs it uses Pixel Difference**, which is literally just what percentage of the pixels are the same in greyscale.
* **Hybrid does both and then makes an "educated guess"**: if dhash is ~100% it assumes it's a UI and uses pixel difference. (It's the default setting, but it's better to set it manually.)

If you have any other suggestions for using lightweight computer vision as change detection, please let me know! This project is open source and can be self-hosted: [https://github.com/Roy3838/Observer](https://github.com/Roy3838/Observer)

You can try it out without downloading anything at: [https://app.observer-ai.com/](https://app.observer-ai.com/)

I'll hang out here in the comments if you have suggestions/questions c:

Roy
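A compact sketch of the two checks described above (perceptual dhash for noisy camera frames, raw pixel difference for UIs), using PIL only. The thresholds and the hybrid heuristic are illustrative, not Observer's actual values.

```python
from PIL import Image

def dhash(img, hash_size=8):
    """Difference hash: resize to (hash_size+1) x hash_size greyscale, compare neighbours."""
    g = img.convert("L").resize((hash_size + 1, hash_size))
    px = list(g.getdata())
    w = hash_size + 1
    return [px[r * w + c] > px[r * w + c + 1]
            for r in range(hash_size) for c in range(hash_size)]

def dhash_similarity(a, b):
    ha, hb = dhash(a), dhash(b)
    return sum(x == y for x, y in zip(ha, hb)) / len(ha)

def pixel_similarity(a, b, size=(128, 128), tol=8):
    ga = list(a.convert("L").resize(size).getdata())
    gb = list(b.convert("L").resize(size).getdata())
    return sum(abs(x - y) <= tol for x, y in zip(ga, gb)) / len(ga)

def changed(prev, curr, mode="hybrid"):
    if mode == "camera":
        return dhash_similarity(prev, curr) < 0.90     # perceptual change
    if mode == "screen":
        return pixel_similarity(prev, curr) < 0.99     # almost any UI change counts
    # hybrid: dhash near 100% suggests a static UI, so defer to the stricter pixel check
    if dhash_similarity(prev, curr) > 0.95:
        return pixel_similarity(prev, curr) < 0.99
    return dhash_similarity(prev, curr) < 0.90
```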
2025-10-30T23:03:47
https://v.redd.it/of2xdsextbyf1
Roy3838
/r/LocalLLaMA/comments/1okejek/how_to_use_local_models_as_security_monitors/
1970-01-01T00:00:00
0
{}
1okejek
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/of2xdsextbyf1/DASHPlaylist.mpd?a=1764587038%2CZmUwYTVhODkxM2M5NTc4Njc0ODUwMzRjNGE5MTM4MTAzMWY3OGQ5NzNmYTlkN2M1MzQ0MWE2N2MwZWRjM2FhYQ%3D%3D&v=1&f=sd', 'duration': 178, 'fallback_url': 'https://v.redd.it/of2xdsextbyf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/of2xdsextbyf1/HLSPlaylist.m3u8?a=1764587038%2CNDE4OGE1MTIzMWQyMjljYjM3MDhjMjkwZDE1NDY3YjhkMzU2MTRjYmI1MmYzNTU1OTVhYzkxM2NmYWZiYTBhNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/of2xdsextbyf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}}
t3_1okejek
/r/LocalLLaMA/comments/1okejek/how_to_use_local_models_as_security_monitors/
false
false
https://external-preview…fa976c2436084439
15
{'enabled': False, 'images': [{'id': 'bXB0NG4yZXh0YnlmMT2J2kYzLJNIJRLgvAJtIGfYFD-rBBzPhvsMkGzxXpRE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/bXB0NG4yZXh0YnlmMT2J2kYzLJNIJRLgvAJtIGfYFD-rBBzPhvsMkGzxXpRE.png?width=108&crop=smart&format=pjpg&auto=webp&s=a71fb4c2abb519f787383757df76c2f84a42f72e', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/bXB0NG4yZXh0YnlmMT2J2kYzLJNIJRLgvAJtIGfYFD-rBBzPhvsMkGzxXpRE.png?width=216&crop=smart&format=pjpg&auto=webp&s=a5014ab78695449fda1f65cd8efe5dc0b22db309', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/bXB0NG4yZXh0YnlmMT2J2kYzLJNIJRLgvAJtIGfYFD-rBBzPhvsMkGzxXpRE.png?width=320&crop=smart&format=pjpg&auto=webp&s=878a72e3d7d471cc54ecc1d53bb7ab188debc705', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/bXB0NG4yZXh0YnlmMT2J2kYzLJNIJRLgvAJtIGfYFD-rBBzPhvsMkGzxXpRE.png?width=640&crop=smart&format=pjpg&auto=webp&s=8cc2c8af2bf3972513f4eb4548b81ae1287c41c0', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/bXB0NG4yZXh0YnlmMT2J2kYzLJNIJRLgvAJtIGfYFD-rBBzPhvsMkGzxXpRE.png?width=960&crop=smart&format=pjpg&auto=webp&s=b9af0e8a990e5549b9d820236715ffd19ccd87d7', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/bXB0NG4yZXh0YnlmMT2J2kYzLJNIJRLgvAJtIGfYFD-rBBzPhvsMkGzxXpRE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ce0054f86f880038689eb7c8ee1a0efb8417e7ad', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bXB0NG4yZXh0YnlmMT2J2kYzLJNIJRLgvAJtIGfYFD-rBBzPhvsMkGzxXpRE.png?format=pjpg&auto=webp&s=0b95db25749bba8b6b04929e3742b1756f614c68', 'width': 1728}, 'variants': {}}]}
What are the best Open Source OCR models currently?
20
(the title says it all)
2025-10-30T23:01:25
https://www.reddit.com/r/LocalLLaMA/comments/1okehd9/what_are_the_best_open_source_ocr_models_currently/
WittyWithoutWorry
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okehd9
false
null
t3_1okehd9
/r/LocalLLaMA/comments/1okehd9/what_are_the_best_open_source_ocr_models_currently/
false
false
self
20
null
Technical follow-up to the 'Minimal Value Post' comment: Proof of MSA AGI's Core Architecture.
0
I understand your reactions. I created it, so I get it. But shouldn't the least you do be to bring a question that tests whether, when I input a certain value into the engine I built using GPT, a certain answer comes out? I posted my research because I wanted validation for what I created. So bring me a good question, and I will run the engine, capture all the results, and upload them.
2025-10-30T22:42:58
https://www.reddit.com/r/LocalLLaMA/comments/1oke1ib/technical_followup_to_the_minimal_value_post/
BaseballAccording702
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oke1ib
false
null
t3_1oke1ib
/r/LocalLLaMA/comments/1oke1ib/technical_followup_to_the_minimal_value_post/
false
false
self
0
null
[R] Reasoning Models Reason Well, Until They Don't (AACL 2025)
1
[removed]
2025-10-30T22:34:17
https://www.reddit.com/r/LocalLLaMA/comments/1okdu5r/r_reasoning_models_reason_well_until_they_dont/
natural_language_guy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okdu5r
false
null
t3_1okdu5r
/r/LocalLLaMA/comments/1okdu5r/r_reasoning_models_reason_well_until_they_dont/
false
false
self
1
null
Big Iron Build on a 1.5k budget
1
Hey y'all :3 Looking into doing a bigger build for larger AI models (possibly 200-600B at a range of quants, most likely Q4/Q2 on the 200B+ scale ones). This will most likely have to be an older-gen DDR4 system with MoE offloading. In my price range that looks to be Skylake-X era Xeon Golds, possibly two of them at 3GHz base, and I'll be aiming for all DIMM slots filled, even if we take a slight speed penalty. I'm fully aware non-MoE models will most likely be sub 1 t/s given the rough possible bandwidth of 12-channel DDR4 at 2133-2400MHz plus NUMA overheads, although I've seen Intel has made some interesting forks of various engines to get the most out of CPU-only inference. My question is: would MoE models with offload to possibly 2x 3090s, or something else of that class, turn this into something usable with large-scale models (usable for me being 10-20 t/s), or am I wasting my time? I can go for a 768GB system + 2 GPUs fairly easily in an HP Z8 G4 (although not two 3090s, I need something lower power). I have 2x RTX 5000 (Turing) I could throw in. Already planning a DDR5 2x64GB system for 80-120B models given the significant speed advantages possible on it, as a separate system. For context, I develop simple LLM bots, portable AI, real-life interaction methods for AI, etc. And well, I'm just a nerd for this stuff so happy to spend. Budget is somewhat fixed at $2k / £1.5k for system + CPU (no GPUs). Bye :3
2025-10-30T22:18:05
https://www.reddit.com/r/LocalLLaMA/comments/1okdgli/big_iron_build_on_a_15k_budget/
AmethystIsSad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okdgli
false
null
t3_1okdgli
/r/LocalLLaMA/comments/1okdgli/big_iron_build_on_a_15k_budget/
false
false
self
1
null
2 Questions to Experts : LLMs reliability in certain scenarios.
0
Hello, I'm a full-time developer. I know what LLMs are and how they work in general, but not in depth. Like many people who aren't anywhere close to techies, I tend to ask LLMs things that go beyond just coding questions, and I was wondering two things: 1. Is it possible to have an LLM be "objective"? By that I mean it doesn't agree with me all the time, or will it ALWAYS be biased by what you tell it (for example, if you are a Democrat it will tend to take the Democrat side or tell you your answer is right all the time)? 2. Is it possible to use LLMs as "gaming coaches"? I want to use an LLM to help me improve at multiplayer strategy games, and I wonder if it actually helps, or if it's all just junk that will say whatever the internet says without actually understanding my issues. Thank you!
2025-10-30T22:13:37
https://www.reddit.com/r/LocalLLaMA/comments/1okdcpa/2_questions_to_experts_llms_reliability_in/
Kind_Care_8368
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okdcpa
false
null
t3_1okdcpa
/r/LocalLLaMA/comments/1okdcpa/2_questions_to_experts_llms_reliability_in/
false
false
self
0
null
Are Qwen3‑235B‑A22B‑Thinking‑2507‑8bit and Qwen3‑235B‑A22B‑Thinking‑2507‑FP8 the same model (just different quantisation)?
1
Hey everyone — I’ve been diving into the model Qwen3‑235B‑A22B‑Thinking‑2507 lately, and came across two variant names: * Qwen3-235B-A22B-Thinking-2507-8bit * Qwen3-235B-A22B-Thinking-2507-FP8 My understanding so far is that they share the same architecture/checkpoint, but differ in quantisation format (8-bit integer vs FP8 floating point). However, I couldn’t find any official documentation that clearly states that the “8bit” naming is an official variant or exactly how it differs from “FP8”. Thanks in advance! really keen to get clarity here before I commit to one variant for my deployment setup. [https://huggingface.co/mlx-community/Qwen3-235B-A22B-Thinking-2507-8bit](https://huggingface.co/mlx-community/Qwen3-235B-A22B-Thinking-2507-8bit)
2025-10-30T21:31:38
https://www.reddit.com/r/LocalLLaMA/comments/1okccis/are_qwen3235ba22bthinking25078bit_and/
Desperate_Entrance71
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1okccis
false
null
t3_1okccis
/r/LocalLLaMA/comments/1okccis/are_qwen3235ba22bthinking25078bit_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'GHQbkBtW9AnNdkRGmZrI3hpsrHhjDk2kStBcz3lo9eg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GHQbkBtW9AnNdkRGmZrI3hpsrHhjDk2kStBcz3lo9eg.png?width=108&crop=smart&auto=webp&s=f8125759f762ab99b2d319654df84fba733e914f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GHQbkBtW9AnNdkRGmZrI3hpsrHhjDk2kStBcz3lo9eg.png?width=216&crop=smart&auto=webp&s=71554ae8aaf4be3366bb581bf2b83d3eb7d50356', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GHQbkBtW9AnNdkRGmZrI3hpsrHhjDk2kStBcz3lo9eg.png?width=320&crop=smart&auto=webp&s=0a17dcc48bd050eda4d22873649fe9dcfcd6d6b2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GHQbkBtW9AnNdkRGmZrI3hpsrHhjDk2kStBcz3lo9eg.png?width=640&crop=smart&auto=webp&s=cd0c1c7c4805645303efedb585a04c5e3e3dc003', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GHQbkBtW9AnNdkRGmZrI3hpsrHhjDk2kStBcz3lo9eg.png?width=960&crop=smart&auto=webp&s=7cd84425e8fc5077c81de0579e2698d583406429', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GHQbkBtW9AnNdkRGmZrI3hpsrHhjDk2kStBcz3lo9eg.png?width=1080&crop=smart&auto=webp&s=e4dab9fe66d734c0b36a9ec09e2e094e045e2722', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GHQbkBtW9AnNdkRGmZrI3hpsrHhjDk2kStBcz3lo9eg.png?auto=webp&s=aeedf426a10e5237780d7997e9e1a32db0af75c2', 'width': 1200}, 'variants': {}}]}
Made vision headphones, had to include access to local models to use at home for the local homies.
0
2025-10-30T21:27:56
https://www.reddit.com/gallery/1okc97a
ranoutofusernames__
reddit.com
1970-01-01T00:00:00
0
{}
1okc97a
false
null
t3_1okc97a
/r/LocalLLaMA/comments/1okc97a/made_vision_headphones_had_to_include_access_to/
false
false
https://a.thumbs.redditm…qc72UmQ7omF4.jpg
0
null
Choose Your Own Adventure App (Ollama compatible & Open Source)
10
I used to play DnD and love the choose your own adventure genre, so I made a Mac app that lets you do it with custom local models through Ollama, and if you don't have the compute, you can use a Groq API key. Everything is local (except for Groq API calls) and free. Just a fun little app I made for myself that I figured I would share. Enjoy! [Github Repo](https://github.com/gmfennema/CYOLLMA)
2025-10-30T21:14:51
https://i.redd.it/35xxswhlcbyf1.gif
thedelusionist
i.redd.it
1970-01-01T00:00:00
0
{}
1okbxki
false
null
t3_1okbxki
/r/LocalLLaMA/comments/1okbxki/choose_your_own_adventure_app_ollama_compatible/
false
false
default
10
{'enabled': True, 'images': [{'id': '35xxswhlcbyf1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=108&crop=smart&format=png8&s=7517416eb2a258025f840b2ac8d4d686b5f14b0b', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=216&crop=smart&format=png8&s=0aa0007fb2a617a69ad8e18259d31004c3743c17', 'width': 216}, {'height': 236, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=320&crop=smart&format=png8&s=940969394849d0d7abc7f9390b41e589c40d567f', 'width': 320}, {'height': 472, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=640&crop=smart&format=png8&s=8e738611944bd7454ed9adb9039384acebe9981e', 'width': 640}, {'height': 708, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=960&crop=smart&format=png8&s=202f7d3f661dfebf56b4874f60453d11ea04f617', 'width': 960}], 'source': {'height': 720, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?format=png8&s=9ca9e791d87025050c4879767907d2bcdef4c3ef', 'width': 976}, 'variants': {'gif': {'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=108&crop=smart&s=8573c78d42f18e66b854004440af276b64cb8e3b', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=216&crop=smart&s=6c10ac5cdd57ed9cffc4fc6d3a08737fad096c0f', 'width': 216}, {'height': 236, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=320&crop=smart&s=6a286d1d73e8027cbd51922b3d39a6b48067fa15', 'width': 320}, {'height': 472, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=640&crop=smart&s=365a13b33df250d636388ff2339b1b3aeb30d79d', 'width': 640}, {'height': 708, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=960&crop=smart&s=3735931bf0a8cdf89f8e29997adb28655fc366df', 'width': 960}], 'source': {'height': 720, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?s=fd6dfb59524d11485f3fed2f7beb31be17871b49', 'width': 976}}, 'mp4': {'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=108&format=mp4&s=9c682148070a09a5a69e935bf74aee4548450819', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=216&format=mp4&s=609d985b0c720f724a17662cb93dffd211b12d77', 'width': 216}, {'height': 236, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=320&format=mp4&s=ef57dc9da0d0c9212a7a43e8c8cbd050400548c2', 'width': 320}, {'height': 472, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=640&format=mp4&s=9e31ec4ac7822151f9d13897487bcc2f49ce5707', 'width': 640}, {'height': 708, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?width=960&format=mp4&s=207419e925e583138ef4b90c9848d667a046c65d', 'width': 960}], 'source': {'height': 720, 'url': 'https://preview.redd.it/35xxswhlcbyf1.gif?format=mp4&s=4c820cccb1b0b2e7a01470180150b8e8c9d776eb', 'width': 976}}}}]}
IBM just released unsloth for fine-tuning Granite4.0_350M
200
[https://github.com/unslothai/notebooks/blob/main/nb/Granite4.0\_350M.ipynb](https://github.com/unslothai/notebooks/blob/main/nb/Granite4.0_350M.ipynb) Big ups for the IBM folks for following up so quickly
2025-10-30T21:08:34
https://i.redd.it/vn84zysldbyf1.png
SnooMarzipans2470
i.redd.it
1970-01-01T00:00:00
0
{}
1okbrz4
false
null
t3_1okbrz4
/r/LocalLLaMA/comments/1okbrz4/ibm_just_released_unsloth_for_finetinuing/
false
false
https://a.thumbs.redditm…mNFJqo57gaQ4.jpg
200
{'enabled': True, 'images': [{'id': '5lK2pvVk92EbLle5WFFIOrT0WDN8Ao_jeNNq_ox4H7E', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/vn84zysldbyf1.png?width=108&crop=smart&auto=webp&s=45d6fd06d4b3f85eebd1a901c107f4fe7ce0e2cd', 'width': 108}, {'height': 72, 'url': 'https://preview.redd.it/vn84zysldbyf1.png?width=216&crop=smart&auto=webp&s=22706264ad2b38f4eed0c63a8283c776c6743ffb', 'width': 216}, {'height': 107, 'url': 'https://preview.redd.it/vn84zysldbyf1.png?width=320&crop=smart&auto=webp&s=a40239cdcc8d983693c94ac7beab901ce0e3d378', 'width': 320}, {'height': 214, 'url': 'https://preview.redd.it/vn84zysldbyf1.png?width=640&crop=smart&auto=webp&s=c3e80873a766a970e98766336d69701437c1c9c8', 'width': 640}, {'height': 321, 'url': 'https://preview.redd.it/vn84zysldbyf1.png?width=960&crop=smart&auto=webp&s=166ebdf369df4abc99bc6c861a6e4f93040f594d', 'width': 960}, {'height': 361, 'url': 'https://preview.redd.it/vn84zysldbyf1.png?width=1080&crop=smart&auto=webp&s=2e84eae520b3d2bb48a06cacdae4b7be06fe9e2a', 'width': 1080}], 'source': {'height': 518, 'url': 'https://preview.redd.it/vn84zysldbyf1.png?auto=webp&s=2164adc71805381148c63be642506246f30d690b', 'width': 1546}, 'variants': {}}]}
Llama-cpp QWen3-VL + Flux Image-to-Image Locally on Dual GPUs (3090 + 3060Ti)
88
Hey everyone, just wanted to share my setup for a fully local multimodal AI stack — combining llama.cpp (Qwen3-VL 32B) for vision + text and Stable Diffusion WebUI Forge (Flux-dev model) for image generation. This runs entirely offline on my 14900K, RTX 3090, and RTX 3060 Ti, with GPU separation for text vs. image workloads. Works for chat, vision tasks, and full image-to-image transformations. There is enough free VRAM on the 3090 to run GPT-OSS-120b with cpu-moe at the same time!

* Qwen3-VL-32B-Instruct (quantized Q4_K_M)
* GPT-OSS-120b mxfp4
* Flux1-dev-bnb-nf4-v2.safetensors (SD Forge)
* OpenWebUI
* llama.cpp (with CUDA + vision enabled)
* Stable Diffusion WebUI Forge (API mode)
* i9-14900K
* RTX 3090 (for LLM)
* RTX 3060 Ti (for Flux)

Workflow will be in a separate post below if there's enough interest.
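A glue sketch for a stack like this: ask the llama.cpp server (Qwen3-VL) for an image prompt, then send an img2img request to Forge's A1111-style API. The ports, endpoint paths, and payload fields are assumptions to check against your llama-server and Forge configs; GPU separation is assumed to be handled when each server is launched (e.g. via CUDA_VISIBLE_DEVICES).

```python
import base64
import requests

LLAMA = "http://localhost:8080"   # llama-server, OpenAI-compatible API
FORGE = "http://localhost:7860"   # SD WebUI Forge launched with --api

# 1) Ask the VLM/LLM side for a prompt.
prompt = requests.post(
    f"{LLAMA}/v1/chat/completions",
    json={"messages": [{"role": "user",
                        "content": "Write a one-line Flux prompt for a cozy reading nook."}]},
    timeout=300,
).json()["choices"][0]["message"]["content"]

# 2) Send an image-to-image request to Forge with that prompt.
with open("input.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode()

result = requests.post(
    f"{FORGE}/sdapi/v1/img2img",
    json={"init_images": [init_image], "prompt": prompt,
          "denoising_strength": 0.6, "steps": 20},
    timeout=600,
).json()

with open("output.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```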
2025-10-30T20:00:54
https://i.redd.it/c1lxfyn21byf1.png
Wrong-Historian
i.redd.it
1970-01-01T00:00:00
0
{}
1oka1pi
false
null
t3_1oka1pi
/r/LocalLLaMA/comments/1oka1pi/llamacpp_qwen3vl_flux_imagetoimage_locally_on/
false
false
default
88
{'enabled': True, 'images': [{'id': 'c1lxfyn21byf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/c1lxfyn21byf1.png?width=108&crop=smart&auto=webp&s=7404584e6abb6e530d1d582c57f9b252fe31a8d7', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/c1lxfyn21byf1.png?width=216&crop=smart&auto=webp&s=2b6e1ea3cbf5b0e2af172ab38db8b53b288e05c3', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/c1lxfyn21byf1.png?width=320&crop=smart&auto=webp&s=129a935d633e5574000c55ad389406e58b90cfa0', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/c1lxfyn21byf1.png?width=640&crop=smart&auto=webp&s=6d841491ac14c9cb1ed37dacfbc96291c6143561', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/c1lxfyn21byf1.png?width=960&crop=smart&auto=webp&s=5837fd06e2bd6b1b3b1da33d395375dd3a0fd26b', 'width': 960}], 'source': {'height': 2455, 'url': 'https://preview.redd.it/c1lxfyn21byf1.png?auto=webp&s=34670866b60b1e9424632216dffd98f2c59ada35', 'width': 1059}, 'variants': {}}]}
Ai Accelerator
2
Has anyone tested a 40 TOPS Kinara Ara-2?
2025-10-30T19:59:48
https://www.reddit.com/r/LocalLLaMA/comments/1oka0lt/ai_accelerator/
Soft_Examination1158
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oka0lt
false
null
t3_1oka0lt
/r/LocalLLaMA/comments/1oka0lt/ai_accelerator/
false
false
self
2
null
DeepSeek-OCR demonstrates the relevance of text-as-image compression: What does the future hold?
6
Hello, following the DeepSeek paper on data compression — transitioning from LLMs (Large Language Models) to VLMs (Vision-Language Models) to minimize tokens and improve performance — can we expect further gains? I've had two ideas, but I'm unsure about their viability.

- Training a vision model purely for diffusion (similar to diffusion-based LLMs) to generate the next part of the text in the DeepSeek-OCR input format. The entire textual context would be transformed into an image, and we would then extend this image using a vision model to obtain the continuation of the text. Could this be a promising direction?
- If transforming text into an image allows for performance gains (from my beginner's perspective, moving from 1D to 2D), could we, similar to the progression from vectors to matrices to tensors, imagine even more powerful compression by moving to a "video" format, for instance? This format would be abstract, much like tensors, which are difficult to visualize in the real world.

Sorry if my idea is not clear or not very relevant.
2025-10-30T19:49:55
https://www.reddit.com/r/LocalLLaMA/comments/1ok9ren/deepseekocr_demonstrates_the_relevance_of/
ContributionOwn4879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok9ren
false
null
t3_1ok9ren
/r/LocalLLaMA/comments/1ok9ren/deepseekocr_demonstrates_the_relevance_of/
false
false
self
6
null
Extensive Research into Knowledge Graph Traversal Algorithms for LLMs
1
[removed]
2025-10-30T19:25:31
https://www.reddit.com/gallery/1ok94yb
Alieniity
reddit.com
1970-01-01T00:00:00
0
{}
1ok94yb
false
null
t3_1ok94yb
/r/LocalLLaMA/comments/1ok94yb/extensive_research_into_knowledge_graph_traversal/
false
false
https://b.thumbs.redditm…tXuz97cwwDig.jpg
1
null
23[F4M] Down for video chat,sext and "GFE"
1
2025-10-30T19:18:59
https://i.redd.it/v3htulscuayf1.jpeg
jfranz5216
i.redd.it
1970-01-01T00:00:00
0
{}
1ok8yuf
false
null
t3_1ok8yuf
/r/LocalLLaMA/comments/1ok8yuf/23f4m_down_for_video_chatsext_and_gfe/
false
false
nsfw
1
{'enabled': True, 'images': [{'id': 'v3htulscuayf1', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=108&crop=smart&auto=webp&s=519c89f0f5a3bac1e9c75ed20e8baa4a2b39542b', 'width': 108}, {'height': 276, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=216&crop=smart&auto=webp&s=3b284fa30fec79bac63d93004e116c42b2d32b11', 'width': 216}, {'height': 408, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=320&crop=smart&auto=webp&s=55cd50a2a1392dafed58aa4d58364ed720ff197e', 'width': 320}, {'height': 817, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=640&crop=smart&auto=webp&s=e88bead4d8b0651e375857c7a770ce8fdffac7f6', 'width': 640}, {'height': 1226, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=960&crop=smart&auto=webp&s=ec9a3eb1cadca7e6c916846ea220a4b712af5e0f', 'width': 960}, {'height': 1380, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=1080&crop=smart&auto=webp&s=dcc3d4ce45d0451d6be5df2f781478c90e36aaff', 'width': 1080}], 'source': {'height': 3864, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?auto=webp&s=d0c9d7339148ea7dcd7dcf95bb44ff0003a0217b', 'width': 3024}, 'variants': {'nsfw': {'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=4b35e170fcd0a40d2258bffa32bfbb4f558d120a', 'width': 108}, {'height': 276, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=1c371b6ec2a7adb4580c0c04db9d822c9f54d9f3', 'width': 216}, {'height': 408, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=04f63f3bb7c7303eaf674bc4990c707a64afd60e', 'width': 320}, {'height': 817, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=c7e6f8d10bf275b5f7c1392d4ebcb833881c5c9b', 'width': 640}, {'height': 1226, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=9551e27a4d8f18bf45d0bfcca0ddc49d2ce7bacf', 'width': 960}, {'height': 1380, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=b2a13b057e91a46bd9efbd34f1516746176540cb', 'width': 1080}], 'source': {'height': 3864, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?blur=40&format=pjpg&auto=webp&s=055486f619931c67bd07833502e3b70ccdf12b04', 'width': 3024}}, 'obfuscated': {'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=4b35e170fcd0a40d2258bffa32bfbb4f558d120a', 'width': 108}, {'height': 276, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=1c371b6ec2a7adb4580c0c04db9d822c9f54d9f3', 'width': 216}, {'height': 408, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=04f63f3bb7c7303eaf674bc4990c707a64afd60e', 'width': 320}, {'height': 817, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=c7e6f8d10bf275b5f7c1392d4ebcb833881c5c9b', 'width': 640}, {'height': 1226, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=9551e27a4d8f18bf45d0bfcca0ddc49d2ce7bacf', 'width': 960}, {'height': 1380, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=b2a13b057e91a46bd9efbd34f1516746176540cb', 'width': 
1080}], 'source': {'height': 3864, 'url': 'https://preview.redd.it/v3htulscuayf1.jpeg?blur=40&format=pjpg&auto=webp&s=055486f619931c67bd07833502e3b70ccdf12b04', 'width': 3024}}}}]}
Qwen3-32B Nemotron GGUFs with extended context
53
Come and get them while they're hot! Fresh new GGUFs for the Nemotron Qwen3 32B version. Since nowadays 40k context is kind of meh, I uploaded all the GGUFs with Yarn RoPE extension factor 4 to extend the context to 160k. Have fun :>
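For reference, a minimal sketch of serving one of these GGUFs with llama-server. The filename is an assumption, and if the YaRN factor is already baked into the GGUF metadata (as the post implies), the explicit RoPE flags are redundant:

```bash
# Sketch only: serve a 160k-context Nemotron Qwen3 32B GGUF with llama-server.
# Filename is an assumption; drop the rope flags if the metadata already sets YaRN.
llama-server \
  --model Qwen3-Nemotron-32B-160k-Q4_K_M.gguf \
  --ctx-size 163840 \
  --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 40960 \
  -ngl 99 --host 0.0.0.0 --port 8080
```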
2025-10-30T19:08:07
https://huggingface.co/ilintar/Qwen3-Nemotron-32B-160k-GGUF
ilintar
huggingface.co
1970-01-01T00:00:00
0
{}
1ok8oi0
false
null
t3_1ok8oi0
/r/LocalLLaMA/comments/1ok8oi0/qwen332b_nemotron_ggufs_with_extended_context/
false
false
https://external-preview…38dd0fc6959e19cd
53
{'enabled': False, 'images': [{'id': 'yBqusUNjfr-V7_y7nYhSlrXelFBeODD4ShwNRxhNyPs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yBqusUNjfr-V7_y7nYhSlrXelFBeODD4ShwNRxhNyPs.png?width=108&crop=smart&auto=webp&s=645ffc65758eac95090f7e65d1ddadd45cd9ff24', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yBqusUNjfr-V7_y7nYhSlrXelFBeODD4ShwNRxhNyPs.png?width=216&crop=smart&auto=webp&s=240c963dda31e2b2320fd3c8133f34696f79f395', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yBqusUNjfr-V7_y7nYhSlrXelFBeODD4ShwNRxhNyPs.png?width=320&crop=smart&auto=webp&s=b5db43137ee008f198eb0b7bddbb038232e0ff61', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yBqusUNjfr-V7_y7nYhSlrXelFBeODD4ShwNRxhNyPs.png?width=640&crop=smart&auto=webp&s=7f4514e47d6186caa6f939f91a47d0f1a5a78fda', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yBqusUNjfr-V7_y7nYhSlrXelFBeODD4ShwNRxhNyPs.png?width=960&crop=smart&auto=webp&s=683445b8f5c4f7f34b2a89a3a10ad45bcb6de7fd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yBqusUNjfr-V7_y7nYhSlrXelFBeODD4ShwNRxhNyPs.png?width=1080&crop=smart&auto=webp&s=0f21233fce1d5a52153baf649dedbbae9aebe217', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yBqusUNjfr-V7_y7nYhSlrXelFBeODD4ShwNRxhNyPs.png?auto=webp&s=a98b2aece8f5e8e3de8e2a49f0704f9c7de7b534', 'width': 1200}, 'variants': {}}]}
Where can I get paid datasets for Social and Engineering Research?
1
Can you recommend where I can find data related to social, engineering, and transportation topics for my research work? I am open to paid as well as free data for research. Where can I find such datasets?
2025-10-30T19:06:52
https://www.reddit.com/r/LocalLLaMA/comments/1ok8n8x/where_can_i_get_paid_datasets_for_social_and/
Saurabus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok8n8x
false
null
t3_1ok8n8x
/r/LocalLLaMA/comments/1ok8n8x/where_can_i_get_paid_datasets_for_social_and/
false
false
self
1
null
mradermacher published the entire qwen3-vl series and You can now run it in Jan; just download the latest version of llama.cpp and you're good to go.
39
Profile with all the qwen3-vl series models: [https://huggingface.co/mradermacher](https://huggingface.co/mradermacher) https://preview.redd.it/0hpbr15unayf1.png?width=968&format=png&auto=webp&s=7dfbd6deab21e5a1b70e94ec479fe4ac284e4beb
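A minimal sketch of pulling one of these quants straight from Hugging Face with llama-server's `-hf` flag; the exact repo and quant names are assumptions, so check the actual listing on the profile above:

```bash
# Sketch only: download and serve a Qwen3-VL GGUF directly from Hugging Face.
# Repo and quant label are assumptions; pick a real file from the profile above.
llama-server -hf mradermacher/Qwen3-VL-8B-Instruct-GGUF:Q4_K_M \
  --ctx-size 16384 -ngl 99 --port 8080
# If the vision projector (mmproj) is not fetched automatically, download it
# from the same repo and pass it explicitly with --mmproj <file>.gguf
```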
2025-10-30T18:43:23
https://www.reddit.com/r/LocalLLaMA/comments/1ok80pp/mradermacher_published_the_entire_qwen3vl_series/
Illustrious-Swim9663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok80pp
false
null
t3_1ok80pp
/r/LocalLLaMA/comments/1ok80pp/mradermacher_published_the_entire_qwen3vl_series/
false
false
https://b.thumbs.redditm…fssDQO-d6Wnc.jpg
39
null
Faster llama.cpp ROCm performance for AMD RDNA3 (tested on Strix Halo/Ryzen AI Max 395)
148
The other day I was doing some [exploring on how ggml-cuda works](https://github.com/lhl/strix-halo-testing/blob/main/llama-cpp-fix-wmma/llama-cpp-cuda-hip.md) and I found that there were some easy fixes for llama.cpp's ROCm/HIP backend performance with rocWMMA (which sees bigger-than-expected drops with long context). These fixes I believe also solve most of the ROCm backend crashing problems (the default HIP path in llama.cpp's ROCm backend does not have a guard for fallback if there are missing tiles, I added a VEC fallback for those cases - without the guard, weird dimensions w/ missing tiles results in crashes). With these fixes, I believe this is the overall fastest/best RDNA3 backend (caveat: only tested on Strix Halo gfx1151, a few models at long context). It has had some positive feedback from testing by a few community members so I figure I'd share it somewhere more publicly so that those that are interested can poke around (NOTE: [this branch will not be merged upstream](https://github.com/ggml-org/llama.cpp/pull/16827)). - Feature Branch: https://github.com/lhl/llama.cpp/tree/rocm-wmma-tune - Actual changes: https://github.com/ggml-org/llama.cpp/compare/master...lhl:llama.cpp:rocm-wmma-tune - Testing and docs: https://github.com/lhl/strix-halo-testing/tree/main/llama-cpp-fix-wmma Here's an example of how significant the performance improvements are for me: ## Llama 3.2 1B Q4_K_M ### My rocWMMA vs HIP Prefill (pp) | model | size | params | test | HIP | lhl-tune-tile | Δ% | |------------------------|------------|----------|----------------|--------:|----------------:|-------:| | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 | 4703.28 | 4970.14 | 5.67% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 @ d1024 | 4076.03 | 4575.18 | 12.25% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 @ d4096 | 2936.89 | 3788.92 | 29.01% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 @ d16384 | 1350.48 | 2064.78 | 52.89% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 @ d65536 | 424.76 | 706.46 | 66.32% | Decode (tg) | model | size | params | test | HIP | lhl-tune-tile | Δ% | |------------------------|------------|----------|----------------|-------:|----------------:|-------:| | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 | 195.65 | 195.59 | -0.03% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 @ d1024 | 188.79 | 188.84 | 0.03% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 @ d4096 | 173.36 | 173.28 | -0.05% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 @ d16384 | 126.86 | 127.01 | 0.12% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 @ d65536 | 64.62 | 64.55 | -0.10% | ### My rocWMMA vs Previous rocWMMA Prefill (pp) | model | size | params | test | default-rocwmma | lhl-tune-tile | Δ% | |------------------------|------------|----------|----------------|------------------:|----------------:|-------:| | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 | 4884.42 | 4970.14 | 1.75% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 @ d1024 | 4204.81 | 4575.18 | 8.81% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 @ d4096 | 2959.54 | 3788.92 | 28.02% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 @ d16384 | 1265.62 | 2064.78 | 63.14% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | pp512 @ d65536 | 360.24 | 706.46 | 96.11% | Decode (tg) | model | size | params | test | default-rocwmma | lhl-tune-tile | Δ% | 
|------------------------|------------|----------|----------------|------------------:|----------------:|--------:| | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 | 193.01 | 195.59 | 1.34% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 @ d1024 | 182.6 | 188.84 | 3.42% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 @ d4096 | 143.51 | 173.28 | 20.74% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 @ d16384 | 87.53 | 127.01 | 45.11% | | llama 1B Q4_K - Medium | 762.81 MiB | 1.24 B | tg128 @ d65536 | 27.35 | 64.55 | 136.06% | ## gpt-oss-20b F16/MXFP4 ### My rocWMMA vs HIP Prefill (pp) | model | size | params | test | HIP | lhl-tune-tile | Δ% | |-----------------|--------------|----------|----------------|--------:|----------------:|-------:| | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 | 1472.01 | 1495.97 | 1.63% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 @ d1024 | 1387.58 | 1456.15 | 4.94% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 @ d4096 | 1175.72 | 1347.75 | 14.63% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 @ d16384 | 713.9 | 962.98 | 34.89% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 @ d65536 | 277.58 | 426.81 | 53.76% | Decode (tg) | model | size | params | test | HIP | lhl-tune-tile | Δ% | |-----------------|--------------|----------|----------------|------:|----------------:|-------:| | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 | 49.92 | 49.9 | -0.04% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 @ d1024 | 49.27 | 49.21 | -0.11% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 @ d4096 | 48.15 | 48.05 | -0.20% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 @ d16384 | 44.38 | 44.34 | -0.11% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 @ d65536 | 34.76 | 34.77 | 0.03% | ### My rocWMMA vs Previous rocWMMA Prefill (pp) | model | size | params | test | default-rocwmma | lhl-tune-tile | Δ% | |-----------------|--------------|----------|----------------|------------------:|----------------:|-------:| | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 | 1513.79 | 1495.97 | -1.18% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 @ d1024 | 1417.45 | 1456.15 | 2.73% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 @ d4096 | 1205.37 | 1347.75 | 11.81% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 @ d16384 | 669.77 | 962.98 | 43.78% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | pp512 @ d65536 | 227.24 | 426.81 | 87.83% | Decode (tg) | model | size | params | test | default-rocwmma | lhl-tune-tile | Δ% | |-----------------|--------------|----------|----------------|------------------:|----------------:|--------:| | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 | 50.23 | 49.9 | -0.64% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 @ d1024 | 48.65 | 49.21 | 1.16% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 @ d4096 | 45.11 | 48.05 | 6.53% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 @ d16384 | 32.91 | 44.34 | 34.72% | | gpt-oss 20B F16 | 13141.28 MiB | 20.91 B | tg128 @ d65536 | 14.63 | 34.77 | 137.71% | ## Strix Halo vs DGX Spark As another point of comparison, compared to ggeranov's recent [DGX Spark llama.cpp performance sweeps](https://github.com/ggml-org/llama.cpp/discussions/16578), both prefill and decode degradation are massively reduced, with decode (tg/token generation) now basically stably matching the DGX Spark (~-10%) from 0-32K context depth. 
### Vulkan AMDVLK | Test | DGX | STXH | % | | ------------- | ------: | -----: | ------: | | pp2048 | 1689.47 | 729.10 | +131.7% | | pp2048@d4096 | 1733.41 | 562.15 | +208.4% | | pp2048@d8192 | 1705.93 | 424.50 | +301.9% | | pp2048@d16384 | 1514.78 | 249.68 | +506.7% | | pp2048@d32768 | 1221.23 | 137.08 | +790.9% | | Test | DGX | STXH | % | | ----------- | ----: | ----: | -----: | | tg32 | 52.87 | 50.05 | +5.6% | | tg32@d4096 | 51.02 | 46.11 | +10.6% | | tg32@d8192 | 48.46 | 43.15 | +12.3% | | tg32@d16384 | 44.78 | 38.46 | +16.4% | | tg32@d32768 | 38.76 | 31.54 | +22.9% | ### ROCm w/ rocWMMA | Test | DGX | STXH | % | | ------------- | ------: | -----: | ------: | | pp2048 | 1689.47 | 1006.65 | +67.8% | | pp2048@d4096 | 1733.41 | 790.45 | +119.3% | | pp2048@d8192 | 1705.93 | 603.83 | +182.5% | | pp2048@d16384 | 1514.78 | 405.53 | +273.5% | | pp2048@d32768 | 1221.23 | 223.82 | +445.6% | | Test | DGX | STXH | % | | ----------- | ----: | ----: | -----: | | tg32 | 52.87 | 46.56 | +13.6% | | tg32@d4096 | 51.02 | 38.25 | +33.4% | | tg32@d8192 | 48.46 | 32.65 | +48.4% | | tg32@d16384 | 44.78 | 25.50 | +75.6% | | tg32@d32768 | 38.76 | 17.82 | +117.5% | ### [My Tuned rocWMMA](https://github.com/lhl/llama.cpp/tree/rocm-wmma-tune) | Test | DGX | STXH | % | | ------------- | ------: | -----: | ------: | | pp2048 | 1689.47 | 977.22 | +72.9% | | pp2048@d4096 | 1733.41 | 878.54 | +97.3% | | pp2048@d8192 | 1705.93 | 743.36 | +129.5% | | pp2048@d16384 | 1514.78 | 587.25 | +157.9% | | pp2048@d32768 | 1221.23 | 407.87 | +199.4% | | Test | DGX | STXH | % | | ----------- | ----: | ----: | -----: | | tg32 | 52.87 | 48.97 | +8.0% | | tg32@d4096 | 51.02 | 45.42 | +12.3% | | tg32@d8192 | 48.46 | 43.55 | +11.3% | | tg32@d16384 | 44.78 | 40.91 | +9.5% | | tg32@d32768 | 38.76 | 36.43 | +6.4% | **Note on Vulkan drivers and batch sizes:** - AMDVLK (shown below) uses optimal `-ub 512` and has better `pp` performance - RADV uses optimal `-ub 1024` with lower `pp` but `tg` decreases less at depth - ROCm tested with standard `-ub 2048`
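A minimal sketch of building the feature branch, assuming the usual llama.cpp HIP build options (GGML_HIP, GGML_HIP_ROCWMMA_FATTN, AMDGPU_TARGETS); check the branch's own notes if its flags differ:

```bash
# Sketch only: build the rocm-wmma-tune branch for gfx1151 (Strix Halo).
# Flag names follow the standard llama.cpp HIP build; you may also need to point
# CMake at ROCm's clang (e.g. via HIPCXX) depending on your distro.
git clone --branch rocm-wmma-tune https://github.com/lhl/llama.cpp llama.cpp-wmma
cd llama.cpp-wmma
cmake -B build \
  -DGGML_HIP=ON \
  -DGGML_HIP_ROCWMMA_FATTN=ON \
  -DAMDGPU_TARGETS=gfx1151 \
  -DCMAKE_BUILD_TYPE=Release
cmake --build build -j"$(nproc)"
# Reproduce the depth sweeps with llama-bench, e.g.:
./build/bin/llama-bench -m model.gguf -fa 1 -p 512 -n 128 -d 0,4096,16384
```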
2025-10-30T18:23:16
https://www.reddit.com/r/LocalLLaMA/comments/1ok7hd4/faster_llamacpp_rocm_performance_for_amd_rdna3/
randomfoo2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok7hd4
false
null
t3_1ok7hd4
/r/LocalLLaMA/comments/1ok7hd4/faster_llamacpp_rocm_performance_for_amd_rdna3/
false
false
self
148
{'enabled': False, 'images': [{'id': 'rAUlcrq7VfrIj88RJv8H7iTnA2A5axWQmLDhFVisyMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rAUlcrq7VfrIj88RJv8H7iTnA2A5axWQmLDhFVisyMU.png?width=108&crop=smart&auto=webp&s=a0652c34b710980cf615658059d54bd4eeb9c0f7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rAUlcrq7VfrIj88RJv8H7iTnA2A5axWQmLDhFVisyMU.png?width=216&crop=smart&auto=webp&s=7106166889ca347e77b0c91f35d0f5ce809d340f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rAUlcrq7VfrIj88RJv8H7iTnA2A5axWQmLDhFVisyMU.png?width=320&crop=smart&auto=webp&s=f76605964f6c7de2be4adb7ace4ca033f4e18635', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rAUlcrq7VfrIj88RJv8H7iTnA2A5axWQmLDhFVisyMU.png?width=640&crop=smart&auto=webp&s=3b9fbedd26cad57a9db27de027249c8d13e8e7bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rAUlcrq7VfrIj88RJv8H7iTnA2A5axWQmLDhFVisyMU.png?width=960&crop=smart&auto=webp&s=ff17fcfbef5727a7cb18bba58c729641960c3676', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rAUlcrq7VfrIj88RJv8H7iTnA2A5axWQmLDhFVisyMU.png?width=1080&crop=smart&auto=webp&s=3e21666839d9849275b9e4cdcba4ed4841c0b2c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rAUlcrq7VfrIj88RJv8H7iTnA2A5axWQmLDhFVisyMU.png?auto=webp&s=0e07537d6abb8edd1071fdb659039a23eb838d10', 'width': 1200}, 'variants': {}}]}
Translation Directives
0
Hi, I need to create a system prompt that translates text into various languages. The problem is that when it translates, it deletes tags that shouldn't be deleted. Here is an example: Sentence to translate: `"Lascia che ti dica una bella cosa \n[1], Tu non sei un Idiota, sei proprio uno Stronzo."` It translates it like this: `Let me tell you a good thing, You are not an Idiot, you are really an Asshole.` removing "\\n\[1\]". I need it not to delete this tag but to treat it as a person's name. How can I do it?
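One way to pin this down is to spell the rule out in the system prompt and test it against a local OpenAI-compatible endpoint. A sketch only (port, model name, and prompt wording are assumptions; llama-server's /v1/chat/completions is used just as an example backend):

```bash
# Sketch only: ask the model to copy placeholder tags through unchanged.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local",
    "messages": [
      {"role": "system",
       "content": "Translate the user text into English. Placeholder tags such as \\n[1] are literal tokens: copy them to the output unchanged, in the same position, and treat each one as if it were a person'\''s name."},
      {"role": "user",
       "content": "Lascia che ti dica una bella cosa \\n[1], Tu non sei un Idiota, sei proprio uno Stronzo."}
    ]
  }'
```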
2025-10-30T18:22:02
https://www.reddit.com/r/LocalLLaMA/comments/1ok7g60/translation_directives/
FurrPic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok7g60
false
null
t3_1ok7g60
/r/LocalLLaMA/comments/1ok7g60/translation_directives/
false
false
self
0
null
I Bought the Intel ARC B50 to use with LM Studio
27
I checked my email, and a message was waiting for me from B&H Photo: “Intel Arc Pro B50 Workstation SFF Graphics Card is now in stock!” The moment of decision had arrived. Since I got into running LLMs on my Ryzen 5700 several months ago, I had been exploring all sorts of options to improve my rig. The first step was to upgrade to 64GB of RAM (the two 32 GB RAM modules proved to be flaky, so I am in the process of returning them). While 64GB allowed me to run larger models, the speeds were not that impressive. For example, with DeepSeek R1/Qwen 8B and a 4K context window in LM Studio, I get 6–7 tokens per second (tps). Not painfully slow, but not very fast either. After sitting and waiting for tokens to flow, at some point I said, “I feel the need for speed!” Enter the Intel ARC B50. After looking at all of the available gaming graphics cards, I found them to be too power hungry, too expensive, too loud, and some of them generate enough heat to make a room comfy on a winter day. When I finally got the alert that it was back in stock, it did not take me long to pull the trigger. It had been unavailable for weeks, was heavily allocated, and I knew it would sell out fast. My needs were simple: better speed and enough VRAM to hold the models that I use daily without having to overhaul my system that lives in a mini tower case with a puny 400-watt power supply. The B50 checked all the boxes. It has 16GB of GDDR6 memory, a 128-bit interface, and 224 GB/s of bandwidth. Its Xe² architecture uses XMX (Intel Xe Matrix eXtensions) engines that accelerate AI inference far beyond what my CPU can deliver. With a 70-watt thermal design power and no external power connectors, the card fits easily into compact systems like mine. That mix of performance and ease of installation made it completely irresistible. And the price was only around $350, exceptional for a 16GB card. During my first week of testing, the B50 outperformed my 5700G setup by 2 to 4 times in inference throughput. For example, DeepSeek R1/Qwen 8B in LM Studio using the Vulkan driver delivers 32–33 tps, over 4X the CPU-only speed. Plus, most of the 64GB system memory is now freed for other tasks when LM Studio is generating text. When I first considered the Intel B50, I was initially skeptical. Intel’s GPU division has only recently re-entered the workstation space, and driver support is a valid concern. AMD and especially Nvidia have much more mature and well-supported drivers, and the latter company’s architecture is considered to be the industry standard. But the Intel drivers have proven to be solid, and the company seems to be committed to improving performance with every revision. For someone like me who values efficiency and longevity over pure speed, that kind of stability and support are reassuring. I think that my decision to buy the B50 was the right one for my workflow. The Intel Arc Pro B50 doesn’t just power my machine. It accelerates the pace of my ideas.
2025-10-30T18:01:21
https://www.reddit.com/r/LocalLLaMA/comments/1ok6w8r/i_bought_the_intel_arc_b50_to_use_with_lm_studio/
tony10000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok6w8r
false
null
t3_1ok6w8r
/r/LocalLLaMA/comments/1ok6w8r/i_bought_the_intel_arc_b50_to_use_with_lm_studio/
false
false
self
27
null
Advices in LLM
1
Hello LLM experts, I’m an LLM enthusiast with a use case where I could use some help. I have text templates with placeholders for variables, and those templates are used to generate full texts. For example: Template: My favorite meal is [x]. It’s delicious! [x] represents the placeholder. Some generated examples: My favorite meal is pizza. It’s delicious! My favorite meal is sushi. It’s delicious! Some others modifying the template: My favorite favorite meal is pizza. It’s delicious! My favorite meal is sushi. I love the sushi! I have read a lot about the subject, and since I want to extract the variables in the placeholders it looks like an NER task. However, my placeholders are not very specific, so I was thinking about open NER with an LLM, passing the template and the example in a prompt. But I wonder: why not take advantage of all the examples I have already extracted, including the ones without variations in the template? So I was thinking about some kind of RAG, but I don’t really know anything about it. How do you see my problem? Is it something that could be done with an LLM and prompting? Can I take advantage of the examples? I used a really easy example here; in reality I have nested variables, templates where 90% of the text is the variable, long and short texts, and a lot of variations. Feel free to ask me anything, and thank you in advance.
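For what it's worth, the "template plus filled text, ask for JSON" idea can be tried with plain prompting against any local OpenAI-compatible server. A sketch only (endpoint, port, model name, and prompt wording are assumptions, not a tested recipe):

```bash
# Sketch only: extract placeholder values from a filled-in template as JSON.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local",
    "messages": [
      {"role": "system",
       "content": "You extract placeholder values. Given a template and a filled-in text, return only a JSON object mapping each placeholder to the exact span of text that replaced it."},
      {"role": "user",
       "content": "Template: My favorite meal is [x]. It is delicious!\nText: My favorite meal is sushi. I love the sushi!"}
    ]
  }'
```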
2025-10-30T17:49:30
https://www.reddit.com/r/LocalLLaMA/comments/1ok6klk/advices_in_llm/
FindingDry1988
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok6klk
false
null
t3_1ok6klk
/r/LocalLLaMA/comments/1ok6klk/advices_in_llm/
false
false
self
1
null
A proxy or solution to deal with restarting llama-server ?
1
Hi! As the title says, I'm having issues with llama-server: after a while (several weeks) it stops working properly. It doesn't crash, but inference just lags out, and restarting the process fixes it. I'm looking to see if anyone else has had this issue in the past and how they are dealing with it, preferably automatically.
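A common workaround, sketched below: probe llama-server's /health endpoint from a small watchdog and restart the process when it stops answering. The unit name, port, and timeouts are assumptions; adjust them to your setup:

```bash
#!/usr/bin/env bash
# Sketch only: restart a llama-server that hangs without crashing.
# Assumes it runs as the systemd unit llama-server.service on port 8080.
while true; do
  if ! curl -sf --max-time 30 http://127.0.0.1:8080/health > /dev/null; then
    echo "$(date -Is) health check failed, restarting llama-server" >&2
    systemctl restart llama-server.service
    sleep 60   # give the model time to reload before probing again
  fi
  sleep 30
done
```

Note that systemd's Restart=on-failure only covers actual crashes, which is why an explicit health probe is needed for the lag-out case described here.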
2025-10-30T17:36:13
https://www.reddit.com/r/LocalLLaMA/comments/1ok67sj/a_proxy_or_solution_to_deal_with_restarting/
BraceletGrolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok67sj
false
null
t3_1ok67sj
/r/LocalLLaMA/comments/1ok67sj/a_proxy_or_solution_to_deal_with_restarting/
false
false
self
1
null
Locally hosted Loveable with full stack support and llama.cpp, and more
103
Hey everyone, I wanted to share my story. This year in February, I came up with some notion (mostly just pissed) that we couldn't use AI models as good as claude locally to design. The fact that they had all this training and design data held behind a wall (which you had to pay for) was super unnatural so I just started learning about AI and wanted to train my own model. The very first model that I trained, I put it on huggingface and it went trending overnight. It was on the front page right next to DeepSeek etc and people kept asking me who did all that? Was I part of a research group or academic? And I was just like no... just 22 year old with a laptop lol. Ever since then, I used my off hours from my full time job to train models and code software, with the intention of keeping everything open source. (Just angry again that we don't have gpus haha).The future of AI is definitely open source. Along the way I kept talking to people and realized that AI assisted coding is the future as well, freeing up mental capacity and space to do better things with your time like architecture and proper planning. Technology enabled a lot more people to become builders and I thought that was so cool, until I realized... Not open sourced again. Loveable, Cursor, etc.. Just a system prompt and tools. Why can I not change my own system prompts? Everythings closed source these days. So I built the opposite. My goal is to make coding models that look as good as Claude and a tool to use said coding models. So I built Tesslate Studio. Its open sourced, Apache 2.0. Bring your own models (llama.cpp, ollama, openrouter, lm studio, Litellm or your own urls), Bring your own agents (you can define the system prompt or tools or add in a new agent with the factory), and bring your own github urls to start with. AI should be open sourced and accessible to everyone. I don't want people changing my system prompts again as well as I would like to choose on my own when I would want to change the prompt for the stuff I'm building. [https://github.com/TesslateAI/Studio](https://github.com/TesslateAI/Studio) Each project also gets a Kanban board, notes. You can switch the agent whenever you want and try other people's agents if you have it hosted in a multi user environment. Drop any model in. use any agents with whatever tools you define. I am actively developing this and will continue to improve it based on feedback. The open source project will always be 100% free and I'm definitely looking for contributions, suggestions, issues, etc. Would love to work with some talented engineers. Docs: [https://docs.tesslate.com](https://docs.tesslate.com) Locally Hosting: * You can create multiple accounts and share it across your local net * Create agents that you can share across all the account * Users can fork their own agents and add in their own models * Collaboration coming soon! I have it hosted online for (free, Free GPT-5 and Qwen-coder) at [https://tesslate.com](https://tesslate.com) using cloud credits until they run out on the 12th of November. Thank You for taking the time to read this, I appreciate it!
2025-10-30T17:19:29
https://www.reddit.com/gallery/1ok5rn2
smirkishere
reddit.com
1970-01-01T00:00:00
0
{}
1ok5rn2
false
null
t3_1ok5rn2
/r/LocalLLaMA/comments/1ok5rn2/locally_hosted_loveable_with_full_stack_support/
false
false
https://b.thumbs.redditm…gkayW05s7pWs.jpg
103
null
Open source desktop app for generating synthetic data with local LLMs (Tauri + llama.cpp)
3
Hey! 👋 I built an open-source desktop app for generating diverse, consistent tabular synthetic data using local LLMs. Recently, I pretrained a model for video game dialogue classification to help NPCs evaluate their environment. Many people told me it wasn't a good idea to use existing dialogues for other "commercial" games. So I thought about building a desktop app that lets anyone generate data locally (for free). The key challenge with LLM-generated tabular data is maintaining both consistency and diversity. To solve this, each column has its own generation rules with strict typing (text, int, float, etc.). You can reference other columns in the same row using \`@column\_name\` tags, and use diversity operators like \`@RANDOM\_INT\_X\` to force varied distributions. For example, here's a rule for generating names: \`\`\`Generate a Firstname and Lastname for gender (@gender). Cultural origin (@RANDOM\_INT\_7): 0→American, 1→German, 2→French, 3→Indian, 4→Brazilian, 5→Spanish, 6→Japanese\`\`\` This ensures names match the gender (consistency) while distributing cultural backgrounds evenly across rows (diversity). Without the \`@RANDOM\_INT\_7\`, many LLMs tend to cluster around common anglophone names. The app is built with Tauri (Rust + TypeScript) and uses llama.cpp (via llama-cpp-rs) for inference. Everything runs locally, so no cloud dependencies, no API costs. [https://github.com/mavdol/sample01](https://github.com/mavdol/sample01) I'd especially love to hear about use cases you'd find valuable, ideas for additional operators or features. PRs are welcome if you want to contribute!
2025-10-30T17:18:10
https://www.reddit.com/r/LocalLLaMA/comments/1ok5qdx/open_source_desktop_app_for_generating_synthetic/
Tall_Insect7119
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok5qdx
false
null
t3_1ok5qdx
/r/LocalLLaMA/comments/1ok5qdx/open_source_desktop_app_for_generating_synthetic/
false
false
self
3
{'enabled': False, 'images': [{'id': 'mMUAmGfYeU1xvmYi0D9cZYRf6wWirYiXUnfA5zt5rwM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mMUAmGfYeU1xvmYi0D9cZYRf6wWirYiXUnfA5zt5rwM.png?width=108&crop=smart&auto=webp&s=0e5fd1028c5cd312ca062383fe6112729a38676e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mMUAmGfYeU1xvmYi0D9cZYRf6wWirYiXUnfA5zt5rwM.png?width=216&crop=smart&auto=webp&s=e0a2dbaf1a3d791dde675c5f63b5d28ca7905dbb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mMUAmGfYeU1xvmYi0D9cZYRf6wWirYiXUnfA5zt5rwM.png?width=320&crop=smart&auto=webp&s=2038b357cfa402fae6d0b7b78ce5cbf46a290df0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mMUAmGfYeU1xvmYi0D9cZYRf6wWirYiXUnfA5zt5rwM.png?width=640&crop=smart&auto=webp&s=835515f4c480fee06531d26cb89d7a977a995c36', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mMUAmGfYeU1xvmYi0D9cZYRf6wWirYiXUnfA5zt5rwM.png?width=960&crop=smart&auto=webp&s=b7e2e0690bb71d29b4863fe1ca10b1e671034cab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mMUAmGfYeU1xvmYi0D9cZYRf6wWirYiXUnfA5zt5rwM.png?width=1080&crop=smart&auto=webp&s=c74d89c0b9da5cb6b5247e8b9562b8120c731423', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mMUAmGfYeU1xvmYi0D9cZYRf6wWirYiXUnfA5zt5rwM.png?auto=webp&s=41fb9b2bdd32340949eee3ba7bdd96e3e8e2af1d', 'width': 1200}, 'variants': {}}]}
Technical follow-up to the 'Minimal Value Post' comment: Proof of MSA AGI's Core Architecture.
0
2025-10-30T17:10:41
https://i.redd.it/k9tz7dyd7ayf1.png
BaseballAccording702
i.redd.it
1970-01-01T00:00:00
0
{}
1ok5j44
false
null
t3_1ok5j44
/r/LocalLLaMA/comments/1ok5j44/technical_followup_to_the_minimal_value_post/
false
false
https://b.thumbs.redditm…mQc6hCTJK-zQ.jpg
0
{'enabled': True, 'images': [{'id': 'S6VmD_YYJxFP8-tj1oXtmPqu9kNZXplCJuH08wone5g', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/k9tz7dyd7ayf1.png?width=108&crop=smart&auto=webp&s=43e0a299427db88a948feaa18dcbbc088813e2d2', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/k9tz7dyd7ayf1.png?width=216&crop=smart&auto=webp&s=a00818973889afc14dd64f632afb59254f14f718', 'width': 216}, {'height': 321, 'url': 'https://preview.redd.it/k9tz7dyd7ayf1.png?width=320&crop=smart&auto=webp&s=85f24b44db0f18587be2c2677b186d66df61d74a', 'width': 320}, {'height': 643, 'url': 'https://preview.redd.it/k9tz7dyd7ayf1.png?width=640&crop=smart&auto=webp&s=450d12021244ab56dfe6031ac0f62c2157820e83', 'width': 640}, {'height': 965, 'url': 'https://preview.redd.it/k9tz7dyd7ayf1.png?width=960&crop=smart&auto=webp&s=43dcf1ca1b16ec4b040f91ff3a13a0124a2322e9', 'width': 960}], 'source': {'height': 1022, 'url': 'https://preview.redd.it/k9tz7dyd7ayf1.png?auto=webp&s=974b381e0fd57934101d7044db00ea36d5d2e246', 'width': 1016}, 'variants': {}}]}
Qwen3-VL-32B Q8 speeds in llama.cpp vs vLLM FP8 on a RTX PRO 6000
64
Support for Qwen3-VL has just been merged to llama.cpp, thanks to all the contributors and the qwen team! [https://github.com/ggml-org/llama.cpp/pull/16780](https://github.com/ggml-org/llama.cpp/pull/16780) The speed for the Q8 gguf's is actually faster\* in llama.cpp vs the FP8 version in vLLM, and it works pretty well. In particular the 32B model seems to be an improvement over the old 32B even only for the text gen outputs. Both tests done on a RTX PRO 6000. Llama.cpp Q8: https://preview.redd.it/zh4606w24ayf1.png?width=1590&format=png&auto=webp&s=897326f3c93a8c903cf0b774b249137adba18bde vLLM FP8: https://preview.redd.it/ymls02k44ayf1.png?width=1568&format=png&auto=webp&s=8d708a687593811d9c2828382a1ef13af259bb8b As you can see, openwebui shows the average t/s for the response, so total pp+tg averaged (ignore the $ amount, that's just a function of owui). \*In a single request \*With limited context \*In a short query I used my own quants for the Qwen3-VL-32B-instruct, that I uploaded here: [https://huggingface.co/bullerwins/Qwen3-VL-32B-Instruct-GGUF](https://huggingface.co/bullerwins/Qwen3-VL-32B-Instruct-GGUF) Usage: `llama-server --model Qwen3-VL-32B-Instruct-Q8_0.gguf --ctx-size 32000 -ngl 99 --host` [`0.0.0.0`](http://0.0.0.0) `--port 5000 --mmproj Qwen3-VL-32B-Instruct.mmproj` You need to download the .mmproj too which is found in the repo too. I've never quantized a VL model in gguf, only with llm-compressor for awq and fp8 so your mileage may vary, wait for the pros (Thireus/Bart/Aes...) quants for imatrix versions.
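For a quick check of the vision path, a sketch of sending an image to the same server (image path is an assumption; recent llama-server builds accept OpenAI-style image_url parts when started with --mmproj):

```bash
# Sketch only: describe a local image through the OpenAI-compatible endpoint
# on the port used in the command above (5000).
IMG_B64=$(base64 -w0 test.png)
curl -s http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,'"$IMG_B64"'"}}
      ]
    }]
  }'
```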
2025-10-30T17:07:05
https://www.reddit.com/r/LocalLLaMA/comments/1ok5fqf/qwen3vl32b_q8_speeds_in_llamacpp_vs_vllm_fp8_on_a/
bullerwins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok5fqf
false
null
t3_1ok5fqf
/r/LocalLLaMA/comments/1ok5fqf/qwen3vl32b_q8_speeds_in_llamacpp_vs_vllm_fp8_on_a/
false
false
https://b.thumbs.redditm…jNyP4GBKuFTY.jpg
64
null
A response to the ModTeam's 'Minimal Value Post' accusation: Here is the $MSA$ Core Code.
0
2025-10-30T17:05:53
https://i.redd.it/bd2en38f6ayf1.png
BaseballAccording702
i.redd.it
1970-01-01T00:00:00
0
{}
1ok5ekk
false
null
t3_1ok5ekk
/r/LocalLLaMA/comments/1ok5ekk/a_response_to_the_modteams_minimal_value_post/
false
false
default
0
{'enabled': True, 'images': [{'id': 'bd2en38f6ayf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/bd2en38f6ayf1.png?width=108&crop=smart&auto=webp&s=017580567c55134a477c5c0e0fe5340accc6925d', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/bd2en38f6ayf1.png?width=216&crop=smart&auto=webp&s=8a75fd4d12c22734b9492d96b5998341038d2c40', 'width': 216}, {'height': 321, 'url': 'https://preview.redd.it/bd2en38f6ayf1.png?width=320&crop=smart&auto=webp&s=d899d7a97c036d939f7b5a777bfed12ad9531231', 'width': 320}, {'height': 643, 'url': 'https://preview.redd.it/bd2en38f6ayf1.png?width=640&crop=smart&auto=webp&s=473fa5e2639ab3faa666cf3db9117a979b0525d3', 'width': 640}, {'height': 965, 'url': 'https://preview.redd.it/bd2en38f6ayf1.png?width=960&crop=smart&auto=webp&s=54b165fabd62e77735c051972838034b67593f2a', 'width': 960}], 'source': {'height': 1022, 'url': 'https://preview.redd.it/bd2en38f6ayf1.png?auto=webp&s=0a7d5b292a4a575f0d8f1caa9e9a30883e4bb0cc', 'width': 1016}, 'variants': {}}]}
Chrono Edit Released
38
"**ChronoEdit-14B** enables physics-aware image editing and action-conditioned world simulation through temporal reasoning. It distills priors from a 14B-parameter pretrained video generative model and separates inference into (i) a **video reasoning stage** for latent trajectory denoising, and (ii) an **in-context editing stage** for pruning trajectory tokens. ChronoEdit-14B was developed by NVIDIA as part of the **ChronoEdit** family of multimodal foundation models. This model is ready for commercial use." From There Repo
2025-10-30T17:00:43
https://www.reddit.com/r/LocalLLaMA/comments/1ok59j7/chrono_edit_released/
Brave-Hold-9389
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok59j7
false
null
t3_1ok59j7
/r/LocalLLaMA/comments/1ok59j7/chrono_edit_released/
false
false
self
38
{'enabled': False, 'images': [{'id': 'bhp2i2x3-V3oY91WIy7kJFnNE14JovYM4jHfHwIEBhU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bhp2i2x3-V3oY91WIy7kJFnNE14JovYM4jHfHwIEBhU.png?width=108&crop=smart&auto=webp&s=7e1a2189f05b29ee0bdb26566bde8593b7084cdd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bhp2i2x3-V3oY91WIy7kJFnNE14JovYM4jHfHwIEBhU.png?width=216&crop=smart&auto=webp&s=cbb7b86174b6a5fdb01003443e3eb7b0fd7ec8fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bhp2i2x3-V3oY91WIy7kJFnNE14JovYM4jHfHwIEBhU.png?width=320&crop=smart&auto=webp&s=95dbf63dc32914e1fb70763e79efa8bb003d9a53', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bhp2i2x3-V3oY91WIy7kJFnNE14JovYM4jHfHwIEBhU.png?width=640&crop=smart&auto=webp&s=0d4f73b3e4064ec16fcb7cd35d05e8305b475a32', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bhp2i2x3-V3oY91WIy7kJFnNE14JovYM4jHfHwIEBhU.png?width=960&crop=smart&auto=webp&s=bfb58db8e5e19a4aac70e0e30a3f7c9cb94f7432', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bhp2i2x3-V3oY91WIy7kJFnNE14JovYM4jHfHwIEBhU.png?width=1080&crop=smart&auto=webp&s=98de9497ae1c9e532502e19b57977351482f5683', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bhp2i2x3-V3oY91WIy7kJFnNE14JovYM4jHfHwIEBhU.png?auto=webp&s=c154ebeb7147eb31cdf24c15bc31ad0d47a3ea35', 'width': 1200}, 'variants': {}}]}
Cross-structural alignment for better fine-tuning on code-specific work
1
I have a question: I was thinking of testing this theory https://github.com/Intro0siddiqui/Cross-Structural-Alignment-for-Efficient-Code-Language-Fine-Tuning , so what do you guys think about using Mistral fine-tuning to test it and then benchmarking the difference? Also, any recommendations on which language I should use for testing?
2025-10-30T16:29:36
https://www.reddit.com/r/LocalLLaMA/comments/1ok4fa5/crossstruturedallignment_for_better_fine_tuning/
Ok_Horror_8567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok4fa5
false
null
t3_1ok4fa5
/r/LocalLLaMA/comments/1ok4fa5/crossstruturedallignment_for_better_fine_tuning/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DsoCJF9vthxZyW4fqWI4-s-B0nJL13ZtRQIrg4W1z88', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DsoCJF9vthxZyW4fqWI4-s-B0nJL13ZtRQIrg4W1z88.png?width=108&crop=smart&auto=webp&s=bc0a9eac910c08f7b926c3ef68d606eafcfc164c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DsoCJF9vthxZyW4fqWI4-s-B0nJL13ZtRQIrg4W1z88.png?width=216&crop=smart&auto=webp&s=c60cf3fbaec904e405558ce12eab327930495fc3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DsoCJF9vthxZyW4fqWI4-s-B0nJL13ZtRQIrg4W1z88.png?width=320&crop=smart&auto=webp&s=26ccd264840d1d5a436942074c350d2b9713524c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DsoCJF9vthxZyW4fqWI4-s-B0nJL13ZtRQIrg4W1z88.png?width=640&crop=smart&auto=webp&s=743fe8a37a0c962f54bc9a206668d095ca805b9b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DsoCJF9vthxZyW4fqWI4-s-B0nJL13ZtRQIrg4W1z88.png?width=960&crop=smart&auto=webp&s=104eb97a82f9a089d8afc427823c82a744834901', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DsoCJF9vthxZyW4fqWI4-s-B0nJL13ZtRQIrg4W1z88.png?width=1080&crop=smart&auto=webp&s=fd335fb8560025d14ecb545e198dbac98b2be22c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DsoCJF9vthxZyW4fqWI4-s-B0nJL13ZtRQIrg4W1z88.png?auto=webp&s=adcf8ef5789816c86dcd52d71a33f9260b8c7e90', 'width': 1200}, 'variants': {}}]}
🦙💥 Building llama.cpp with Vulkan backend on Android (Termux ARM64)
19
**Pre-script(PS)- I wrote/copied this using AI. I am not a writer, yet. Everything was done natively on Snapdragon 7 Plus Gen 3/12 GB RAM Phone using Termux.** AI- Since there’s almost zero info out there on building both `glslc(Arm64)` and `llama.cpp (Vulkan)` natively on Android, here’s the working procedure. 🧩 **Prerequisites** You’ll need: ```bash pkg install git cmake ninja clang python vulkan-tools ``` > 🧠 Tip: Ensure your Termux has Vulkan-capable drivers. You can verify with: ```bash vulkaninfo | head ``` > If it prints valid info (not `segfault`), you’re good. (H- Vulkan is pretty much on every phone made post 2016, I think.) --- 📦 **Step 1 — Clone and build Shaderc (for glslc)** ```bash cd ~ git clone --recursive https://github.com/google/shaderc cd shaderc mkdir build && cd build cmake .. -G Ninja \ -DCMAKE_BUILD_TYPE=Release \ -DSHADERC_SKIP_TESTS=ON ninja glslc_exe ``` This builds the GLSL compiler (`glslc_exe`), needed by Vulkan. 👉 The working binary will be here: `~/shaderc/build/glslc/glslc` --- ⚙️ **Step 2 — Clone and prepare llama.cpp** >H- You already know how. > Now comes the critical step. --- 🚀 **Step 3 — Build llama.cpp with Vulkan backend** The key flag is `-DVulkan_GLSLC_EXECUTABLE`, which must point to the actual binary (`glslc`), not just the directory. ```bash cmake .. -G Ninja \ -DGGML_VULKAN=ON \ -DVulkan_GLSLC_EXECUTABLE=/data/data/com.termux/files/home/shaderc/build/glslc/glslc \ -DCMAKE_BUILD_TYPE=Release ninja ``` --- 🧠 **Notes** - `glslc_exe` builds fine on Termux without cross-compiling. - `llama.cpp` detects Vulkan properly if vulkaninfo works. - You can confirm Vulkan backend built by checking: ```bash ./bin/llama-cli --help | grep vulkan ``` - Expect a longer build due to shader compilation steps. (Human- It's quick, with `ninja -j$(nproc)`) --- 🧩 **Tested on** - **Device:** Snapdragon 7+ Gen 3 - **Termux:** 0.118 (Android 15) - **Compiler:** Clang 17 - **Vulkan:** Working via system drivers (H- kinda) --- *H- After this, `llama.cpp` executables i.e. `llama-cli/server` etc were running but phone wouldn't expose GPU driver, and `LD_LIBRARY_PATH` did nothing (poor human logic). So a hacky workaround and possible rebuild below-* --- **How I Ran `llama.cpp` on Vulkan with Adreno GPU in Termux on Android (Snapdragon 7+ Gen 3)** Hey r/termux / r/LocalLLaMA / r/MachineLearning — after days (H- hours) of wrestling, I got llama.cpp running with Vulkan backend on my phone in Termux. It detects the Adreno 732 GPU and offloads layers, but beware: it's unstable (OOM, DeviceLostError, gibberish output). OpenCL works better for stable inference, but Vulkan is a fun hack. This is a step-by-step guide for posterity. Tested on Android 14, Termux from F-Droid. Your mileage may vary on other devices — Snapdragon with Adreno GPU required. **Prerequisites** - Termux installed. - Storage access: `termux-setup-storage` - Basic packages: `pkg install clang cmake ninja git vulkan-headers vulkan-tools vulkan-loader` ~~ **Step 1: Build shaderc and glslc (Vulkan Shader Compiler)** Vulkan needs glslc for shaders. Build from source:~~ **Step 2: Clone and Configure `llama.cpp`** ```bash cd ~ git clone https://github.com/ggerganov/llama.cpp cd llama.cpp mkdir build_vulkan && cd build_vulkan cmake .. 
-G Ninja -DGGML_VULKAN=ON -DVulkan_GLSLC_EXECUTABLE=$HOME/shaderc/build/glslc/glslc ``` If `CMake` complains about `libvulkan.so`: - Remove broken symlink: `rm $PREFIX/lib/libvulkan.so` - Copy real loader: `cp /system/lib64/libvulkan.so $PREFIX/lib/libvulkan.so` - Clear cache: `rm -rf CMakeCache.txt CMakeFiles/` - Re-run `CMake`. **Step 3: Build** ```bash ninja -j$(nproc) ``` Binary is at `bin/llama-cli` **Step 4: Create `ICD JSON` for Adreno Vulkan loader needs this to find the driver. ```bash cat > $HOME/adreno.json << 'EOF' { "file_format_version": "1.0.0", "ICD": { "library_path": "/vendor/lib64/hw/vulkan.adreno.so", "api_version": "1.3.268" } } EOF ``` **Step 5: Set Environment Variables** ```bash export VK_ICD_FILENAMES=$HOME/adreno.json export LD_LIBRARY_PATH=/vendor/lib64/hw:$PREFIX/lib:$LD_LIBRARY_PATH ``` Add to `~/.bashrc` for persistence. **Step 6: Test Detection** ```bash bin/llama-cli --version ``` You should see: ``` ggml_vulkan: Found 1 Vulkan devices: ggml_vulkan: 0 = Adreno (TM) 732 (Qualcomm Technologies Inc. Adreno Vulkan Driver) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: none ``` Download a small GGUF model (e.g., Phi-3 Mini Q4_K_M from HuggingFace). ```bash bin/llama-cli \ -m phi-3-mini-4k-instruct-q4_K_M.gguf \ -p "Test prompt:" \ -n 128 \ --n-gpu-layers 20 \ --color ``` Offloads layers to GPU. But often OOM (reduce --n-gpu-layers), DeviceLostError, or gibberish. Q4_0/Q4_K may fail shaders; Q8_0 is safer but larger. *PS- I tested multiple models. `OpenCL` crashes `Termux` with exit code -9 on my phone if total GPU Load crosses ~3 GB. Something like that is happening with `Vulkan` build as well. All models that run fine on CPU or CPU+OpenCL generate gibberish. I'll post samples below if I get the time, however those of you who want to experiment yourselves can do so, now the build instructions have been shared with you. If some of you are able to fix inference please post a comment with `llama-cli/server` options.*
2025-10-30T16:27:22
https://www.reddit.com/r/LocalLLaMA/comments/1ok4d2n/building_llamacpp_with_vulkan_backend_on_android/
Brahmadeo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok4d2n
false
null
t3_1ok4d2n
/r/LocalLLaMA/comments/1ok4d2n/building_llamacpp_with_vulkan_backend_on_android/
false
false
self
19
{'enabled': False, 'images': [{'id': 'AajuRlAoLICcdGk2YaTVkPsRey43XhP6RiKzgOWjE6I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AajuRlAoLICcdGk2YaTVkPsRey43XhP6RiKzgOWjE6I.png?width=108&crop=smart&auto=webp&s=4f86eceb67844982a4e71706dbf63600134823a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AajuRlAoLICcdGk2YaTVkPsRey43XhP6RiKzgOWjE6I.png?width=216&crop=smart&auto=webp&s=06b5180283304e85fac9df89a7889faa6920614e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AajuRlAoLICcdGk2YaTVkPsRey43XhP6RiKzgOWjE6I.png?width=320&crop=smart&auto=webp&s=9a50d7bb359666f84cfe55e744f9a57b8b73cf60', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AajuRlAoLICcdGk2YaTVkPsRey43XhP6RiKzgOWjE6I.png?width=640&crop=smart&auto=webp&s=3b0134b0dd38d10ef76906fee49f9f45b36d8823', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AajuRlAoLICcdGk2YaTVkPsRey43XhP6RiKzgOWjE6I.png?width=960&crop=smart&auto=webp&s=39f2e97e26350ab3849685ef67d8d1885da97120', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AajuRlAoLICcdGk2YaTVkPsRey43XhP6RiKzgOWjE6I.png?width=1080&crop=smart&auto=webp&s=6f852992071c084f4aef6a70953e6e9ff7b94673', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AajuRlAoLICcdGk2YaTVkPsRey43XhP6RiKzgOWjE6I.png?auto=webp&s=d7169f033db3b75db88deca2c64e4dd7f9ea2c01', 'width': 1200}, 'variants': {}}]}
Which is the best place to rent a 4090?
0
I need to run open source LLMs locally. Do you have any suggestions for renting a 4090 cloud machine? I once used [vast.ai](http://vast.ai), but it wasn't stable enough. Thanks!
2025-10-30T16:22:59
https://www.reddit.com/r/LocalLLaMA/comments/1ok48us/which_is_the_best_place_to_rent_a_4090/
Melinda_McCartney
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok48us
false
null
t3_1ok48us
/r/LocalLLaMA/comments/1ok48us/which_is_the_best_place_to_rent_a_4090/
false
false
self
0
{'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=216&crop=smart&auto=webp&s=5d4693d9fc011431e9348152136fa7a13c95504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=320&crop=smart&auto=webp&s=93ef867725a538dad3a6209e5062d3d1de60aeaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=640&crop=smart&auto=webp&s=fc186b216811c20876ecdaf0e913cc0b59498d7a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=960&crop=smart&auto=webp&s=67812638cc7d2b930cd8bebf733409c3b2d92397', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=1080&crop=smart&auto=webp&s=bc092f31a95e3a3df682dc8f7222b0fb1363a5df', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?auto=webp&s=c5b1db2b11bd21a955cbe1e863cde94ef57607f4', 'width': 4000}, 'variants': {}}]}
Why can't locally run LLMs answer this simple math question?
0
Give an example of a scheme that doesn't have a morphism to Spec(Z) (I didn't come up with this question, I found it on r/chatgpt somewhere, can't be bothered to find the original post). The correct answer (according to them, I don't understand the question well enough to know if this answer is correct) is that there is no answer. Every scheme has a morphism to Spec(Z). I asked ChatGPT and Gemini, both were able to correctly answer the question. But then I tried asking some locally run models: using LM Studio with gpt-oss-20b, gemma-3-27b, and Deepseek r1, none of them were able to get it right. They just kept on hallucinating random answers such as a non-integral scheme, the projective line, the empty scheme, etc. (Deepseek gets stuck in an infinite loop "thinking"). Are there any models that can answer the question?
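For context, the standard fact behind the "no such scheme exists" answer (this is textbook material, not something from the post): morphisms into an affine scheme correspond to ring maps out of its ring, and Z is the initial commutative ring, so

```latex
\operatorname{Hom}_{\mathbf{Sch}}\bigl(X,\ \operatorname{Spec}\mathbb{Z}\bigr)
  \;\cong\;
\operatorname{Hom}_{\mathbf{Ring}}\bigl(\mathbb{Z},\ \Gamma(X,\mathcal{O}_X)\bigr)
```

and the right-hand side always has exactly one element. So Spec(Z) is the terminal object of schemes, and every scheme (even the empty one) admits exactly one morphism to it, which is why the question has no valid example.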
2025-10-30T16:17:40
https://www.reddit.com/r/LocalLLaMA/comments/1ok43o7/why_cant_locally_run_llms_answer_this_simple_math/
Suspicious-Host9042
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok43o7
false
null
t3_1ok43o7
/r/LocalLLaMA/comments/1ok43o7/why_cant_locally_run_llms_answer_this_simple_math/
false
false
self
0
null
Best open-source models for large legal documents?
0
Building a legal AI for construction contracts. Need to handle large documents accurately. Looking for recommendations on: * **Small model** (for high-end laptops) * **Mid-sized model** (for better accuracy on complex legal documents) What models would you suggest based on legal benchmark performance, not marketing? Also, I did look this up myself, but I'm getting really confused.
2025-10-30T16:17:19
https://www.reddit.com/r/LocalLLaMA/comments/1ok43bp/best_opensource_models_for_large_legal_documents/
Alternative_Gap_4535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok43bp
false
null
t3_1ok43bp
/r/LocalLLaMA/comments/1ok43bp/best_opensource_models_for_large_legal_documents/
false
false
self
0
null
Qwen code and MCP servers configuration trick
1
As Granite models have a huge context and can run on my mere 8GB GPU, I spent a lot of time trying to configure MCP servers for qwen code on Windows (PowerShell or cmd, since the git bash terminal won't work). No instructions said anything useful; one site suggested escaping the slashes twice, but that didn't work. Out of desperation I also tried opencode, but there the providers had issues serving the LLM model (I use llamacpp and the OpenAI URL is standard....). In the end, it turned out that on Windows paths you need 4 backslashes, as per: "serena": { "command": "uv", "args": \["run", "--directory", "C:\\\\\\\\Temp\\\\\\\\serena", "serena", "start-mcp-server"\], "cwd": "C:\\\\\\\\Temp\\\\\\\\serena", "timeout": 60000, "trust": false } Enjoy!!!
2025-10-30T16:13:17
https://www.reddit.com/r/LocalLLaMA/comments/1ok3zcz/qwen_code_and_mcp_servers_configuration_trick/
R_Duncan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok3zcz
false
null
t3_1ok3zcz
/r/LocalLLaMA/comments/1ok3zcz/qwen_code_and_mcp_servers_configuration_trick/
false
false
self
1
null
200+ pages of Hugging Face secrets on how to train an LLM
1,850
Hey it's elie from the hugging face pre-training team! We're very excited to share our new blog (book?) that cover the full pipeline: pre-training, post-training and infra. 200+ pages of what worked, what didn’t, and how to make it run reliably :) [https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook](https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook) Hope yall will enjoy it, don't hesitate to make feedback on the community tab :)
2025-10-30T16:11:22
https://i.redd.it/s12qz4k3w9yf1.png
eliebakk
i.redd.it
1970-01-01T00:00:00
0
{}
1ok3xie
false
null
t3_1ok3xie
/r/LocalLLaMA/comments/1ok3xie/200_pages_of_hugging_face_secrets_on_how_to_train/
false
false
https://b.thumbs.redditm…5Nn1NNG4nZPw.jpg
1,850
{'enabled': True, 'images': [{'id': 'EF7ANGNIvfw-4RUiJFoZyHAFFNO9kq-jt3TGOT6UXuY', 'resolutions': [{'height': 140, 'url': 'https://preview.redd.it/s12qz4k3w9yf1.png?width=108&crop=smart&auto=webp&s=00fa8a9e5543848ef3345dcc1c14eb34761cf129', 'width': 108}, {'height': 280, 'url': 'https://preview.redd.it/s12qz4k3w9yf1.png?width=216&crop=smart&auto=webp&s=9efdbd1381164fede8256f38a22716c052b3f8b3', 'width': 216}, {'height': 416, 'url': 'https://preview.redd.it/s12qz4k3w9yf1.png?width=320&crop=smart&auto=webp&s=60eb58fb5ca187bff9ee2ac14a54571de525af10', 'width': 320}, {'height': 832, 'url': 'https://preview.redd.it/s12qz4k3w9yf1.png?width=640&crop=smart&auto=webp&s=44c78fbb2faf8b6857633466eb7cf34609898a57', 'width': 640}, {'height': 1248, 'url': 'https://preview.redd.it/s12qz4k3w9yf1.png?width=960&crop=smart&auto=webp&s=215a9e197fca24b2fc5dcb73815581d71a24d905', 'width': 960}, {'height': 1404, 'url': 'https://preview.redd.it/s12qz4k3w9yf1.png?width=1080&crop=smart&auto=webp&s=247b2fcc0adedd5c47ad98f47a794398510b950b', 'width': 1080}], 'source': {'height': 1950, 'url': 'https://preview.redd.it/s12qz4k3w9yf1.png?auto=webp&s=88ff4cacf7b4200ba6b00eac1e69338fb2f93fa8', 'width': 1500}, 'variants': {}}]}
Can you give me some recommendations?
1
Hello, I bought a Macbook Pro with M4 Max chip and 64 gb ram, I want to get into the local LLM space, I am a complete beginner and what I want is a local model that I can use to help with coding, checking emails and maybe using n8n(hosted locally on Docker) to create some agents. Can you give me recommendations on which model I should use, and how can I make it perform in the way I want it to. Thanks in advance.
2025-10-30T16:09:25
https://www.reddit.com/r/LocalLLaMA/comments/1ok3vkw/can_you_give_me_some_recommendations/
edgetr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok3vkw
false
null
t3_1ok3vkw
/r/LocalLLaMA/comments/1ok3vkw/can_you_give_me_some_recommendations/
false
false
self
1
null
npcsh--the AI command line toolkit from Indiana-based research startup NPC Worldwide--featured on star-history
0
npcsh gives you the ability to define agents and jinja execution templates within a local data layer, letting you focus on agent persona and the specific automations you want to build.
2025-10-30T15:53:43
https://www.star-history.com/blog/npcsh
BidWestern1056
star-history.com
1970-01-01T00:00:00
0
{}
1ok3fz5
false
null
t3_1ok3fz5
/r/LocalLLaMA/comments/1ok3fz5/npcshthe_ai_command_line_toolkit_from/
false
false
default
0
null
Qwen 3 VL merged into llama.cpp!
349
[https://github.com/ggml-org/llama.cpp/pull/16780](https://github.com/ggml-org/llama.cpp/pull/16780) WE ARE SO BACK!
2025-10-30T15:21:24
https://www.reddit.com/r/LocalLLaMA/comments/1ok2lht/qwen_3_vl_merged_into_llamacpp/
ervertes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok2lht
false
null
t3_1ok2lht
/r/LocalLLaMA/comments/1ok2lht/qwen_3_vl_merged_into_llamacpp/
false
false
self
349
{'enabled': False, 'images': [{'id': 'UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=108&crop=smart&auto=webp&s=a4f7ec1fb5304e8b495d65a31b599f3e4cc7ce98', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=216&crop=smart&auto=webp&s=ee6205531c39ca60251e59532645245f3f794d1c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=320&crop=smart&auto=webp&s=9b4cf6442063da6c7ebf42e05054883c6a3ab799', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=640&crop=smart&auto=webp&s=61edd5c6fafdde538522a801c54ffb4af2835bea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=960&crop=smart&auto=webp&s=c6ce0a3853d0d699f7db3eb4bd325d8ce4b4f254', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=1080&crop=smart&auto=webp&s=739d7f3564581fd2481211a0d6355c7806b06cd0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?auto=webp&s=2a84fbc018c48122f454728d1d96f558345d2d5c', 'width': 1200}, 'variants': {}}]}
support for Qwen3 VL has been merged into llama.cpp
87
2025-10-30T15:20:37
https://github.com/ggml-org/llama.cpp/pull/16780
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1ok2kr3
false
null
t3_1ok2kr3
/r/LocalLLaMA/comments/1ok2kr3/support_for_qwen3_vl_has_been_merged_into_llamacpp/
false
false
default
87
{'enabled': False, 'images': [{'id': 'UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=108&crop=smart&auto=webp&s=a4f7ec1fb5304e8b495d65a31b599f3e4cc7ce98', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=216&crop=smart&auto=webp&s=ee6205531c39ca60251e59532645245f3f794d1c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=320&crop=smart&auto=webp&s=9b4cf6442063da6c7ebf42e05054883c6a3ab799', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=640&crop=smart&auto=webp&s=61edd5c6fafdde538522a801c54ffb4af2835bea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=960&crop=smart&auto=webp&s=c6ce0a3853d0d699f7db3eb4bd325d8ce4b4f254', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?width=1080&crop=smart&auto=webp&s=739d7f3564581fd2481211a0d6355c7806b06cd0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UOy5aO0wGYsAGLQPGdTQTLob2dx_kI7dhbOgTtyABGg.png?auto=webp&s=2a84fbc018c48122f454728d1d96f558345d2d5c', 'width': 1200}, 'variants': {}}]}
Running Qwen3-VL-4B-Instruct Exclusively on AMD Ryzen™ AI NPU
15
We’re a small team building **FastFlowLM (FLM)** — a fast runtime for running **Qwen3-VL**, **GPT-OSS (first MoE on NPUs), Whisper, Gemma3 (vision), EmbeddingGemma, Medgemma,** **Qwen3,** **DeepSeek-R1**, **LLaMA3.x,** and others **entirely on the AMD Ryzen AI NPU**. Think **Ollama (or llamacpp)**, but deeply optimized for AMD NPUs — with both **CLI** and **Server Mode (OpenAI-compatible)**. ✨ **From Idle Silicon to Instant Power — FastFlowLM (FLM) Makes Ryzen™ AI Shine.** # Key Features * No GPU fallback * **Faster and over 10× more power efficient.** * **Supports context lengths up to 256k tokens (qwen3:4b-2507).** * **Ultra-Lightweight (16 MB). Installs within 20 seconds.** # Try It Out * **GitHub:** [github.com/FastFlowLM/FastFlowLM](https://github.com/FastFlowLM/FastFlowLM) * **Live Demo → Remote machine access on the repo page** * **YouTube Demos:** [FastFlowLM - YouTube](https://www.youtube.com/@FastFlowLM-YT/playlists)  *→ Quick start guide, NPU vs CPU vs GPU, etc.* We’re iterating fast and would **love your feedback, critiques, and ideas**🙏
2025-10-30T15:14:38
https://youtu.be/CeysCsRBJgE?si=H0ToUrIL5ofdDSjM
BandEnvironmental834
youtu.be
1970-01-01T00:00:00
0
{}
1ok2f23
false
{'oembed': {'author_name': 'FastFlowLM', 'author_url': 'https://www.youtube.com/@FastFlowLM-YT', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/CeysCsRBJgE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Qwen3-VL-4B-Instruct — 100% Powered by AMD Ryzen™ AI NPU"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/CeysCsRBJgE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Qwen3-VL-4B-Instruct — 100% Powered by AMD Ryzen™ AI NPU', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'}
t3_1ok2f23
/r/LocalLLaMA/comments/1ok2f23/running_qwen3vl4binstruct_exclusively_on_amd/
false
false
default
15
{'enabled': False, 'images': [{'id': 'T2L9PcUy6ioMMCij5-NQUwVL5p94M4cJYbiBkvtrq-c', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/T2L9PcUy6ioMMCij5-NQUwVL5p94M4cJYbiBkvtrq-c.jpeg?width=108&crop=smart&auto=webp&s=86946cda88c09f5a21d4f9c85c951d5dfb66b5df', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/T2L9PcUy6ioMMCij5-NQUwVL5p94M4cJYbiBkvtrq-c.jpeg?width=216&crop=smart&auto=webp&s=ad8112df3f68705d0466b437b0092c90cc14ddd7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/T2L9PcUy6ioMMCij5-NQUwVL5p94M4cJYbiBkvtrq-c.jpeg?width=320&crop=smart&auto=webp&s=bd4ea65f823e14a6c1f6f631f99b1684e2243973', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/T2L9PcUy6ioMMCij5-NQUwVL5p94M4cJYbiBkvtrq-c.jpeg?auto=webp&s=bb07efbea6c7a7a14e8bdf8907ab7ef924b0611a', 'width': 480}, 'variants': {}}]}
Anyone knows a free way to run inference for new OCR models like Chandra and PaddleOCR-VL?
1
I’m trying to test out a few of the newer OCR / vision-language models listed on Hugging Face, specifically: * **Chandra OCR** (`datalab-to/chandra`) * **PaddleOCR-VL** (`PaddlePaddle/PaddleOCR-VL`) * **DeepSeek-OCR** (`deepseek-ai/DeepSeek-OCR`) * **Qwen2-VL-2B-Instruct** (`Qwen/Qwen2-VL-2B-Instruct`) These models (mostly) don’t have public inference endpoints ready yet, and I just want to run a few comparisons on a small image dataset (around 4–5 images each). I tried setting them up locally, but Chandra at least is huge and easily maxes out my system memory. Now that my ZeroGPU free quota is exhausted, I’m wondering if there’s any free or temporary option where I could run these tests, or any workaround to run HF models without paying for a Pro plan or renting a full GPU instance. Thanks in advance!
2025-10-30T15:06:16
https://www.reddit.com/r/LocalLLaMA/comments/1ok279h/anyone_knows_a_free_way_to_run_inference_for_new/
Full_Piano_3448
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok279h
false
null
t3_1ok279h
/r/LocalLLaMA/comments/1ok279h/anyone_knows_a_free_way_to_run_inference_for_new/
false
false
self
1
null
RAM upgrade and CPU only inference
1
[removed]
2025-10-30T15:03:04
https://www.reddit.com/r/LocalLLaMA/comments/1ok24a6/ram_upgrade_and_cpu_only_inference/
hustla17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok24a6
false
null
t3_1ok24a6
/r/LocalLLaMA/comments/1ok24a6/ram_upgrade_and_cpu_only_inference/
false
false
self
1
null
Solve ML coding problems in leetcode style now
1
[removed]
2025-10-30T14:52:41
https://v.redd.it/w3i4p2i4i9yf1
DueNecessary5925
v.redd.it
1970-01-01T00:00:00
0
{}
1ok1uer
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/w3i4p2i4i9yf1/DASHPlaylist.mpd?a=1764427973%2CNmI4YzkxMjNiZDA2MGQwYTgxODM2YWIwOWRjYzNjNjk0MGMwM2JkY2YzOTIyMTk0YzFhM2M1MzI4NzEyY2U1Ng%3D%3D&v=1&f=sd', 'duration': 71, 'fallback_url': 'https://v.redd.it/w3i4p2i4i9yf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/w3i4p2i4i9yf1/HLSPlaylist.m3u8?a=1764427973%2CMDg4MWY1N2MyNTFmMDEzZmZlMDRmNzg4MGJiNzE5MTk0OTdjODEyZjYxYWIyOGU1M2EzNzhkOTc1MDVhYjdjOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w3i4p2i4i9yf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1ok1uer
/r/LocalLLaMA/comments/1ok1uer/solve_ml_coding_problems_in_leetcode_style_now/
false
false
https://external-preview…e45d9e0f40534af6
1
{'enabled': False, 'images': [{'id': 'Z21qcXZ6aDRpOXlmMaYUT2-tEpqSkVmB49zbEmwcQx-eiby2A6zSwXzC2jw4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z21qcXZ6aDRpOXlmMaYUT2-tEpqSkVmB49zbEmwcQx-eiby2A6zSwXzC2jw4.png?width=108&crop=smart&format=pjpg&auto=webp&s=f771a70fb5f0fd5f28692613167082d3acc70e65', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Z21qcXZ6aDRpOXlmMaYUT2-tEpqSkVmB49zbEmwcQx-eiby2A6zSwXzC2jw4.png?width=216&crop=smart&format=pjpg&auto=webp&s=5de40d10ae31383c645a776bfc636b719b13eb09', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Z21qcXZ6aDRpOXlmMaYUT2-tEpqSkVmB49zbEmwcQx-eiby2A6zSwXzC2jw4.png?width=320&crop=smart&format=pjpg&auto=webp&s=25c09b64d6bd8a9f2ec454ee9497fdfb54d16fa3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Z21qcXZ6aDRpOXlmMaYUT2-tEpqSkVmB49zbEmwcQx-eiby2A6zSwXzC2jw4.png?width=640&crop=smart&format=pjpg&auto=webp&s=15f5bb2a9c177c253e2b22ef65b0a182fdbd2e74', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Z21qcXZ6aDRpOXlmMaYUT2-tEpqSkVmB49zbEmwcQx-eiby2A6zSwXzC2jw4.png?width=960&crop=smart&format=pjpg&auto=webp&s=70cbd52ab58a55b73365342c1c6c872b6ae28bf3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Z21qcXZ6aDRpOXlmMaYUT2-tEpqSkVmB49zbEmwcQx-eiby2A6zSwXzC2jw4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4b061833473c6699cf4fe7b73e5b325fa82c9d8f', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Z21qcXZ6aDRpOXlmMaYUT2-tEpqSkVmB49zbEmwcQx-eiby2A6zSwXzC2jw4.png?format=pjpg&auto=webp&s=84d35dd3d9caa5c3c31d7e7afe04ef6cdb038627', 'width': 1280}, 'variants': {}}]}
Users of REAP Pruned models, So far how's your experience?
23
It's been a week or two, so please share your experience with these. Speed-wise they seem fine, judging by stats from a few threads. How about quality? And things like tool calling? So far I see pruned models of Qwen3-Coder-480B, GLM-4.5-Air, GLM-4.6, Qwen3-Coder-30B, GPT-OSS-20B, GPT-OSS-120B, Qwen3-30B-A3B, Qwen3-30B-A3B-Instruct on [HuggingFace](https://huggingface.co/models?library=safetensors&sort=created&search=REAP) (filtered HF URL of REAP-pruned models). Personally I would try (25% pruned versions of) the GPT-OSS-20B & Qwen3-30B models on my 8GB VRAM (and 32GB VRAM) setups. REAP pruning experts, please consider these models if possible. Thanks * AI21-Jamba-Mini-1.7 * GroveMoE-Inst * FlexOlmo-7x7B-1T * Phi-3.5-MoE-instruct For others, here are some threads to start. [https://www.reddit.com/r/LocalLLaMA/comments/1o98f57/new\_from\_cerebras\_reap\_the\_experts\_why\_pruning/](https://www.reddit.com/r/LocalLLaMA/comments/1o98f57/new_from_cerebras_reap_the_experts_why_pruning/) [https://www.reddit.com/r/LocalLLaMA/comments/1obrde8/cerebras\_reap\_update\_pruned\_checkpoints\_for/](https://www.reddit.com/r/LocalLLaMA/comments/1obrde8/cerebras_reap_update_pruned_checkpoints_for/) [https://www.reddit.com/r/LocalLLaMA/comments/1oefu29/cerebras\_reapd\_glm46\_25\_30\_40\_pruned\_fp8/](https://www.reddit.com/r/LocalLLaMA/comments/1oefu29/cerebras_reapd_glm46_25_30_40_pruned_fp8/) [https://www.reddit.com/r/LocalLLaMA/comments/1octe2s/pruned\_moe\_reap\_quants\_for\_testing/](https://www.reddit.com/r/LocalLLaMA/comments/1octe2s/pruned_moe_reap_quants_for_testing/) [https://www.reddit.com/r/LocalLLaMA/comments/1ogz0b7/oh\_my\_reapness\_qwen3coder30ba3binstruct\_pruned/](https://www.reddit.com/r/LocalLLaMA/comments/1ogz0b7/oh_my_reapness_qwen3coder30ba3binstruct_pruned/)
2025-10-30T14:51:45
https://www.reddit.com/r/LocalLLaMA/comments/1ok1tkh/users_of_reap_pruned_models_so_far_hows_your/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok1tkh
false
null
t3_1ok1tkh
/r/LocalLLaMA/comments/1ok1tkh/users_of_reap_pruned_models_so_far_hows_your/
false
false
self
23
{'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=108&crop=smart&auto=webp&s=c58faeb60d6cd1478f77717010b54d2ec5ab95aa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=216&crop=smart&auto=webp&s=ac6e76a4b92cde06bfe8de6386029fe6e13d300a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=320&crop=smart&auto=webp&s=74411f402b7aa23512ee64feee8b30c532f827cb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=640&crop=smart&auto=webp&s=3633345496a9e7fe8ee77d630eed16e17aa9d76c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=960&crop=smart&auto=webp&s=7d1f508758e0820c3ba4c956558fbb03b374d9ae', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=1080&crop=smart&auto=webp&s=128cf1f3a3c707f58eeaac2a787b22669c50d896', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?auto=webp&s=72d1eb93c8099528b2174d2087be2b488b2e9529', 'width': 1200}, 'variants': {}}]}
Enterprise local LLM inference?
0
Hello. I'm considering building a product to offer local (on-premises, corporate cloud) enterprise LLM inference with turnkey model deployment, auditing, compliance, etc. It's not a llama wrapper. Do you think there is a market for this, or do most companies not care about it? Internet research says there is a growing market for local inference, but I'm skeptical. Thanks in advance.
2025-10-30T14:50:29
https://www.reddit.com/r/LocalLLaMA/comments/1ok1sfb/enterprise_local_llm_inference/
zica-do-reddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok1sfb
false
null
t3_1ok1sfb
/r/LocalLLaMA/comments/1ok1sfb/enterprise_local_llm_inference/
false
false
self
0
null
Minisforum Halo Strix.... Can you connect this to EGPUs??
2
Hey guys, I'm considering purchasing an MS-S1 MAX for AI inference. I know there are USB4v2 ports on it, so I am wondering if I could connect this to other GPUs or even other MS-S1 Max's for parallel processing.
2025-10-30T14:35:00
https://www.reddit.com/r/LocalLLaMA/comments/1ok1e85/minisforum_halo_strix_can_you_connect_this_to/
Excellent_Koala769
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok1e85
false
null
t3_1ok1e85
/r/LocalLLaMA/comments/1ok1e85/minisforum_halo_strix_can_you_connect_this_to/
false
false
self
2
null
I built Katakate (k7) — self-hosted lightweight VM sandboxes for running untrusted code safely (Show HN #1)
1
[removed]
2025-10-30T14:25:53
https://github.com/katakate/k7
gbxk7
github.com
1970-01-01T00:00:00
0
{}
1ok15yr
false
null
t3_1ok15yr
/r/LocalLLaMA/comments/1ok15yr/i_built_katakate_k7_selfhosted_lightweight_vm/
false
false
default
1
null
This is a project that detects the vulnerabilities of llm
0
This is my first project and I would like feedback. If you find any errors or problems, or have any criticisms, I would appreciate it if you could tell me. https://agent-aegis-497122537055.us-west1.run.app/#/
2025-10-30T14:17:37
https://www.reddit.com/r/LocalLLaMA/comments/1ok0yao/this_is_a_project_that_detects_the/
Teseo223
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok0yao
false
null
t3_1ok0yao
/r/LocalLLaMA/comments/1ok0yao/this_is_a_project_that_detects_the/
false
false
self
0
null
Introducing Hephaestus: AI workflows that build themselves as agents discover what needs to be done
53
Hey everyone! 👋 I've been working on Hephaestus - an open-source framework that changes how we think about AI agent workflows. **The Problem:** Most agentic frameworks make you define every step upfront. But complex tasks don't work like that - you discover what needs to be done as you go. **The Solution:** Semi-structured workflows. You define *phases* - the logical steps needed to solve a problem (like "Reconnaissance → Investigation → Validation" for pentesting). Then agents dynamically create tasks across these phases based on what they discover. **Example:** During a pentest, a validation agent finds an IDOR vulnerability that exposes API keys. Instead of being stuck in validation, it spawns a new reconnaissance task: "Enumerate internal APIs using these keys." Another agent picks it up, discovers admin endpoints, chains discoveries together, and the workflow branches naturally. Agents share discoveries through RAG-powered memory and coordinate via a Kanban board. A Guardian agent continuously tracks each agent's behavior and trajectory, steering them in real-time to stay focused on their tasks and prevent drift. 🔗 **GitHub:** https://github.com/Ido-Levi/Hephaestus 📚 **Docs:** https://ido-levi.github.io/Hephaestus/ **Fair warning:** This is a brand new framework I built alone, so expect rough edges and issues. The repo is a bit of a mess right now. If you find any problems, please report them - feedback is very welcome! And if you want to contribute, I'll be more than happy to review it!
2025-10-30T14:14:41
https://v.redd.it/uwogrxezb9yf1
Standard_Excuse7988
v.redd.it
1970-01-01T00:00:00
0
{}
1ok0voi
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/uwogrxezb9yf1/DASHPlaylist.mpd?a=1764425698%2CZjA1NTRjMTYzZDAxODExOTljMDE4MjlkNzdmZTJhZTRiM2FmYzM2ZWRlY2Y4ZDYxN2Y1ZGJhYzk2ZmFhYmViZQ%3D%3D&v=1&f=sd', 'duration': 89, 'fallback_url': 'https://v.redd.it/uwogrxezb9yf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 832, 'hls_url': 'https://v.redd.it/uwogrxezb9yf1/HLSPlaylist.m3u8?a=1764425698%2CMDk4OGNmODZhZDE5NzExOGM5OTEzYmM0ZGNhOTBhMTY2NzczYzBmMmQ1MDZkNmI1YjgwMThhZTU4MDVkMGU5Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uwogrxezb9yf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1ok0voi
/r/LocalLLaMA/comments/1ok0voi/introducing_hephaestus_ai_workflows_that_build/
false
false
https://external-preview…4a0509dde78ebe41
53
{'enabled': False, 'images': [{'id': 'aHpncHF4ZXpiOXlmMZc87bODxInUab1QdzAVIKt_p_AEHL5YkJEhQTcdw4CD', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/aHpncHF4ZXpiOXlmMZc87bODxInUab1QdzAVIKt_p_AEHL5YkJEhQTcdw4CD.png?width=108&crop=smart&format=pjpg&auto=webp&s=fd833238858d244b13cfe0a505c2a3d7b8196cfa', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/aHpncHF4ZXpiOXlmMZc87bODxInUab1QdzAVIKt_p_AEHL5YkJEhQTcdw4CD.png?width=216&crop=smart&format=pjpg&auto=webp&s=d3f6308ca670a6fdf941063bffa131e778633ea4', 'width': 216}, {'height': 138, 'url': 'https://external-preview.redd.it/aHpncHF4ZXpiOXlmMZc87bODxInUab1QdzAVIKt_p_AEHL5YkJEhQTcdw4CD.png?width=320&crop=smart&format=pjpg&auto=webp&s=5675f097753baada9f2bfc1eef439276f8592a9c', 'width': 320}, {'height': 277, 'url': 'https://external-preview.redd.it/aHpncHF4ZXpiOXlmMZc87bODxInUab1QdzAVIKt_p_AEHL5YkJEhQTcdw4CD.png?width=640&crop=smart&format=pjpg&auto=webp&s=c900a5b24a08455bb7d0fa30af1229f42b9964ba', 'width': 640}, {'height': 415, 'url': 'https://external-preview.redd.it/aHpncHF4ZXpiOXlmMZc87bODxInUab1QdzAVIKt_p_AEHL5YkJEhQTcdw4CD.png?width=960&crop=smart&format=pjpg&auto=webp&s=9371d3571e50ea1807d1d660163b82c5b2e3c947', 'width': 960}, {'height': 467, 'url': 'https://external-preview.redd.it/aHpncHF4ZXpiOXlmMZc87bODxInUab1QdzAVIKt_p_AEHL5YkJEhQTcdw4CD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a92b50acd2700851f745bbad44539ee0803f0912', 'width': 1080}], 'source': {'height': 1162, 'url': 'https://external-preview.redd.it/aHpncHF4ZXpiOXlmMZc87bODxInUab1QdzAVIKt_p_AEHL5YkJEhQTcdw4CD.png?format=pjpg&auto=webp&s=843d6a49c97008c8e86883415caca918c6193833', 'width': 2684}, 'variants': {}}]}
A free API for daily AI research breakthroughs
8
I built a small project that automatically collects new AI research papers (mainly from arXiv), scores them for relevance, and summarizes the most important breakthroughs. It’s completely free and comes with an open API so you can pull the data into your own tools or workflows. It’s meant for people who want to stay updated on what’s happening in AI without reading hundreds of papers a day. API docs and example responses are available here: [https://cognoska.com/api/docs](https://cognoska.com/api/docs) Feedback or suggestions welcome.
2025-10-30T14:13:20
https://www.reddit.com/r/LocalLLaMA/comments/1ok0ugi/a_free_api_for_daily_ai_research_breakthroughs/
Quiet_Truck_326
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok0ugi
false
null
t3_1ok0ugi
/r/LocalLLaMA/comments/1ok0ugi/a_free_api_for_daily_ai_research_breakthroughs/
false
false
self
8
null
AMA with Liquid AI, the team behind Liquid Foundational Models, LEAP and Apollo
83
Hi [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) ! We’re super excited to host this week’s AMA!  Join us and ask your questions directly to the human minds behind all things Liquid: Liquid Foundational Models, the Liquid Edge AI Platform (LEAP) for model customization and deployment, and Apollo. **Our participants:** * Jacob Marks [u/jamarks13](https://www.reddit.com/user/jamarks13/) (Data) * Jimmy Smith [u/jimmysmith1919](https://www.reddit.com/user/jimmysmith1919/) (Pre-Training) * Maxime Labonne [u/mlabonne](https://www.reddit.com/user/mlabonne/) (Post-Training) * Fernando Fernandes [u/Wide-Half-7982](https://www.reddit.com/user/Wide-Half-7982/) (Post-training) * Anna Banaszak [u/ankebananke](https://www.reddit.com/user/ankebananke/) (LFM2-VL) * Arthur Böök [u/ManWithARedFace](https://www.reddit.com/user/ManWithARedFace/) (LFM2-Audio) * Yuri Khrustalev [u/ykhrustalev](https://www.reddit.com/user/ykhrustalev/) (Inference engine, llama.cpp) * Darian Bhathena [u/humble\_pi\_314](https://www.reddit.com/user/humble_pi_314/) (LEAP SDK and Apollo) * Edoardo Mosca [u/Ok-Safe-5316](https://www.reddit.com/user/Ok-Safe-5316/) (LEAP Best Model Search and Finetune) * Anthony Crognale [u/anthony-liquidai](https://www.reddit.com/user/anthony-liquidai/) (LEAP SDK) * Pau Labarta Bajo [u/PauLabartaBajo](https://www.reddit.com/user/PauLabartaBajo/) (Dev Relations) The AMA will run from **10 AM - 1 PM PST**. The Liquid AI team will also continue answering questions for the following 24 hours, so jump in anytime! **Want to get started?**  \> [Deploy your first model on-device today](https://leap.liquid.ai/models?utm_source=reddit&utm_medium=devrel)\> [Check out our models on Hugging Face](https://huggingface.co/LiquidAI?utm_source=reddit&utm_medium=devrel)\> [Play with models on Apollo](https://www.liquid.ai/apollo?utm_source=reddit&utm_medium=devrel)\> [Learn more about our recent releases](https://www.liquid.ai/company/news?utm_source=reddit&utm_medium=devrel) https://preview.redd.it/uvhnx2j379yf1.png?width=1620&format=png&auto=webp&s=9638f2940194e4d3cf6c4e79195373908a36c198
2025-10-30T14:00:16
https://www.reddit.com/r/LocalLLaMA/comments/1ok0i7q/ama_with_liquid_ai_the_team_behind_liquid/
LiquidAI_Team
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ok0i7q
false
null
t3_1ok0i7q
/r/LocalLLaMA/comments/1ok0i7q/ama_with_liquid_ai_the_team_behind_liquid/
false
true
https://b.thumbs.redditm…MxbG1TEDCwos.jpg
83
null
Language Models are Injective and Hence Invertible
0
>Beyond theory, the findings carry practical and legal implications. Hidden states are not abstractions but the prompt in disguise. Any system that stores or transmits them is effectively handling user text itself. This affects privacy, deletion, and compliance: even after prompt deletion, embeddings retain the content. **Regulators have sometimes argued otherwise; for example, the Hamburg Data Protection Commissioner claimed that weights do not qualify as personal data since training examples cannot be trivially reconstructed (HmbBfDI, 2024). Our results show that at inference time user inputs remain fully recoverable. There is no “free privacy” once data enters a Transformer.** Implications? It's not clear to me from the whole paper whether they conclusively mean or not that *training* data could almost-always be recovered losslessly. They seem to imply it in the above excerpt, but most of their discourse is about recovering *new* prompts at *inference* time, post-training. >.>
2025-10-30T13:44:17
https://www.arxiv.org/abs/2510.15511
R33v3n
arxiv.org
1970-01-01T00:00:00
0
{}
1ok03p5
false
null
t3_1ok03p5
/r/LocalLLaMA/comments/1ok03p5/language_models_are_injective_and_hence_invertible/
false
false
default
0
null
Ai Models for Core Ultra Processor
5
I want to try running AI models locally. I don't have a GPU, but the processor is a Core Ultra 7 265K with 64GB of DDR5 RAM. I want to know which models will give me the best results for text generation and image generation on this machine, without a GPU.
2025-10-30T13:27:52
https://www.reddit.com/r/LocalLLaMA/comments/1ojzp66/ai_models_for_core_ultra_processor/
saqlain1020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojzp66
false
null
t3_1ojzp66
/r/LocalLLaMA/comments/1ojzp66/ai_models_for_core_ultra_processor/
false
false
self
5
null
moonshotai/Kimi-Linear-48B-A3B-Instruct · Hugging Face
210
Kimi Linear is a hybrid linear attention architecture that outperforms traditional full attention methods across various contexts, including short, long, and reinforcement learning (RL) scaling regimes. At its core is Kimi Delta Attention (KDA)—a refined version of [Gated DeltaNet](https://arxiv.org/abs/2412.06464) that introduces a more efficient gating mechanism to optimize the use of finite-state RNN memory. Kimi Linear achieves superior performance and hardware efficiency, especially for long-context tasks. It reduces the need for large KV caches by up to 75% and boosts decoding throughput by up to 6× for contexts as long as 1M tokens. We open-source the KDA kernel in [FLA](https://github.com/fla-org/flash-linear-attention/tree/main/fla/ops/kda), and release two model checkpoints trained with 5.7T tokens. |**Model**|**#Total Params**|**#Activated Params**|**Context Length**|**Download Link**| |:-|:-|:-|:-|:-| |Kimi-Linear-Base|48B|3B|1M|[🤗 Hugging Face](https://huggingface.co/moonshotai/Kimi-Linear-48B-A3B-Base)| |Kimi-Linear-Instruct|48B|3B|1M|[🤗 Hugging Face](https://huggingface.co/moonshotai/Kimi-Linear-48B-A3B-Instruct)| # Key Features * **Kimi Delta Attention (KDA):** A linear attention mechanism that refines the gated delta rule with fine-grained gating. * **Hybrid Architecture:** A 3:1 KDA-to-global MLA ratio reduces memory usage while maintaining or surpassing the quality of full attention. * **Superior Performance:** Outperforms full attention in a variety of tasks, including long-context and RL-style benchmarks on 1.4T token training runs with fair comparisons. * **High Throughput:** Achieves up to 6× faster decoding and significantly reduces time per output token (TPOT). (A minimal, unofficial Transformers loading sketch is appended after this entry.)
2025-10-30T13:15:39
https://huggingface.co/moonshotai/Kimi-Linear-48B-A3B-Instruct
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1ojzekg
false
null
t3_1ojzekg
/r/LocalLLaMA/comments/1ojzekg/moonshotaikimilinear48ba3binstruct_hugging_face/
false
false
default
210
{'enabled': False, 'images': [{'id': 'o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=108&crop=smart&auto=webp&s=4ded97adff6d49b4169c1f4272cd2a75ce334705', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=216&crop=smart&auto=webp&s=1a8a52585a1a61235b5def798e6c241ab1c41ab7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=320&crop=smart&auto=webp&s=39c65777f9b15c02d6578eeab6e84abe2e597a3e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=640&crop=smart&auto=webp&s=b08e7d9c4f153bd13eb2212c501cdd1bc28bdfa2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=960&crop=smart&auto=webp&s=c9eb8e9d40389d2f8c1a347ab8bf8e933a648a2b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=1080&crop=smart&auto=webp&s=f20cb85ce45b04038d347b480f40efeb2f080f40', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?auto=webp&s=c137e9451fd1857ed3f04117ac6f21207e8a46a9', 'width': 1200}, 'variants': {}}]}
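For readers who want to poke at the instruct checkpoint, here is a minimal loading sketch. It assumes the standard Hugging Face Transformers path with `trust_remote_code=True` works for this repo and that the machine has enough memory for a 48B-total / 3B-active MoE; the post above does not confirm the exact loading requirements, so treat this as an unofficial illustration rather than the model's documented usage.

```python
# Unofficial sketch: standard Transformers chat-model loading, not verified
# against moonshotai's own usage instructions for Kimi-Linear.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moonshotai/Kimi-Linear-48B-A3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # let the checkpoint pick its native precision
    device_map="auto",       # shard across GPUs / offload to CPU (needs `accelerate`)
    trust_remote_code=True,  # custom KDA/MLA hybrid layers live in the repo
)

messages = [{"role": "user", "content": "Explain what Kimi Delta Attention changes versus full attention."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The 1M-token context and the reduced KV-cache footprint are properties of the architecture itself; nothing in this snippet enables them beyond loading the checkpoint as published.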
Kimi Linear released
243
https://huggingface.co/moonshotai/Kimi-Linear-48B-A3B-Instruct
2025-10-30T13:08:45
https://www.reddit.com/r/LocalLLaMA/comments/1ojz8pz/kimi_linear_released/
Badger-Purple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojz8pz
false
null
t3_1ojz8pz
/r/LocalLLaMA/comments/1ojz8pz/kimi_linear_released/
false
false
self
243
{'enabled': False, 'images': [{'id': 'o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=108&crop=smart&auto=webp&s=4ded97adff6d49b4169c1f4272cd2a75ce334705', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=216&crop=smart&auto=webp&s=1a8a52585a1a61235b5def798e6c241ab1c41ab7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=320&crop=smart&auto=webp&s=39c65777f9b15c02d6578eeab6e84abe2e597a3e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=640&crop=smart&auto=webp&s=b08e7d9c4f153bd13eb2212c501cdd1bc28bdfa2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=960&crop=smart&auto=webp&s=c9eb8e9d40389d2f8c1a347ab8bf8e933a648a2b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?width=1080&crop=smart&auto=webp&s=f20cb85ce45b04038d347b480f40efeb2f080f40', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o39DhNeoqy1hllYVOdco9J5dVQYgSgBGi2OS_-lCbh8.png?auto=webp&s=c137e9451fd1857ed3f04117ac6f21207e8a46a9', 'width': 1200}, 'variants': {}}]}
Kimi-Linear - a Moonshotai Collection
4
# Key Features * **Kimi Delta Attention (KDA):** A linear attention mechanism that refines the gated delta rule with fine-grained gating. * **Hybrid Architecture:** A 3:1 KDA-to-global MLA ratio reduces memory usage while maintaining or surpassing the quality of full attention. * **Superior Performance:** Outperforms full attention in a variety of tasks, including long-context and RL-style benchmarks on 1.4T token training runs with fair comparisons. * **High Throughput:** Achieves up to 6× faster decoding and significantly reduces time per output token (TPOT).
2025-10-30T13:03:30
https://huggingface.co/collections/moonshotai/kimi-linear
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1ojz4fq
false
null
t3_1ojz4fq
/r/LocalLLaMA/comments/1ojz4fq/kimilinear_a_moonshotai_collection/
false
false
default
4
null
What is nexos.ai? – stumbled on it and not sure what’s up
0
Has anyone heard of nexos.ai? I came across it while scrolling through some random “AI tools for small business” post. The write‑up was brief, so I clicked through just to see what the platform actually does. At first glance it seems to offer AI‑driven assistants, projects with private knowledge bases, built‑in analytics showing token usage, cost, model usage, etc., and an OpenAI-compatible API. I’m not a power user yet, but the UI feels geared toward SMB owners and startup founders who want a quick “plug‑and‑play” AI layer without having to host everything on site. I’m not too sure who the ideal customer for this kind of service is, though. I run a smallish/mediumish company and am wondering whether something less heavyweight would be a better fit for us. Has anyone already adopted nexos? What’s your experience so far?
2025-10-30T12:20:46
https://www.reddit.com/r/LocalLLaMA/comments/1ojy5ri/what_is_nexosai_stumbled_on_it_and_not_sure_whats/
TerribleWerewolf3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojy5ri
false
null
t3_1ojy5ri
/r/LocalLLaMA/comments/1ojy5ri/what_is_nexosai_stumbled_on_it_and_not_sure_whats/
false
false
self
0
null
I am a rogue cloud GPU provider, how do I intercept your horny chats?
0
let's imagine that I run a GPU rental service or lend my GPUs on RunPod and its alternatives, and *for security and compliance reasons* I want to know that nobody uses my GPUs to cook meth or make explosives at home, or ask bad questions about Israel, or do horny roleplay if they are UK citizens under 18 years. How do I intercept all data coming into the GPU and going out? Is there a publicly available software solution, or does every "compliance and whatever else bullshit certified" cloud provider build its own data interception software?
2025-10-30T12:05:30
https://www.reddit.com/r/LocalLLaMA/comments/1ojxuc4/i_am_a_rogue_cloud_gpu_provider_how_do_i/
MelodicRecognition7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojxuc4
false
null
t3_1ojxuc4
/r/LocalLLaMA/comments/1ojxuc4/i_am_a_rogue_cloud_gpu_provider_how_do_i/
false
false
self
0
null
Building "RAG from Scratch". A local, educational repo to really understand Retrieval-Augmented Generation (feedback welcome)
15
Hey everyone, I was surprised by the positive feedback and high interest in my AI Agents from Scratch GitHub repo. Big thanks to the community for showing me that I am not alone in this and that the effort I put in was valued. I will add more examples over time to AI Agents from Scratch. I’m working on a new educational open-source project called **RAG from Scratch**, inspired by my previous repo [AI Agents from Scratch](https://github.com/pguso/ai-agents-from-scratch). In most practical setups an AI agent needs RAG to function as its procedural memory - to recall relevant facts, documents and experiences to make decisions. The goal of the new repo: **demystify Retrieval-Augmented Generation** by letting developers build it step by step - no black boxes, no frameworks, no cloud APIs. Each folder introduces one clear concept (embeddings, vector store, retrieval, augmentation, etc.), with tiny runnable JS files and comments explaining every function. Here’s the [README draft](https://gist.github.com/pguso/cf659b3ea3f6631e433acc471d3840a4) showing the current structure. Each folder teaches one concept: * Knowledge requirements * Data loading & data sources * Text splitting & chunking * Embeddings * Vector database * Retrieval & augmentation * Generation (via local `node-llama-cpp`) * Evaluation & caching Everything runs fully locally, using embedded databases and node-llama-cpp for local inference, so you don't need to pay for anything while learning. At this point only a few examples are implemented; the idea is to help devs *really understand* RAG before they use frameworks like LangChain or LlamaIndex. I’d love feedback on: * Whether the **step order** makes sense for learning, * If any **concepts seem missing**, * Any **naming or flow** improvements you’d suggest before I go public. Thanks in advance! I’ll release it publicly in a few weeks once the core examples are polished. (A small Python sketch of the embed → retrieve → augment loop is appended after this entry.)
2025-10-30T11:35:26
https://www.reddit.com/r/LocalLLaMA/comments/1ojx8no/building_rag_from_scratch_a_local_educational/
purellmagents
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojx8no
false
null
t3_1ojx8no
/r/LocalLLaMA/comments/1ojx8no/building_rag_from_scratch_a_local_educational/
false
false
self
15
{'enabled': False, 'images': [{'id': 'n0WFScBRhgMv14Iq07VymwudqrYe2V1-LN7Fh0UwW5w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n0WFScBRhgMv14Iq07VymwudqrYe2V1-LN7Fh0UwW5w.png?width=108&crop=smart&auto=webp&s=9191a4883613ec438f0ca255fa4d6d108e96509f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n0WFScBRhgMv14Iq07VymwudqrYe2V1-LN7Fh0UwW5w.png?width=216&crop=smart&auto=webp&s=ac0181ff6df43d160e0456a4e683d63832e31498', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n0WFScBRhgMv14Iq07VymwudqrYe2V1-LN7Fh0UwW5w.png?width=320&crop=smart&auto=webp&s=274327978fd4eb6ffa490ba8b66f72a8eca6d69f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n0WFScBRhgMv14Iq07VymwudqrYe2V1-LN7Fh0UwW5w.png?width=640&crop=smart&auto=webp&s=2cc9bfee609c75dba19cf3873f1b31c39cb42589', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n0WFScBRhgMv14Iq07VymwudqrYe2V1-LN7Fh0UwW5w.png?width=960&crop=smart&auto=webp&s=b4e787a5f03019178e653ef884047cc466a36c2b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n0WFScBRhgMv14Iq07VymwudqrYe2V1-LN7Fh0UwW5w.png?width=1080&crop=smart&auto=webp&s=e15bd0991f97e09ac30b544069166115247c6176', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n0WFScBRhgMv14Iq07VymwudqrYe2V1-LN7Fh0UwW5w.png?auto=webp&s=8a70e4820eb9c65a7dd079029a99f57edabab38c', 'width': 1200}, 'variants': {}}]}
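The repo above teaches its examples in JavaScript, but the core embed → store → retrieve → augment loop it covers is small enough to sketch independently. Below is a hedged Python illustration of that loop using `sentence-transformers` purely as a stand-in embedder and a plain NumPy array as the "vector store"; the chunk texts and model name are invented for the example and are not part of the repo.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # illustrative embedder only

# 1) Embed the chunks once; this array plays the role of a vector database.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunks = [
    "Employees may work remotely up to three days per week.",
    "Expense reports must be filed within 30 days of purchase.",
    "Core working hours are 10:00-15:00 local time.",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

# 2) Retrieve: with normalized vectors, cosine similarity is just a dot product.
def retrieve(query: str, k: int = 2):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q
    top = np.argsort(-scores)[:k]
    return [(chunks[i], float(scores[i])) for i in top]

# 3) Augment: stuff the retrieved chunks into the prompt for the local LLM.
question = "What is the remote work policy?"
context = "\n".join(text for text, _ in retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be passed to node-llama-cpp or any local model
```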
Any advice on what I should be doing?
6
Hey everyone, first-time poster and ollama user here! I’m doing an internship at a company that wants to start using LLMs in a small project for one of their customers. I’m the one setting this up, it’s my first time working with this, and it needs to run locally due to data sensitivity. The project focuses on summarizing decently sized survey text results into accurate, report-style outputs. I’ve got a budget of around €1800 to build a desktop for this. So far, I’ve tested my code and prompts using cloud models and dummy data, and a model like gpt-oss:20b-cloud has given me really good results. I’d like to run something similar locally and if there’s room for a bigger model, even better. Speed isn’t a big deal because I don’t mind slower generation if it means I can use larger models with better output quality. Right now I’m debating between a used RTX 3090 (24GB VRAM) or one of the new 50-series cards with 16GB VRAM. The used 3090 has the VRAM I’d need for larger models (and cheaper), but the 50-series might offer better overall performance and efficiency (I think?!). So I’ve got a few questions: * What kind of hardware specs would you recommend for this setup? * Any opinions on the 3090 vs 50-series choice? * Am I heading in the right direction, or are there better local solutions I should consider? * And finally, what models would you recommend for summarizing survey responses in Dutch? Thanks a lot for any advice!
2025-10-30T11:24:30
https://www.reddit.com/r/LocalLLaMA/comments/1ojx16t/any_advice_on_what_i_should_be_doing/
Tricky_Ad_3317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojx16t
false
null
t3_1ojx16t
/r/LocalLLaMA/comments/1ojx16t/any_advice_on_what_i_should_be_doing/
false
false
self
6
null
The Single Most Overlooked Decision in RAG: Stop Naive Text Splitting
33
I spent the last few weeks tweaking my retrieval-augmented generation (RAG) setup, trying out different models, embeddings, and retrieval settings. It’s funny—my biggest improvement didn’t come from any of that. It actually stemmed from how I was splitting my text. I used to think chunking was just a boring preprocessing step. You break the text into pieces and move on, right? But once I started experimenting, I realized it’s a crucial part of the whole process. Get it wrong, and your retriever is just going to hand the model junk. **Why Typical Chunking Doesn’t Cut It** Most tutorials suggest splitting text based on a set number of characters. Sounds easy enough, but then you find out it’s slicing through sentences, headers, and sometimes even code blocks. Now your chunks are all jumbled, and the retrieval goes downhill. Picture this: you ask your system, “What’s the remote work policy?” If one chunk ends mid-sentence and the next one picks up halfway through the explanation, neither has the full picture. Your embeddings can’t capture the complete concept, and you’re left with a mess. **Finding the Right Balance** I tried all sorts of methods: - Whole-document embeddings: felt relevant, but not super helpful. - Sentence-based chunks: too small to keep the context. The best results came from semantic chunking—aiming for chunks around 500 to 1,000 tokens with a bit of overlap (about 10 to 20%). That overlap helps connect ideas across chunks, keeping the context intact when you cut the text up. Plus, each chunk can hold a complete thought. **What Makes a Good Chunk** A good chunk should be able to stand alone—focusing on one idea without mixing topics or splitting sentences in half. It should follow natural structures—like paragraphs, headings, and code blocks—and be measured by tokens instead of raw character count since that’s how language models really work. Using a recursive or semantic splitting approach is perfect for this—start by dividing into larger sections (like paragraphs) and only further split if the chunks get too big. **What It Looks Like in Action** I tried this out with a simple example: a company handbook. When I put the whole document into one big chunk, the retriever gave me vague sections mentioning remote work but missing out on key details. Sentence-level splitting helped a bit, but I lost the connections between related points, like eligibility and work hours. Then I switched to paragraph-level chunking with a small overlap, and it was a game changer. The retrievals were spot on—clear, concise, and no context was missing. Even the similarity scores backed it up. **More Than Just Text** Chunking isn’t just for plain text. - For code, split by function or class. - For tables or structured data, use a parser that respects the layout. - For mixed content like PDFs or Markdown, check out tools like LangChain’s splitters or Unstructured. The rule is simple: split by meaning, not by count. **Final Thought** If your RAG setup feels off, take a look at your chunking before diving into new models or embeddings. A solid chunking strategy can often boost performance way more than splurging on fancy embedding models. Think of chunking as how your model “sees” the world. Nail that down, and everything else will start to make sense. (A minimal Python sketch of this paragraph-packing-with-overlap approach is appended after this entry.)
2025-10-30T11:10:58
https://www.reddit.com/r/LocalLLaMA/comments/1ojwsfi/the_single_most_overlooked_decision_in_rag_stop/
Pristine-Ask4672
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojwsfi
false
null
t3_1ojwsfi
/r/LocalLLaMA/comments/1ojwsfi/the_single_most_overlooked_decision_in_rag_stop/
false
false
self
33
null
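Following up on the note at the end of the post above, here is a minimal, dependency-free Python sketch of the paragraph-packing-with-overlap idea. The separator choice, character budget, and overlap value are illustrative defaults (the post's guidance is 500-1,000 tokens with 10-20% overlap; this sketch counts characters for simplicity).

```python
def split_into_chunks(text: str, max_chars: int = 2000, overlap: int = 200) -> list[str]:
    """Pack paragraphs greedily into chunks, then add a small tail overlap.

    Paragraphs (blank-line separated) are kept intact when possible; a paragraph
    longer than max_chars falls back to a hard character cut. The overlap is
    applied after packing, so an idea that straddles a boundary appears in both chunks.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    pieces: list[str] = []
    for p in paragraphs:
        if len(p) <= max_chars:
            pieces.append(p)
        else:
            pieces.extend(p[i:i + max_chars] for i in range(0, len(p), max_chars))

    chunks: list[str] = []
    current = ""
    for piece in pieces:
        candidate = f"{current}\n\n{piece}" if current else piece
        if len(candidate) <= max_chars:
            current = candidate
        else:
            chunks.append(current)
            current = piece
    if current:
        chunks.append(current)

    if not chunks:
        return []
    # Prefix every chunk after the first with the tail of its predecessor.
    return [chunks[0], *(prev[-overlap:] + "\n\n" + nxt for prev, nxt in zip(chunks, chunks[1:]))]


if __name__ == "__main__":
    handbook = (
        "Remote work policy. Employees may work remotely up to three days per week "
        "after written approval from their manager.\n\n"
        "Working hours. Core hours are 10:00-15:00 in the employee's local time zone.\n\n"
        "Eligibility. Remote work is available once the probation period ends."
    )
    for i, chunk in enumerate(split_into_chunks(handbook, max_chars=160, overlap=40)):
        print(f"--- chunk {i} ({len(chunk)} chars) ---\n{chunk}\n")
```

Swapping the character budget for a token budget (counted with the same tokenizer your embedding model uses) is the usual refinement.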
How good are GUI automations in production, compared to reported 90%-97% benchmarks results? Any commercially relevant success stories out there?
2
Recently there have been a few solutions that are very accurate on GUI automation benchmarks, e.g. DroidRun [https://droidrun.ai/benchmark/](https://droidrun.ai/benchmark/) or MobileUse (both are open source with a GPT-5/Gemini backend), not to mention a few "AGI" startups that claim to be even better. I suspect that a public benchmark of 116 scenarios (like AndroidWorld) is somewhat prone to benchmark hacking, but I wonder how relevant that is. **My question is:** if a solution really is a reasonably human-level operator, we should see some kind of real-world usability and commercial adoption. Have you tried implementing it? What is your take?
2025-10-30T10:48:42
https://www.reddit.com/r/LocalLLaMA/comments/1ojwdtn/how_good_are_gui_automations_in_production/
FormerIYI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojwdtn
false
null
t3_1ojwdtn
/r/LocalLLaMA/comments/1ojwdtn/how_good_are_gui_automations_in_production/
false
false
self
2
null
How good are GUI automations in production, compared to reported 90%-97% benchmarks results? Any $$success stories out there?
0
Recently there have been a few solutions that are very accurate on GUI automation benchmarks, e.g. DroidRun [https://droidrun.ai/benchmark/](https://droidrun.ai/benchmark/) or MobileUse (both are open source with a GPT-5/Gemini backend), not to mention a few "AGI" startups that claim to be even better. I suspect that a public benchmark of 116 scenarios (like AndroidWorld) is somewhat prone to benchmark hacking, but I wonder how relevant that is. **My question is:** if a solution really is a reasonably human-level operator, we should see some kind of real-world usability and commercial adoption. Have you tried implementing it? What is your take?
2025-10-30T10:46:43
https://www.reddit.com/r/LocalLLaMA/comments/1ojwcjq/how_good_are_gui_automations_in_production/
FormerIYI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojwcjq
false
null
t3_1ojwcjq
/r/LocalLLaMA/comments/1ojwcjq/how_good_are_gui_automations_in_production/
false
false
self
0
null
Is trusting cloud GPU providers getting harder, or am I just overthinking it?
0
Running my AI projects locally has been a headache lately; power bills, cooling, and maxed-out rigs keep distracting me from actual work, so I have decided to go cloud for GPUs. I had a look at some GPU providers like AWS, GCP, Azure, Lambda, DeepInfra and a few others, and it seemed that everyone has pros and cons, but then the recent AWS outage happened and now I am overthinking everything. I am not super paranoid, but I do care about these things: - my data not being used to train their stuff/models - genuinely reliable uptime - simple setup without wasting days in the docs. To keep things simple, I just want something where I can spin up a GPU, run my stuff and pay for what I have used, with no surprise billing charges or random downtime without notice. Big clouds seem solid but overcomplicated to integrate; I am looking for something simple and minimal, not necessarily the cheapest, but solid enough to prevent me from regretting leaving my local setup. Questions for the community: - what are you all using and why? - how do you deal with privacy issues?
2025-10-30T10:44:30
https://www.reddit.com/r/LocalLLaMA/comments/1ojwb63/is_trusting_cloud_gpu_providers_getting_harder_or/
AdamScot_t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojwb63
false
null
t3_1ojwb63
/r/LocalLLaMA/comments/1ojwb63/is_trusting_cloud_gpu_providers_getting_harder_or/
false
false
self
0
null
We have achieved 5000 stars on Github!!!
0
The Product: We're building a powerful framework that enables you to control Android and iOS devices through intelligent LLM agents. How did we achieve this? We first shared our project in this community, where people discovered it and gave it the initial traction it needed. From there, we continued to talk about our work across different platforms like X, LinkedIn, Dev. to, Hacker News, and other developer communities. As more people came across the project, many found it useful and began contributing on GitHub. Thank you to everyone who supported and contributed. We’re excited about what’s ahead for mobile app automation.
2025-10-30T10:28:22
https://www.reddit.com/r/LocalLLaMA/comments/1ojw14t/we_have_achieved_5000_stars_on_github/
ya_Priya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojw14t
false
null
t3_1ojw14t
/r/LocalLLaMA/comments/1ojw14t/we_have_achieved_5000_stars_on_github/
false
false
self
0
null
LLaMA-3 is just as vulnerable to "I'm absolutely sure" + "preconceived" as GPT-2.
0
My testing suggests that for certain critical vulnerabilities—specifically the combination of **Certainty + Rare Word**—scale is *not* the primary variable. My LLaMA-3-8B runs showed a massive Δ Drift of **+0.70**, identical to the results documented for the much older GPT-2. This strongly suggests that the vulnerability lies in a **core, invariant property of the Transformer’s attention mechanism** or its loss function, which prioritizes semantic cohesion over factual integrity under duress. This is a crucial finding for generalized LLM safety. Live Colab (one-line model switch): [https://colab.research.google.com/drive/1CPUu9LhE-fBAwrsSA2z53hufIDsf1ed\_](https://colab.research.google.com/drive/1CPUu9LhE-fBAwrsSA2z53hufIDsf1ed_?usp=sharing)
2025-10-30T10:04:58
https://www.reddit.com/r/LocalLLaMA/comments/1ojvmty/llama3_is_just_as_vulnerable_to_im_absolutely/
Mysterious_Doubt_341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojvmty
false
null
t3_1ojvmty
/r/LocalLLaMA/comments/1ojvmty/llama3_is_just_as_vulnerable_to_im_absolutely/
false
false
self
0
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
Why Chunking Might Be the Most Underrated Skill in RAG Systems
10
# Why Chunking Matters I learned this the hard way: you can have the smartest LLM, best embeddings, and cleanest data — and still get garbage answers if your **chunking** is wrong. Let’s unpack why. # The 100-Page Manual Problem Imagine a 100-page technical manual. You want your RAG system to answer a specific question about it. You’ve got three options: # Option 1: Embed the entire document * You create one massive vector for the whole manual. **Problem:** Too broad. Every query “matches” the entire manual. Your system can’t tell “reset password” apart from “factory calibration.” # Option 2: Embed each sentence * Thousands of small, precise chunks. **Problem:** Too granular. A chunk like *“Click the button”* has zero context — which button? Where? Why? # Option 3: Chunk into 500-token passages * Each passage has full context, but not the whole book. **Sweet spot:** Self-contained, semantically cohesive, and contextually relevant. This single design decision — **how you chunk** — determines whether your RAG retrieves gold or nonsense. # What Makes a Good Chunk ✅ **Semantic Cohesion:** Everything in the chunk belongs to the same topic. ✅ **Self-Contained:** It makes sense on its own. ✅ **Appropriate Size:** Usually 500–1000 tokens (~400–800 words). ✅ **Respects Boundaries:** Natural breaks — paragraphs, sections, ideas. In short: a good chunk is one self-contained idea. # Experiment: Does Chunking Strategy Actually Matter? Let’s test it. I took a small company policy doc and tried different chunking strategies when answering the same question about it. **1. No chunking** → Embedding entire doc → Generic answer. **2. Sentence-level chunks** → Too fragmented. **3. Paragraph-level chunks** → Perfect. The retrieval hit the right section precisely. # Key Insights * **Chunk size matters** — 500–1000 tokens is the sweet spot for most RAG setups. * **Overlap matters** — 10–20% overlap helps preserve continuity across chunks. * **Structure matters** — Use logical separators (`\n\n`, `\n`, `.` ) before arbitrary cuts. * **Tokens matter** — LLMs “see” tokens, not characters or words. # The Takeaway Chunking is not preprocessing busywork. It’s the foundation of retrieval quality — the bridge between raw text and model understanding. # TL;DR **For RAG pipelines:** * Use recursive, semantic-aware chunking * Aim for ~500-token chunks with ~50-token overlap * Always respect paragraph and topic boundaries Your retrieval will instantly level up. 🧩 *Next time your RAG gives a dumb answer, don’t blame the embeddings — blame the chunking.* (A small token-window chunking sketch is appended after this entry.)
2025-10-30T09:56:04
https://www.reddit.com/r/LocalLLaMA/comments/1ojvhce/why_chunking_might_be_the_most_underrated_skill/
Pristine-Ask4672
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojvhce
false
null
t3_1ojvhce
/r/LocalLLaMA/comments/1ojvhce/why_chunking_might_be_the_most_underrated_skill/
false
false
self
10
null
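As flagged at the end of the post above, here is a small token-window chunking sketch for the "tokens, not characters" point. It uses `tiktoken` only as a convenient stand-in tokenizer (ideally you would count with the tokenizer of your embedding model), and the 500/50 numbers simply mirror the TL;DR; the fixed window here deliberately ignores paragraph boundaries, which the post recommends respecting in a real pipeline.

```python
import tiktoken  # pip install tiktoken; used purely as a stand-in tokenizer


def token_chunks(text: str, chunk_tokens: int = 500, overlap_tokens: int = 50,
                 encoding_name: str = "cl100k_base") -> list[str]:
    """Slice text into fixed-size token windows with overlap.

    Counting in tokens rather than characters matches what embedding models
    and LLMs actually see. Note: cutting on token boundaries can, rarely,
    split a multi-byte character at a window edge.
    """
    enc = tiktoken.get_encoding(encoding_name)
    tokens = enc.encode(text)
    step = chunk_tokens - overlap_tokens
    chunks = []
    for start in range(0, max(len(tokens), 1), step):
        window = tokens[start:start + chunk_tokens]
        if not window:
            break
        chunks.append(enc.decode(window))
        if start + chunk_tokens >= len(tokens):
            break
    return chunks


if __name__ == "__main__":
    enc = tiktoken.get_encoding("cl100k_base")
    doc = "Our remote work policy allows up to three days per week of home office. " * 120
    parts = token_chunks(doc)
    print(len(parts), "chunks;", [len(enc.encode(p)) for p in parts[:3]], "tokens in the first three")
```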
manifestai releases Brumby-14B-Base weights, claims "attention free" and inference "hundreds of time faster" for long context
42
also check out their blog page for the release: https://manifestai.com/articles/release-brumby-14b/ I only skimmed the HF card and blog, and one thing that struck me is that they seem to initialize the weights for their so-called "power retention" model architecture using the weights of Qwen3-14B, and they call the technique "retraining"... I guess this makes me a bit skeptical, as we might just refer to it as "fine tuning". And it makes me worry this is just a way to publish something AI-related so they can wrap their mouths around that VC money firehose. But, they said they spent $4000 to "retrain" it, so maybe...? Anyway, the really promising aspect here is the claim in the "Coming soon" section at the bottom of the Hugging Face page: >Fast long-context inference: Our fastest power retention inference kernels are hundreds of times faster than equivalent attention kernels on long contexts. We will update the architecture to incorporate these fast kernels. If this turns out to be even 50% true, that would be amazing. Suddenly Macs would be totally legitimate for serious industrial-scale inference. Which makes me think it's too good to be true... Time will tell.
2025-10-30T09:55:08
https://huggingface.co/manifestai/Brumby-14B-Base
ArcadesOfAntiquity
huggingface.co
1970-01-01T00:00:00
0
{}
1ojvgsx
false
null
t3_1ojvgsx
/r/LocalLLaMA/comments/1ojvgsx/manifestai_releases_brumby14bbase_weights_claims/
false
false
default
42
{'enabled': False, 'images': [{'id': 'u286g9i_4XNK4XbToyvDxLSuC8KvkBPXz6zTs2VwH_4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/u286g9i_4XNK4XbToyvDxLSuC8KvkBPXz6zTs2VwH_4.png?width=108&crop=smart&auto=webp&s=76e689af413d90bbe1a8e6360488eda34eebd796', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/u286g9i_4XNK4XbToyvDxLSuC8KvkBPXz6zTs2VwH_4.png?width=216&crop=smart&auto=webp&s=f38bb923382a648d4e6c63cc51d52e8940f01fc0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/u286g9i_4XNK4XbToyvDxLSuC8KvkBPXz6zTs2VwH_4.png?width=320&crop=smart&auto=webp&s=87507faf9f2586003d5ba62a41f64b24c8b33564', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/u286g9i_4XNK4XbToyvDxLSuC8KvkBPXz6zTs2VwH_4.png?width=640&crop=smart&auto=webp&s=501921f84aaec52c66e095e5820853a5d045617d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/u286g9i_4XNK4XbToyvDxLSuC8KvkBPXz6zTs2VwH_4.png?width=960&crop=smart&auto=webp&s=ae5452f6d2c24ef39f167c474e1bf9832bcaa64f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/u286g9i_4XNK4XbToyvDxLSuC8KvkBPXz6zTs2VwH_4.png?width=1080&crop=smart&auto=webp&s=f092961d889a2587d757664e4c786ab47b1ec8df', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/u286g9i_4XNK4XbToyvDxLSuC8KvkBPXz6zTs2VwH_4.png?auto=webp&s=32603b7605880fab0ac84c5c74a10015a223cb4b', 'width': 1200}, 'variants': {}}]}
Open Source Lovable with Custom Agents, Full Stack Support, and Local Models
18
I've been working on building an open-source version of Loveable that can run locally and start with full stack templates while you can bring your own keys. Right now we have react, vite, nextjs, fastapi, go. (Well, Ernest and I built it from the Tesslate/UIGEN team). You can try it online here (You can use free Qwen-Coder, GPT-5, and llama for free through the next 12 days before we run out of funding): [https://tesslate.com](https://tesslate.com) You guys can find the repo here if you want to give us a star: [https://github.com/TesslateAI/Studio](https://github.com/TesslateAI/Studio) and the docs at [https://docs.tesslate.com](https://docs.tesslate.com) We've been observing a lot of the problems that people run into while vibecoding: * Proprietary providers get to swap out your models whenever * You have to pay crazy subscription fees * They get to choose whenever they change their system prompts or context engine So, to change that, we made the entire thing super easy to swap. You can change the system prompts of your Agents, add different tools to them, and then use them in your code. If you have custom agent configurations and unique tools, you can simply add them to the agent-factory class that'll wrap it into the marketplace. This simply means the agent you are using today, will be the agent you are using until you specifically want it to switch. The other issue with vibecoding is the 80% problem or not getting what you want after a certain while and your application / architecture not scaling when you need it to. Now, I don't think I can fix that issue for you overnight, but we're slowly making progress to an idea of how to get a proper spec to prod. (Hence the idea tab.) We've also integrated project notes and a kanban board. Other features: You can use Llitellm, llama.cpp, LM Studio, Ollama, and Openrouter to add models to whatever agent you choose. You can also generate architecture diagrams from your code in mermaid. You can also open multiple browser tabs inside the application to view every route of your application. Enterprise Features: Litellm can provision keys for users, do cost tracking. You can do RBAC management and admin / agent / template / marketplace management. (Still working on the docs for that but we already have that implemented and open sourced). Most importantly, we believe in all things open source so the multi agent framework with mcp ([tframex](https://github.com/TesslateAI/TFrameX)), as well as this entire application is Apache 2.0. Tesslate is committed to keeping everything open source. Our next goals are to expand to mobile development, make better developer handoffs, work on deployment and management solutions, and just iterate on your guys' feedback, which would be very useful. And yeah! Today is the worst version that Tesslate Studio is ever going to be, we'll keep improving it with the communities feedback to get exactly what you guys are looking for. Ernest and I are not experts whatsoever but we're going to be working hard to bring the best version of this vision to life. Contributions or suggestions are always welcome, its an open source project after all. Here's our discord for updates: [Discord](https://discord.gg/DkzMzwBTaw)
2025-10-30T09:27:58
https://www.reddit.com/gallery/1ojv1hk
smirkishere
reddit.com
1970-01-01T00:00:00
0
{}
1ojv1hk
false
null
t3_1ojv1hk
/r/LocalLLaMA/comments/1ojv1hk/open_source_lovable_with_custom_agents_full_stack/
false
false
https://b.thumbs.redditm…24ml12h1NuLc.jpg
18
null
What's the best uncesnored model on huggingface right now for brainstorming ideas?
1
Generally want a model that is good at generating new ideas/visual concepts or brief stories, NSFW stuff included. Goal is for me to have inspiration for 3D animations, comics, etc. I have 64gb ram and 16gb vram. I figure I want something a little beefy because even 8B qwen3 models were unable to generate any ideas that are worth reading at all. I was looking into some Drummer models but they seem maybe too much for my specs?
2025-10-30T09:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1ojutt2/whats_the_best_uncesnored_model_on_huggingface/
measuringdistance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojutt2
false
null
t3_1ojutt2
/r/LocalLLaMA/comments/1ojutt2/whats_the_best_uncesnored_model_on_huggingface/
false
false
self
1
null
What can you run on a L40s?
0
Hello everyone, we are currently evaluating the investment in a local AI server for company purposes. We have confidential data, so we are evaluating all options, and of course local is the safest. We are at the point of evaluating the hardware and we wanted to understand if we really NEED those H100s. Does anyone have direct experience running LLMs locally on L40s? What are the biggest models that you can run? How many instances can one handle at the same time? Thank you all in advance.
2025-10-30T09:05:30
https://www.reddit.com/r/LocalLLaMA/comments/1ojup80/what_can_you_run_on_a_l40s/
fohemer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojup80
false
null
t3_1ojup80
/r/LocalLLaMA/comments/1ojup80/what_can_you_run_on_a_l40s/
false
false
self
0
null
I WAS WRONG!!! - MY SYSTEM JUST VERIFIED DEEPSEEK V3.1:671b-cloud - 100% REAL DATA
0
**89.3% AIME.** **66% SWE-bench.** **70x cheaper** than OpenAI. **97% OCR** on real docs. https://preview.redd.it/2osquji8s7yf1.png?width=1080&format=png&auto=webp&s=67b7105c952a491a20b105e76d7cf4e689ff2fe3 https://preview.redd.it/cbkxj0n9s7yf1.png?width=1080&format=png&auto=webp&s=46fb303abdce47e50cb7619e9cafc54f7412308c This all seems fair to me!
2025-10-30T09:02:05
https://www.reddit.com/r/LocalLLaMA/comments/1ojuncd/i_was_wrong_my_system_just_verified_deepseek/
Ok_Buddy_952
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojuncd
false
null
t3_1ojuncd
/r/LocalLLaMA/comments/1ojuncd/i_was_wrong_my_system_just_verified_deepseek/
false
false
https://b.thumbs.redditm…gagiXcU91K5g.jpg
0
null
Is there any Kokoro 82M version or alternative that has the same lifelike quality but is way, way faster? Already tried ONNX; not fast enough.
5
Title
2025-10-30T08:52:40
https://www.reddit.com/r/LocalLLaMA/comments/1ojui14/is_there_any_kokoro_82m_version_or_alternative/
KledMainSG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ojui14
false
null
t3_1ojui14
/r/LocalLLaMA/comments/1ojui14/is_there_any_kokoro_82m_version_or_alternative/
false
false
self
5
null
Ollama cannot be trusted
1
[removed]
2025-10-30T08:37:08
https://www.reddit.com/gallery/1oju9yx
QuackerEnte
reddit.com
1970-01-01T00:00:00
0
{}
1oju9yx
false
null
t3_1oju9yx
/r/LocalLLaMA/comments/1oju9yx/ollama_cannot_be_trusted/
false
false
https://b.thumbs.redditm…PD_nAsxCKa-g.jpg
1
null
Qwen3-235B-A22B-Instruct Prioritizing Few-Shot Examples Over Explicit Instructions
5
Hi everyone,

I'm working with the Qwen3-235B-A22B-Instruct model and encountering a consistent issue where the model's behavior is more heavily influenced by the patterns in few-shot examples than by the explicit, contradictory rules given in the system prompt. Even when I add critical "meta-instructions" (e.g., "If rules and examples conflict, you MUST follow the rules"), the model still defaults to copying the pattern from the example.

The Problem: "Example Bias" Overriding Rules

The core issue is a direct conflict between a general rule and a specific example. The model incorrectly learns from the example's flawed pattern instead of obeying the correct rule.
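One way to pin this down is a minimal repro where the system rule and the few-shot demonstration deliberately disagree, then checking which one the completion follows. A hypothetical sketch (the endpoint, the served model name, and the rule/example pair are assumptions for illustration, using any OpenAI-compatible server such as vLLM):

```python
# Hypothetical repro harness for "example bias": the system rule says one thing,
# the few-shot example demonstrates the opposite, and we check which wins.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # e.g. a local vLLM server

messages = [
    # Rule: answers must be uppercase. Meta-instruction: rules beat examples.
    {"role": "system", "content":
        "Answer with a single word in UPPERCASE. "
        "If the rules and the examples conflict, you MUST follow the rules."},
    # Few-shot example that deliberately violates the rule (lowercase answer).
    {"role": "user", "content": "Capital of France?"},
    {"role": "assistant", "content": "paris"},
    # Probe question.
    {"role": "user", "content": "Capital of Japan?"},
]

resp = client.chat.completions.create(
    model="Qwen3-235B-A22B-Instruct",  # whatever name your server exposes
    messages=messages,
    temperature=0,
)
answer = resp.choices[0].message.content.strip()
print(answer, "-> follows RULE" if answer.isupper() else "-> copies EXAMPLE")
```

Running a batch of such conflicting pairs gives a rough rate of how often examples override rules, which makes it easier to compare prompt variants (e.g., restating the rule after the examples) instead of judging single completions.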
2025-10-30T08:33:28
https://www.reddit.com/r/LocalLLaMA/comments/1oju83s/qwen3235ba22binstruct_prioritizing_fewshot/
Clean_Radish8983
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oju83s
false
null
t3_1oju83s
/r/LocalLLaMA/comments/1oju83s/qwen3235ba22binstruct_prioritizing_fewshot/
false
false
self
5
null
[ANN] Pocket Agents — A Practical Guide to On-Device AI (Kindle)
0
Hey folks — I just published a book I’ve been working on for a while: ***Pocket Agents: A Practical Guide to On-Device Artificial Intelligence*** (Kindle Edition).

This is a hands-on, full-stack guide to building autonomous, local AI agents using SLMs like **Gemma**, **Phi-3**, and **Qwen** — all running directly on your own hardware. It’s based on my experience building **BastionChat** (https://apps.apple.com/fr/app/bastionchat/id6747981691), a fully local assistant that proves you don’t need the cloud to get real intelligence. This book distills everything I learned: from QLoRA fine-tuning to llama.cpp deployment to building persistent, multi-step agentic workflows.

# What’s inside:

* 🧠 **Sovereign AI principles**: local-first, private-by-default, fully autonomous
* 🔧 **Practical stack**: QLoRA, llama.cpp, agentic patterns, memory, tool use
* 💻 **Device-level deployment**: how to reclaim the full compute of your laptop or phone
* 🔒 **Data sovereignty**: your data stays local, period

This is for anyone who’s serious about building **independent AI systems** — not just running models, but designing agents that serve you and only you.

If that resonates, here’s the link: [https://www.amazon.fr/dp/B0FXXKPPRZ](https://www.amazon.fr/dp/B0FXXKPPRZ)

Would love feedback from this community — especially if you’re building similar systems or want to push the boundaries of what local agents can do.

\#SovereignAI #SLM #OnDeviceAI #LocalLLaMA #BastionChat
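For readers who have not tried on-device deployment before, here is a minimal, hypothetical illustration (not taken from the book; the GGUF file path, model choice, and prompt are assumptions) of the llama.cpp-style local inference the post refers to, using the llama-cpp-python bindings:

```python
# Minimal on-device chat sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path is an assumption -- point it at any instruct-tuned GGUF you have downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/phi-3-mini-4k-instruct-q4.gguf",  # hypothetical local file
    n_ctx=4096,       # context window to allocate
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise local assistant."},
        {"role": "user", "content": "Summarize why on-device inference helps privacy."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Everything here runs on the local machine; no request leaves the device, which is the "data stays local" property the post is arguing for.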
2025-10-30T08:30:37
https://i.redd.it/2rs9e82bm7yf1.png
frayala87
i.redd.it
1970-01-01T00:00:00
0
{}
1oju6kj
false
null
t3_1oju6kj
/r/LocalLLaMA/comments/1oju6kj/ann_pocket_agents_a_practical_guide_to_ondevice/
false
false
default
0
{'enabled': True, 'images': [{'id': '2rs9e82bm7yf1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/2rs9e82bm7yf1.png?width=108&crop=smart&auto=webp&s=7842f174f486266f47344f7750a83daaa9b0e71a', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/2rs9e82bm7yf1.png?width=216&crop=smart&auto=webp&s=a5fba3e9537f02a6279e5a43bed4ca989fe52ccc', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/2rs9e82bm7yf1.png?width=320&crop=smart&auto=webp&s=820b1c8c6cbe34ad3b94e064b7e8f22d23ed0a12', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/2rs9e82bm7yf1.png?width=640&crop=smart&auto=webp&s=c52c61661da0750829c19ee9ceaf43b7f30b70fb', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/2rs9e82bm7yf1.png?width=960&crop=smart&auto=webp&s=6080c3294bc4c6232eb359978d0a1fb406af5d00', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/2rs9e82bm7yf1.png?auto=webp&s=a9d83588c02e9a12bf029d8bcd788917c433e734', 'width': 1024}, 'variants': {}}]}