title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
You're using HuggingFace wrong. Stop downloading pre-quantized GGUFs and start building hardware-optimized, domain-specific models. Here's the pipeline I built to do it. | 0 | **TL;DR:** Downloading TheBloke's Q4\_K\_M and calling it a day is lazy and you're leaving massive performance on the table. I built [LlamaPajamas](https://github.com/llama-farm/LlamaPajamas) (experimental / open-source), a pipeline that downloads full-precision models, converts them to the optimal format for your specific hardware (CoreML/TensorRT/ONNX for vision/SST, MLX/GGUF/TensorRT-LLM for LLMs), and then applies importance quantization with domain-specific calibration data. An 8B model quantized for YOUR use case beats a 70B general-purpose model for YOUR task. Also discovered most quantization benchmarks are lying to you.
# The problem with how everyone uses HuggingFace
Go to any [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) thread. "What model should I download?" And everyone recommends some pre-quantized GGUF.
That's fine for playing around. It's completely wrong for production or for real workloads.
Here's what you're doing when you download a pre-quantized model:
1. Someone else decided which quantization format to use
2. Someone else decided which calibration data to use (usually generic web text)
3. Someone else decided which weights to preserve and which to compress
4. You have no idea if any of those decisions match your use case
You're running a model that was optimized for nobody in particular on hardware it wasn't optimized for.
And then you wonder why your local setup feels worse than the APIs.
# The approach that actually works
Download the full-precision model. Do your own conversion. Do your own quantization with your own calibration data.
Yes, it takes more time. Yes, it requires understanding what you're doing. But you end up with a model that's actually optimized for your hardware and your task instead of some generic middle ground.
That's what LlamaPajamas does. It's the pipeline for doing this properly.
# Different model types need completely different backends
This is where most people screw up. They treat all AI models the same. "Just convert it to GGUF and run it."
No. Different architectures run best on completely different backends.
**Vision and Speech models (Whisper, YOLO, ViT, CLIP)**
These are mostly matrix multiplications and convolutions. They're well-suited for:
* **CoreML** on Apple Silicon → Uses the Neural Engine and GPU properly. Whisper-tiny runs in 2 seconds for a 1-minute clip on M1 Max.
* **TensorRT** on NVIDIA → Graph optimization and tensor cores. YOLO inference at 87ms per frame.
* **ONNX** for CPU/AMD → Portable, runs everywhere, good enough performance.
You probably know this, but Do NOT run vision models through GGUF or MLX. That's not what those backends are for and they really don't support it (yet).
**Large Language Models**
LLMs have different compute patterns. Attention mechanisms, KV caches, sequential token generation. They need:
* **MLX** on Apple Silicon → Apple's ML framework built for LLMs on M-series chips. Way better than CoreML for text generation.
* **GGUF** for CPU/universal → llama.cpp's format. Works everywhere, highly optimized for CPU inference, and this is where you do importance quantization.
* **TensorRT-LLM** on NVIDIA → Not regular TensorRT. TensorRT-LLM is specifically optimized for autoregressive generation, KV caching, and batched inference on NVIDIA GPUs.
Notice that CoreML isn't in the LLM list. CoreML is great for vision but it's not designed for the sequential generation pattern of LLMs. MLX is what you want on Apple Silicon for text.
Similarly, regular TensorRT is great for vision but you need TensorRT-LLM for language models. Different optimization strategies entirely.
# The quantization stack: format first, then hyper-compress
Once you've got the right backend format, then you quantize. And for LLMs, you should be going way more aggressive than Q4\_K\_M.
**The GGUF quantization ladder:**
|Format|Compression|Use Case|
|:-|:-|:-|
||
|F16|1x|Baseline, too big for most uses|
|Q8\_0|2x|Overkill for most tasks|
|Q4\_K\_M|4x|Where most people stop|
|IQ4\_XS|5x|Where you should start looking|
|IQ3\_XS|6x|Sweet spot for most use cases|
|IQ2\_XS|8x|Aggressive but works with good calibration|
Most people stop at Q4\_K\_M because that's what the pre-quantized downloads offer. You're missing the whole point.
IQ (importance quantization) uses calibration data to figure out which weights matter. Generic calibration preserves weights that matter for generic tasks. Domain-specific calibration preserves weights that matter for YOUR task.
# Domain-specific calibration changes everything
This is the core insight that most people miss.
We created 7 calibration datasets:
|Domain|Use Case|
|:-|:-|
||
|General|Multi-purpose balanced|
|Tool Calling|Function/API calling|
|Summarization|Text compression|
|RAG|Document Q&A|
|Medical|Healthcare/diagnosis|
|Military|Defense/tactical|
|Tone Analysis|Sentiment/emotion|
**Real results**: A medical model quantized with medical calibration data maintains 95%+ task accuracy at IQ3\_XS (900MB). The same model with general calibration drops to 85%.
That's 10% accuracy difference from calibration data alone at the same file size.
A well-calibrated IQ3\_XS model for your specific domain will outperform a generic Q4\_K\_M for your task. Smaller file, better performance. That's not magic, that's just optimizing for what you actually care about instead of what some random person on the internet cared about.
# The calibration lesson that cost us
We built all these calibration datasets and felt good about ourselves. Then tool\_calling quantization completely failed.
Turns out llama-imatrix needs at least 4,096 tokens to generate a useful importance matrix. Our tool\_calling dataset only had 1,650 tokens.
Had to rebuild everything. Medical prompts went from "diagnose chest pain" to full clinical scenarios with differential diagnosis, test ordering, and treatment plans. Each calibration file needs to hit that token threshold or your importance matrix is garbage.
Check your token counts before running quantization. Learned this the hard way.
# Your evaluation is lying to you
LlamaPajamas has a built-in evaluation tool - the first time I did it completely wrong (a lesson I am sure many have run into).
We were running evaluations and getting 90%+ accuracy on quantized models. Great! Ship it!
The evaluation was garbage.
Our "lenient mode" accepted any answer containing the right letter. Correct answer is "A"? We'd accept:
* "A"
* "A."
* "A) Because the mitochondria is the powerhouse of the cell"
* "The answer is A"
In production, most of those are WRONG. If your system expects "A" and gets "A) Because...", that's a parsing failure.
We built strict mode. Exact matches only.
Accuracy dropped from 90% to \~50%.
That's the truth. That's what your model actually does. The 90% number was a lie that made us feel good.
We also built category-specific prompts:
* Math: "Answer with ONLY the number. No units. No explanations."
* Multiple choice: "Answer with ONLY the letter. No punctuation."
* Tool calling: "Output ONLY the function name."
If you're not evaluating with strict exact-match, you don't know what your model can actually do, expecially in an agentic / tool calling world.
# Handling thinking models
Some models output reasoning in `<think>` tags:
<think>
The question asks about cellular respiration which is option B
</think>
B
Our regex broke when outputs got truncated mid-tag. Fixed it with two-pass extraction: remove complete tags first, then clean up unclosed tags.
Thinking models can reason all they want internally but still need exact final answers.
# Actual benchmark results
**Vision (YOLO-v8n)**
* CoreML FP16: 6.2MB, 87ms per frame on M1 (m laptop)
* TensorRT FP16: 6MB, 45ms per frame on RTX 3090
**Speech (Whisper-Tiny)**
* CoreML INT8: 39MB, 2.1s for 1-minute audio
* ONNX: 39MB, 3.8s same audio on CPU
**LLM (Qwen3 1.7B)**
|Format|Size|Strict Accuracy|
|:-|:-|:-|
||
|F16 baseline|3.8 GB|78%|
|Q4\_K\_M|1.2 GB|75%|
|IQ3\_XS (general)|900 MB|73%|
|IQ3\_XS (domain)|900 MB|76% on domain tasks|
|IQ2\_XS|700 MB|68%|
The sweet spot is IQ3\_XS with domain calibration. You get 6x compression with minimal accuracy loss on your target task. For 8B models that's 15GB down to 2.5GB.
# How to use the pipeline
Install:
git clone https://github.com/llama-farm/llama-pajamas
cd llama-pajamas
curl -LsSf https://astral.sh/uv/install.sh | sh
./setup.sh
Download full model and convert to GGUF F16:
cd quant
uv run llama-pajamas-quant quantize \
--model Qwen/Qwen3-1.7B\
--format gguf \
--precision F16 \
--output ./models/qwen3-1.7b
IQ quantize with your domain calibration:
uv run llama-pajamas-quant iq quantize \
--model ./models/qwen3-1.7b/gguf/F16/model.gguf \
--domain medical \
--precision IQ3_XS \
--output ./models/qwen3-1.7b-medical-iq3
Evaluate with strict mode (no lying to yourself):
uv run llama-pajamas-quant evaluate llm \
--model-dir ./models/qwen3-1.7b-medical-iq3/*.gguf \
--num-questions 140
Convert vision model to CoreML:
uv run llama-pajamas-quant quantize \
--model yolov8n \
--format coreml \
--precision fp16 \
--output ./models/yolo-coreml
# What we're building next
**Automatic calibration generation**: Describe your use case, get calibration data generated automatically.
**Quality prediction**: Estimate accuracy at different quantization levels before running the full process.
**Mobile export**: Direct to CoreML for iOS, TFLite for Android.
# The caveat: general-use GGUFs have their place
Look, there are a lot of great pre-quantized GGUFs out there. TheBloke did great work. Bartowski's quants are solid. For playing around with different models and getting a feel for what's out there, they're fine.
But here's my question: why are you running models locally for "general use"?
If you just want a general-purpose assistant, use Claude or ChatGPT. They're better at it than any local model and you don't have to manage infrastructure.
The reason to run locally is privacy, offline access, or specialization. And if you need privacy or offline access, you probably have a specific use case. And if you have a specific use case, you should be fine-tuning and using domain-specific iMatrix quantization to turn your model into a specialist.
A 3B model fine-tuned on your data and quantized with your calibration will destroy a generic 8B model for your task. Smaller, faster, better. That's the whole point.
Stop downloading generic quants and hoping they work for your use case. Download full models, fine-tune if you can, and quantize with calibration data that matches what you're actually trying to do.
That's how you get local AI that actually competes with the APIs.
# Links
GitHub: [https://github.com/llama-farm/LlamaPajamas](https://github.com/llama-farm/LlamaPajamas)
Happy to answer questions about hardware-specific optimization, calibration data design, or why your current evaluation is probably lying to you.
P.S.
Why LlamaPajamas - you shouldn't just make pajamas 1 size fits all, they need to be specialized for the hardware (the animal). Plus my daughter and son love the book :)
https://preview.redd.it/cujx94sly82g1.png?width=1024&format=png&auto=webp&s=aae93c60959c3ab33c74a1c09c931f5a8bd599c7
| 2025-11-19T17:16:18 | https://www.reddit.com/r/LocalLLaMA/comments/1p1dkzh/youre_using_huggingface_wrong_stop_downloading/ | badgerbadgerbadgerWI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1dkzh | false | null | t3_1p1dkzh | /r/LocalLLaMA/comments/1p1dkzh/youre_using_huggingface_wrong_stop_downloading/ | false | false | 0 | null | |
SAM 3: Segment Anything with Concepts, by Meta Superintelligence Labs | 231 | 2025-11-19T17:10:27 | https://huggingface.co/facebook/sam3 | xenovatech | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1p1df5y | false | null | t3_1p1df5y | /r/LocalLLaMA/comments/1p1df5y/sam_3_segment_anything_with_concepts_by_meta/ | false | false | 231 | {'enabled': False, 'images': [{'id': '4Uyf8OlIkFBtIXR-wKdBIOZqZgS3NkQSX04eUeTDY7w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4Uyf8OlIkFBtIXR-wKdBIOZqZgS3NkQSX04eUeTDY7w.png?width=108&crop=smart&auto=webp&s=637a3838ffd450c9527b687ec681839a39817e8e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4Uyf8OlIkFBtIXR-wKdBIOZqZgS3NkQSX04eUeTDY7w.png?width=216&crop=smart&auto=webp&s=eff6c2409fa77233001b38b9e06eec5c984d83e4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4Uyf8OlIkFBtIXR-wKdBIOZqZgS3NkQSX04eUeTDY7w.png?width=320&crop=smart&auto=webp&s=02cced67766456c20e3388394fd803f193082700', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4Uyf8OlIkFBtIXR-wKdBIOZqZgS3NkQSX04eUeTDY7w.png?width=640&crop=smart&auto=webp&s=88abe479f5e76ceeae47c92c033ef5091fe19a40', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4Uyf8OlIkFBtIXR-wKdBIOZqZgS3NkQSX04eUeTDY7w.png?width=960&crop=smart&auto=webp&s=7121b7d4e5b99910627bf61ccb8964a84ad2b255', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4Uyf8OlIkFBtIXR-wKdBIOZqZgS3NkQSX04eUeTDY7w.png?width=1080&crop=smart&auto=webp&s=dab2c1ded64265e9a568353990426f4451bb5773', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4Uyf8OlIkFBtIXR-wKdBIOZqZgS3NkQSX04eUeTDY7w.png?auto=webp&s=007ebac157b6f78bd032623a523ad910719d4d48', 'width': 1200}, 'variants': {}}]} | ||
LM Studio > 2+ MCP Servers ??? | 1 | I would like to add several MCP servers to LM Studio. But when I edit mcp.json in LM Studio (Install > Edit mcp.json), one server works fine, but when I add a second one, I can no longer save mcp.json (the button is grayed out). **Help, what am I doing wrong**?
**Example:**
{
"mcpServers": {
"duckduckgo": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"mcp/duckduckgo"
]
}
"time": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"mcp/time"
]
}
}
} | 2025-11-19T16:49:51 | https://www.reddit.com/r/LocalLLaMA/comments/1p1cuyt/lm_studio_2_mcp_servers/ | AdDizzy8160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1cuyt | false | null | t3_1p1cuyt | /r/LocalLLaMA/comments/1p1cuyt/lm_studio_2_mcp_servers/ | false | false | self | 1 | null |
MacOS 26.2 to add full 'Neural Accelerator' support for M5 chips | 24 | No more need for the custom MLX patch in the Weinbach anrticle. Also clustering at 80Gb/s over Tb5.
[https://www.engadget.com/ai/you-can-turn-a-cluster-of-macs-into-an-ai-supercomputer-in-macos-tahoe-262-191500778.html](https://www.engadget.com/ai/you-can-turn-a-cluster-of-macs-into-an-ai-supercomputer-in-macos-tahoe-262-191500778.html) | 2025-11-19T16:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/1p1coup/macos_262_to_add_full_neural_accelerator_support/ | PracticlySpeaking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1coup | false | null | t3_1p1coup | /r/LocalLLaMA/comments/1p1coup/macos_262_to_add_full_neural_accelerator_support/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'D1ukRA4siuEaCFz6DVadqElt2e20I1ADaPyNRatEi6Q', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/D1ukRA4siuEaCFz6DVadqElt2e20I1ADaPyNRatEi6Q.jpeg?width=108&crop=smart&auto=webp&s=192ec3a5f7e1d079074f4e16560c1a0cb7101aaf', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/D1ukRA4siuEaCFz6DVadqElt2e20I1ADaPyNRatEi6Q.jpeg?width=216&crop=smart&auto=webp&s=98ea2302b8aa35164b7ee655554a3fcc8dcc3223', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/D1ukRA4siuEaCFz6DVadqElt2e20I1ADaPyNRatEi6Q.jpeg?width=320&crop=smart&auto=webp&s=519abbb03c3dc8bdfaf68b3fe88c8548773565dd', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/D1ukRA4siuEaCFz6DVadqElt2e20I1ADaPyNRatEi6Q.jpeg?width=640&crop=smart&auto=webp&s=60f2af809ca89cb92e5fa37a7d3b633a9b430053', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/D1ukRA4siuEaCFz6DVadqElt2e20I1ADaPyNRatEi6Q.jpeg?width=960&crop=smart&auto=webp&s=7369dc94ef4ff697445fc0a60183176796d1e7ea', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/D1ukRA4siuEaCFz6DVadqElt2e20I1ADaPyNRatEi6Q.jpeg?width=1080&crop=smart&auto=webp&s=5762d741f82da13199f2ddb138edbfca15214c4f', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/D1ukRA4siuEaCFz6DVadqElt2e20I1ADaPyNRatEi6Q.jpeg?auto=webp&s=fe5f44400f3bbc00ce38babdba197305cfa1766a', 'width': 1200}, 'variants': {}}]} |
Total noob looking for Knowledge Distillation/ Fine Tuning Advice | 2 | I made my own JSON sample by taking truetomes 100k dataset and adapting a sample of 200 or so questions and replaced the answers with the LLM of my choice as I like its responses over others.
Where i'm struggling is now the next step. The original dataset was in parquet, but I had to convert it to JSON single line to make it usable to get a new dataset with answers.
Does the community suggest any resources I can perform the fine tuning/knowledge distillation against Gemma 12B with this new data set? I'm tearing through unsloth documents and some other sites but I'm getting a bit lost.
Any tips to validate the dataset as some of the user questions contain code and I want to make sure the formatting will come across correctly when I perform the KD/FT.
This is purely a learning experience so no negative results if I flop, only positives. | 2025-11-19T16:36:40 | https://www.reddit.com/r/LocalLLaMA/comments/1p1chrh/total_noob_looking_for_knowledge_distillation/ | _Fancy_Bear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1chrh | false | null | t3_1p1chrh | /r/LocalLLaMA/comments/1p1chrh/total_noob_looking_for_knowledge_distillation/ | false | false | self | 2 | null |
GLM 4.5 Air on strix point...? | 2 | I have an AMD HX 370 notebook with 64GB ram (no nvidia card), which isn't particularly fast for LLMs but works ok for small models, including Qwen3 32B and Gemma 27B.
I was considering upgrading to 96GB ram to try a bigger models like GLM 4.5 Air, however I'm not sure if that's a bad idea, if it would even run at all?
Did anybody try to run this model on similar hardware?
Thanks! | 2025-11-19T16:27:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p1c93o/glm_45_air_on_strix_point/ | mark_haas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1c93o | false | null | t3_1p1c93o | /r/LocalLLaMA/comments/1p1c93o/glm_45_air_on_strix_point/ | false | false | self | 2 | null |
About the person worried about an OpenAI data breach | 1 | [removed] | 2025-11-19T16:17:47 | https://www.reddit.com/r/LocalLLaMA/comments/1p1bz8g/about_the_person_worried_about_an_openai_data/ | Cool-Current-134 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1bz8g | false | null | t3_1p1bz8g | /r/LocalLLaMA/comments/1p1bz8g/about_the_person_worried_about_an_openai_data/ | false | false | self | 1 | null |
Best Courses for learning LLM | 5 | I have been playing with local LLMs for a while, and really want to get serious with it. I would like to take a course or something to really amp up my ability but I am not sure where to start. Has anyone taken any classes or can recommend a video course or anything? Thank you in advance. | 2025-11-19T16:09:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p1brbk/best_courses_for_learning_llm/ | PersonSuitTV | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1brbk | false | null | t3_1p1brbk | /r/LocalLLaMA/comments/1p1brbk/best_courses_for_learning_llm/ | false | false | self | 5 | null |
AMA with MiniMax — Ask Us Anything! | 180 | Hi [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/)! We’re really excited to be here, thanks for having us.
I’m Skyler ([u/OccasionNo6699](https://www.reddit.com/user/OccasionNo6699/)), head of engineering at MiniMax, the lab behind:
* [MiniMax-M2](https://x.com/MiniMax__AI/status/1982674798649160175?s=20)
* [Hailuo 2.3](https://x.com/Hailuo_AI/status/1983382728343994414)
* [MiniMax Speech 2.6](https://x.com/Hailuo_AI/status/1983661667872600296)
* [MiniMax Music 2.0](https://x.com/Hailuo_AI/status/1983964920493568296)
Joining me today are:
* Pengyu Zhao, [u/Wise\_Evidence9973](https://www.reddit.com/user/Wise_Evidence9973/) — Head of LLM Research
* Jade Cai, [u/srtng](https://www.reddit.com/user/srtng/) — Head of Developer Community
**The AMA will run from 8AM-11AM PST with our core MiniMax tech team continuing to follow up on questions over the next 48 hours.** | 2025-11-19T15:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/1p1b550/ama_with_minimax_ask_us_anything/ | OccasionNo6699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1b550 | false | null | t3_1p1b550 | /r/LocalLLaMA/comments/1p1b550/ama_with_minimax_ask_us_anything/ | false | true | self | 180 | null |
RamaLama strives to make working with AI simple, straightforward, and familiar by using OCI containers. | 2 | 2025-11-19T15:28:43 | https://github.com/containers/ramalama | autodidacticasaurus | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p1anwi | false | null | t3_1p1anwi | /r/LocalLLaMA/comments/1p1anwi/ramalama_strives_to_make_working_with_ai_simple/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'Vy3A11YYoT7GF2K1mf33E4_0fJuNt7SW0a58HiNtw5c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Vy3A11YYoT7GF2K1mf33E4_0fJuNt7SW0a58HiNtw5c.png?width=108&crop=smart&auto=webp&s=ca79a22d5d4393d0176d113feb41b0cad470ca2b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Vy3A11YYoT7GF2K1mf33E4_0fJuNt7SW0a58HiNtw5c.png?width=216&crop=smart&auto=webp&s=29b6e5769f0fe2a4f73a99ff35fa67831f9fbfd7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Vy3A11YYoT7GF2K1mf33E4_0fJuNt7SW0a58HiNtw5c.png?width=320&crop=smart&auto=webp&s=a78334baa390d87297f03b284010e0618b690d8e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Vy3A11YYoT7GF2K1mf33E4_0fJuNt7SW0a58HiNtw5c.png?width=640&crop=smart&auto=webp&s=4d9a8e2a60de8c4d7d50e0ff91a24b4d2e2e4060', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Vy3A11YYoT7GF2K1mf33E4_0fJuNt7SW0a58HiNtw5c.png?width=960&crop=smart&auto=webp&s=aaa9333fd19a1853554e7f9fa6fa7d3bd0982d4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Vy3A11YYoT7GF2K1mf33E4_0fJuNt7SW0a58HiNtw5c.png?width=1080&crop=smart&auto=webp&s=05302f4277f0bae2c8129d5206a86789ff28c4f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Vy3A11YYoT7GF2K1mf33E4_0fJuNt7SW0a58HiNtw5c.png?auto=webp&s=0f631a7a907c44253e6903130eaecf1a1818df68', 'width': 1200}, 'variants': {}}]} | |
advice to create a full PHP site with qwen3 32B with 5070 and 13k context | 0 | Hello
I'm using qwen3 32B thinking to desing a full website but i run out of context as only the thinking part takes 9000 token.
I know html php js and so on so i can read it but is there a way to save tokens on the thinking or have a bigger thinking context?
Here my prompt: llama-server -hf bartowski/Qwen\_Qwen3-VL-32B-Thinking-GGUF -b 2048 -ub 2048 --threads 4 -c 13192 --n-gpu-layers 24 -ot "\[1-2\]\[0-2\].\*\_exps.=CPU" -ot "\[2-9\].\*\_exps.=CPU" --device Vulkan0 --prio 3 --no-mmap -fa on --jinja
| 2025-11-19T15:08:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p1a4dk/advice_to_create_a_full_php_site_with_qwen3_32b/ | Flimsy_Leadership_81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1a4dk | false | null | t3_1p1a4dk | /r/LocalLLaMA/comments/1p1a4dk/advice_to_create_a_full_php_site_with_qwen3_32b/ | false | false | self | 0 | null |
LLM for learning | 0 | I want to learn about hacking. I was thinking about running an uncensored model with my RTX4060ti 8gb. Which model should be best? I am noob so your advice would be appreciated. | 2025-11-19T14:38:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p19dio/llm_for_learning/ | link_29328 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p19dio | false | null | t3_1p19dio | /r/LocalLLaMA/comments/1p19dio/llm_for_learning/ | false | false | self | 0 | null |
A Model-Agnostic Cognitive Framework: Tonious Part 2 — Multimodal Moments, Structured Recall, and Mode-Adaptive Reasoning | 0 | Part 2 of the Tonious demonstration.
Tonious is *not a model*.
It’s a model-agnostic cognitive architecture designed to layer symbolic structure, memory routing, and multimodal reasoning on top of any LLM—7B, 70B, or whatever comes next.
This update shows three interacting subsystems:
1. **Video → Trinity Stream → Moments Pipeline**
Raw video is compressed into a structured “Trinity Stream” (Scene / Voice / Environment).
These streams are then converted into temporally ordered **moments**, reducing ambiguity and cognitive load before the model sees anything.
Even a 7B model can process multi-minute videos because Tonious performs the segmentation, ordering, and abstraction upstream of the LLM.
2. **Mode-Adaptive Reasoning (General / Video / Recall)**
Each mode enforces its own ruleset, ensuring the model behaves deterministically:
— *General*: normal conversation
— *Video*: constrained temporal reasoning over extracted moments
— *Recall*: structured memory retrieval through an external symbolic layer
The model never “hallucinates modes” because its ontology is externally enforced.
3. **Tree-of-Life Memory Layer**
Tonious uses a symbolic memory graph to store conversation states.
Recall does not rely on model weights or fine-tuning.
Instead, the model is *prompted* with retrieved nodes from the graph, producing consistent long-form recall even on small models.
The entire stack is **scalable** and **model-agnostic**.
Swap out a 7B for a 70B or Qwen3, and the architecture immediately inherits the improvement without retraining.
Part 3 will expand on the architectural consequences of this approach.
Constructive critique is welcome. | 2025-11-19T14:29:21 | https://www.reddit.com/gallery/1p1953b | GriffinThibault | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p1953b | false | null | t3_1p1953b | /r/LocalLLaMA/comments/1p1953b/a_modelagnostic_cognitive_framework_tonious_part/ | false | false | 0 | null | |
Need Advice: Dual GPU (3080 Ti + 5080) or Sell and Upgrade for Local LLM Work? | 2 | I’m setting up a dedicated machine to run an LLM for software-development assistance and for generating episodic content (text-to-image and text-to-video). The system will act like a local server, accessible over Ethernet. It has 64 GB of DDR5, but I’m stuck on what GPU configuration makes the most sense.
Right now I have a 3080 Ti and a 5080 available. My motherboard supports one PCIe 5.0 slot and one PCIe 4.0 slot. Should I install both and run a multi-GPU setup, or would it be better to sell them and buy something like an AMD Radeon AI PRO R9700 card instead?
Any advice from people running local LLMs or generative models would be appreciated. | 2025-11-19T14:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1p18znx/need_advice_dual_gpu_3080_ti_5080_or_sell_and/ | Shoddy_Bed3240 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p18znx | false | null | t3_1p18znx | /r/LocalLLaMA/comments/1p18znx/need_advice_dual_gpu_3080_ti_5080_or_sell_and/ | false | false | self | 2 | null |
Scale-out is the silent killer of LLM applications. Are we solving the wrong problem? | 0 | Everyone's obsessed with cold starts. But cold starts are a one-time cost. The real architecture breaker is slow scale-out.
When traffic spikes and you need to spin up a new replica of a 70B model, you're looking at 5-10 minutes of loading and warm-up. By the time your new node is ready, your users have already timed out.
You're left with two terrible choices:
· Over-provision and waste thousands on idle GPUs.
· Under-provision and watch your service break under load.
How are you all handling this? Is anyone actually solving the scale-out problem, or are we just accepting this as the cost of doing business? Appreciate it.
| 2025-11-19T14:20:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p18x37/scaleout_is_the_silent_killer_of_llm_applications/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p18x37 | false | null | t3_1p18x37 | /r/LocalLLaMA/comments/1p18x37/scaleout_is_the_silent_killer_of_llm_applications/ | false | false | self | 0 | null |
Meituan Longcat releases AMO Bench: Kimi k2 Thinking is the best Math AI | 21 | ERROR: type should be string, got "https://preview.redd.it/k6cgs17t082g1.png?width=2816&format=png&auto=webp&s=e0fd6ad58bd315eb3bf5fa0b376d71f21b7342a0\n\nhttps://preview.redd.it/8t68e95w082g1.png?width=2506&format=png&auto=webp&s=ddd00c72663e56e797b5d74982d89939554beb2c\n\n**Original Problems**: 50 brand-new problems designed by human experts\n\n**Guaranteed Difficulty**: Cross-validated to ensure at least IMO-level difficulty\n\n**Automatic Grading**: Hybrid grading algorithm with 99.2% scoring accuracy\n\n**Human-Annotated CoT**: Facilitate further case analysis to improve model" | 2025-11-19T14:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/1p18lim/meituan_longcat_releases_amo_bench_kimi_k2/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p18lim | false | null | t3_1p18lim | /r/LocalLLaMA/comments/1p18lim/meituan_longcat_releases_amo_bench_kimi_k2/ | false | false | 21 | null | |
Meituan Longcat releases AMO Bench: Kimi k2 Thinking achieved SOTA performance | 1 | [removed] | 2025-11-19T14:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/1p18j84/meituan_longcat_releases_amo_bench_kimi_k2/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p18j84 | false | null | t3_1p18j84 | /r/LocalLLaMA/comments/1p18j84/meituan_longcat_releases_amo_bench_kimi_k2/ | false | false | 1 | null | |
AralingBench | 5 | Check out our latest work at kaust ! A new benchmark for evaluating Arabic linguistic capabilities of large language models …
Being good at solving math problems in Arabic doesn’t mean you’re good at Arabic !
Support with an upvote: [https://huggingface.co/papers/2511.14295](https://huggingface.co/papers/2511.14295)
| 2025-11-19T13:59:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p18eqp/aralingbench/ | LowChance4561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p18eqp | false | null | t3_1p18eqp | /r/LocalLLaMA/comments/1p18eqp/aralingbench/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'x79sIdgFvP_ia3M0auvFIy16jM7XvjyyTpMneU1sfl8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x79sIdgFvP_ia3M0auvFIy16jM7XvjyyTpMneU1sfl8.png?width=108&crop=smart&auto=webp&s=acc7e98cdf7039f12638c962eb6d82b59e3e8b4c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x79sIdgFvP_ia3M0auvFIy16jM7XvjyyTpMneU1sfl8.png?width=216&crop=smart&auto=webp&s=c5c31c1d7838dd931bf1b8b30aba5ecae9a4e2d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x79sIdgFvP_ia3M0auvFIy16jM7XvjyyTpMneU1sfl8.png?width=320&crop=smart&auto=webp&s=a3e3fb35ddf39426b2a8f8bc7aa1915c09150b37', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x79sIdgFvP_ia3M0auvFIy16jM7XvjyyTpMneU1sfl8.png?width=640&crop=smart&auto=webp&s=762a68b251503d5c399df79bec558aaead8efa76', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x79sIdgFvP_ia3M0auvFIy16jM7XvjyyTpMneU1sfl8.png?width=960&crop=smart&auto=webp&s=a56230e05a92fd5b9c60163b71cc0bffa3de45da', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x79sIdgFvP_ia3M0auvFIy16jM7XvjyyTpMneU1sfl8.png?width=1080&crop=smart&auto=webp&s=eedb0bc3ca0997c97421344413f1e60b1218a39b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x79sIdgFvP_ia3M0auvFIy16jM7XvjyyTpMneU1sfl8.png?auto=webp&s=7e63b14ac7fb8e8ce643df96db5ccff36f8d51b1', 'width': 1200}, 'variants': {}}]} |
continuedev alternatives for local tab completion in vscode | 3 | I use Roo Code for agentic coding and continue for in-editor tab completion. Continue was never really good at that, but I feel it is getting worse lately. Very unreliable, just ignores me 30% of the time (idling), multiline completions don't work properly, output is not formatted properly etc. I disabled roo to make sure, there is no interference causing continue to behave poorly. Also I don't need any of the other features, continue offers, so it's overkill for just tab completion.
A while ago I had a pretty good results with tabbyml (way better than continue at the time). Unfortunately it comes with its own inference engine. I cannot use that because I can only run one coding model (on llama.cpp) at a time. Roo and continue are sharing the same model in my setup.
Any alternatives that just do completions and support llama.cpp and ideally work nicely with qwen coder models? | 2025-11-19T13:46:09 | https://www.reddit.com/r/LocalLLaMA/comments/1p183dv/continuedev_alternatives_for_local_tab_completion/ | mnze_brngo_7325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p183dv | false | null | t3_1p183dv | /r/LocalLLaMA/comments/1p183dv/continuedev_alternatives_for_local_tab_completion/ | false | false | self | 3 | null |
How does AI Video models "think?" | 0 | Smart brains here explained to me how LLM and Image models think in such a caveman-understandable-way I could finally make full use of it.
But what about the AI Video models?
What's going on inside their zeros and ones when they learn and then generate a video, even though knowing its the same as image models that needs to be trained on upside/downside to actually know what those even mean. I find the interpretation and "averaging of the input prompt" of these models much worse and weird than image models.
They don't understand physics, they mimic the patterns, but shouldn't it be able to do something like making an object or arm grow larger/super long just like they understand "sways from left to side" or "walks towards target x then stops"? Yet they don't have a clue... | 2025-11-19T13:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/1p17wob/how_does_ai_video_models_think/ | WEREWOLF_BX13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p17wob | false | null | t3_1p17wob | /r/LocalLLaMA/comments/1p17wob/how_does_ai_video_models_think/ | false | false | self | 0 | null |
Are Intel’s Arrow/Lunar Lake CPUs viable for running local LLMs, or is Ryzen still the better choice? | 1 | Hey r/LocalLLaMA
In the last months I was experimenting with differect LLMs (mainly GPT‑OSS‑120B). My current setup looks like this:
GPU: RTX 4090 Mobile
CPU: AMD Ryzen 9 7945HX (16‑core, 32 thread)
RAM: 96GB DDR5 SO-DIMM
Mode:l GPT‑OSS‑120B
Task: Generate a short poem in LMStudio in WIN11
Measured throughput: \~17 tokens/s
I’m considering an upgrade that would swap the CPU for Intel’s latest mobile offering and bump the GPU to the new RTX 5090 Mobile. Specifically:
CPU: Intel 275HX
GPU: RTX 5090 Mobile (≈30 % more CUDA cores & faster tensor ops than the 4090 Mobile)
My questions:
CPU suitability:
How does the Arrow/Lunar Lake architecture compare to Zen 4 when it comes to the heavy matrix‑multiply / attention workloads that LLM inference runs?
Are there any known bottlenecks (e.g., memory bandwidth, cache latency) that would make Intel CPUs a liability for local LLMs?
Overall performance impact:
Assuming the RTX 5090 Mobile gives me roughly a 30 % raw GPU speedup, will the Intel i9‑275HX be able to keep the GPU fed well enough to translate that into a noticeable token‑per‑second gain?
Or would the Ryzen 7945HX still be the more balanced choice because of its higher core count and better multi‑threaded efficiency?
TL;DR: With a 4090 Mobile + Ryzen 7945HX I’m at \~17 t/s. If I move to an Intel 275HX + RTX 5090 Mobile, will I see a real speed bump, or am I better off staying with AMD?
Any benchmark data, personal experience, or even educated speculation would be hugely appreciated! Thanks in advance. | 2025-11-19T13:29:07 | https://www.reddit.com/r/LocalLLaMA/comments/1p17p4r/are_intels_arrowlunar_lake_cpus_viable_for/ | AIMasterChief | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p17p4r | false | null | t3_1p17p4r | /r/LocalLLaMA/comments/1p17p4r/are_intels_arrowlunar_lake_cpus_viable_for/ | false | false | self | 1 | null |
Meta AI at it's finest | 0 | help me | 2025-11-19T13:21:09 | https://www.reddit.com/gallery/1p17iok | what-th-BOOM | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p17iok | false | null | t3_1p17iok | /r/LocalLLaMA/comments/1p17iok/meta_ai_at_its_finest/ | false | false | 0 | null | |
Meta AI at it's finest | 0 | help me | 2025-11-19T13:20:54 | https://www.reddit.com/gallery/1p17ihl | what-th-BOOM | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p17ihl | false | null | t3_1p17ihl | /r/LocalLLaMA/comments/1p17ihl/meta_ai_at_its_finest/ | false | false | 0 | null | |
First Step: Large Language Model / Next Step: Structure-Based Intelligence Model | 0 | Large Language Models represent the first stage of machine intelligence.
They generate language, predict tokens, and produce fluent answers.
This linguistic capability is powerful — but it is only step one.
We believe the next step is clear: AI must move from prediction to internal simulation. From outputs to processes. From sentences to structure.
Human intelligence is not built on next-token prediction.
It is built on running internal processes, simulating future scenarios, evaluating threats and opportunities, and choosing actions that support survival, continuity, and development.
This is why the second stage of AI must be Structure-Based Intelligence.
Structure-Based Intelligence maintains internal states, performs dynamic reasoning across time, evaluates uncertainty, organizes cognition as a structured process, shifts between active / reactive / dormant states, and — most importantly — internally pre-enacts the future before acting.
LLMs are storytellers of possible realities.
Structure-Based Intelligence is an agent that lives through its modeled reality.
This is the next step AI will realize — the architecture that turns prediction into intelligence. | 2025-11-19T13:09:02 | https://www.reddit.com/r/LocalLLaMA/comments/1p178ul/first_step_large_language_model_next_step/ | Hefty_Document_9466 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p178ul | false | null | t3_1p178ul | /r/LocalLLaMA/comments/1p178ul/first_step_large_language_model_next_step/ | false | false | self | 0 | null |
How to evaluate LLMs using LM-harness ? | 3 | LM-harness supports various benchmarks and Hugging Face models. However, how can we evaluate with hugging face inference APIs instead of loading the models locally. Does anyone have an idea to use lm-harness with hugging face inference API let me know please. | 2025-11-19T13:08:38 | https://www.reddit.com/r/LocalLLaMA/comments/1p178j7/how_to_evaluate_llms_using_lmharness/ | Lonely-Highlight-447 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p178j7 | false | null | t3_1p178j7 | /r/LocalLLaMA/comments/1p178j7/how_to_evaluate_llms_using_lmharness/ | false | false | self | 3 | null |
I built a hardened multi-tenant agent orchestrator because existing frameworks kept breaking in production. | 2 | I’ve been working on agent systems for months and kept hitting the same problems:
most popular frameworks feel like prototypes — no authentication, no tenancy,
no audit trail, no redaction, no way to safely run real workloads.
So I built **Trident Engine**.
It’s a local-first, production-grade orchestrator with:
* HMAC-signed agent requests
* Tenant isolation + scoped storage
* Automatic PII redaction
* Deterministic routing and error handling
* GPU/CPU safe execution
* Clean SDK + simple config
This isn’t a toy agent loop.
It’s the thing I needed to ship my own projects safely.
If anyone here is building local agents, running LLaMA/Ollama workloads,
or experimenting with small multi-agent systems, would love feedback. | 2025-11-19T13:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/1p1785w/i_built_a_hardened_multitenant_agent_orchestrator/ | Substantial_Ad5570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1785w | false | null | t3_1p1785w | /r/LocalLLaMA/comments/1p1785w/i_built_a_hardened_multitenant_agent_orchestrator/ | false | false | self | 2 | null |
I built an API to handle document versioning for RAG (so I stop burning embedding credits) | 3 | 2025-11-19T13:06:58 | https://www.raptordata.dev | YummyJuice | raptordata.dev | 1970-01-01T00:00:00 | 0 | {} | 1p17777 | false | null | t3_1p17777 | /r/LocalLLaMA/comments/1p17777/i_built_an_api_to_handle_document_versioning_for/ | false | false | default | 3 | null | |
tried a memory system for local llama and its kinda interesting | 9 | Running Llama 3 8B (Q4 quant) on my 3090 with ollama for coding assistant stuff. Been hitting issues with longer sessions where it loses track of earlier context.
Tried summarization but it loses too much detail. Tried vector DB with embeddings but didnt work well for my use case. I know I could use RoPE scaling to extend context but that gets slow and eats VRAM fast on my setup.
The problem isnt just context size, its that the model doesnt actually maintain state between turns.
Found this memory system called EverMemOS on github couple days ago. Took like 3 hours to get it working because the docs are kinda sparse. Had to mess with the config to get it running with ollama.
The approach is different from RAG. Instead of retrieving chunks, it does some kind of state management. Not totally sure how it works under the hood, something about maintaining context differently.
Did some testing. Base setup starts losing coherence around turn 15-20 in my tests (pretty long responses each turn, like 200-300 tokens). With the memory system went to turn 40+ and it still tracked earlier context. VRAM usage went up about 3GB. Base Llama 3 8B Q4 uses \~6GB, with the memory system it goes to \~9GB.
Whats interesting is it seems to understand references better. Like if I mention something from turn 8 at turn 35, it actually gets it. Not perfect tho, still messes up sometimes.
Been running it for 2 days now. Not sure if its worth the extra VRAM for my use case but the approach seems promising.
Code is here if anyone wants to try: github.com/EverMind-AI/EverMemOS
Setup is a bit annoying tho. You need decent VRAM, probably 10GB+ depending on your model size. Gotta configure it manually. Says it works with different backends but I only tried it with ollama.
Has anyone tried memory systems vs just extending context? Wondering if theres better approaches out there. | 2025-11-19T13:00:33 | https://www.reddit.com/r/LocalLLaMA/comments/1p1724h/tried_a_memory_system_for_local_llama_and_its/ | Independent_Plum_489 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1724h | false | null | t3_1p1724h | /r/LocalLLaMA/comments/1p1724h/tried_a_memory_system_for_local_llama_and_its/ | false | false | self | 9 | null |
Why do you use open-source LLMs ? | 40 | Hey,
I’ve been following the progress of AI closely since AlphaGo (crazy how much progress we’ve had since then !), reading papers and such, but I’ve never really experimented in the technical side of things.
After stumbling on this sub, I’m really curious. Why do you guys do it ? What do you find fun about it ? What’s your use cases ?
Thanks for your answers :)
| 2025-11-19T12:37:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p16kxx/why_do_you_use_opensource_llms/ | MrTorgue7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p16kxx | false | null | t3_1p16kxx | /r/LocalLLaMA/comments/1p16kxx/why_do_you_use_opensource_llms/ | false | false | self | 40 | null |
Whyyy? | 0 | 2025-11-19T12:36:09 | chriskevini | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p16jly | false | null | t3_1p16jly | /r/LocalLLaMA/comments/1p16jly/whyyy/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'gc7xivalk72g1', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/gc7xivalk72g1.png?width=108&crop=smart&auto=webp&s=cda96dd778868676a617319dbb0840d17223b486', 'width': 108}, {'height': 54, 'url': 'https://preview.redd.it/gc7xivalk72g1.png?width=216&crop=smart&auto=webp&s=258ef0449ced23ef458482a3e16f2df678f422a3', 'width': 216}, {'height': 80, 'url': 'https://preview.redd.it/gc7xivalk72g1.png?width=320&crop=smart&auto=webp&s=e7aaf5ed1476efdd1fe570663607586643318201', 'width': 320}, {'height': 161, 'url': 'https://preview.redd.it/gc7xivalk72g1.png?width=640&crop=smart&auto=webp&s=3b19571e604c8729e926c12fa7783431db9b1066', 'width': 640}, {'height': 242, 'url': 'https://preview.redd.it/gc7xivalk72g1.png?width=960&crop=smart&auto=webp&s=83c038dcafc8db53d76ba48413cdc70c8e792853', 'width': 960}], 'source': {'height': 258, 'url': 'https://preview.redd.it/gc7xivalk72g1.png?auto=webp&s=1fb3aac500c7407ed6c6d9328940828236901c25', 'width': 1023}, 'variants': {}}]} | ||
Mistral right now, watching Gemini 3 drop. | 0 | 2025-11-19T12:32:40 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p16h38 | false | null | t3_1p16h38 | /r/LocalLLaMA/comments/1p16h38/mistral_right_now_watching_gemini_3_drop/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '4p6jn484k72g1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/4p6jn484k72g1.jpeg?width=108&crop=smart&auto=webp&s=f0953c2377fbb897e9866311b16490baa3683049', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/4p6jn484k72g1.jpeg?width=216&crop=smart&auto=webp&s=14a49420b8f12c77367de4c441dd2d3fe3c1a0ee', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/4p6jn484k72g1.jpeg?width=320&crop=smart&auto=webp&s=1a80d4c9c9e6600b96f390ae1a15cf20ccdc9bc2', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/4p6jn484k72g1.jpeg?width=640&crop=smart&auto=webp&s=4210b2670cc26b018cc956dfd8d76b1e0f49a1fa', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/4p6jn484k72g1.jpeg?width=960&crop=smart&auto=webp&s=b438ca97523a966c3b77150d6c4e992a63154848', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/4p6jn484k72g1.jpeg?auto=webp&s=edc4f5f9c1b6924d98f79a0a46ffce6a7e7bf529', 'width': 1024}, 'variants': {}}]} | ||
Mistral right now. | 0 | 2025-11-19T12:18:52 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p166t6 | false | null | t3_1p166t6 | /r/LocalLLaMA/comments/1p166t6/mistral_right_now/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'usbdzsknh72g1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/usbdzsknh72g1.jpeg?width=108&crop=smart&auto=webp&s=210ead7013f9ffb49ac3962c2c98f32c00995a65', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/usbdzsknh72g1.jpeg?width=216&crop=smart&auto=webp&s=f9bb97bd62136e40b08e5992b515a654cb4466ea', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/usbdzsknh72g1.jpeg?width=320&crop=smart&auto=webp&s=f359a7d8f7127ccd28a2ca7b2cc93deabb24163c', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/usbdzsknh72g1.jpeg?width=640&crop=smart&auto=webp&s=4e6af8d1378619e9f082bc260ce2e684bf7fd86f', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/usbdzsknh72g1.jpeg?width=960&crop=smart&auto=webp&s=73103fcd115c41d945e589a859eb31712bf0b520', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/usbdzsknh72g1.jpeg?auto=webp&s=3e9f248de703763e9123c288da1caf7479f42ac8', 'width': 1024}, 'variants': {}}]} | ||
[Release] DragonMemory: 16× semantic compression for local RAG context (open-source, AGPL) | 12 | Hey everyone,
I’ve been experimenting with a small “memory engine” for local LLM setups and just open-sourced it:
\*\*DragonMemory – a 16× semantic compression layer for RAG contexts\*\*, built to sit in front of your local LLaMA (or any other backend).
\*\*Repo:\*\* [DragonMemory](https://github.com/Freeky7819/DragonMemory)
\### What it does (in plain terms)
Instead of throwing raw document chunks straight into a vector DB, DragonMemory:
\- takes \*\*token embeddings\*\* for a chunk,
\- compresses them \*\*16:1 along the sequence dimension\*\* (16 token embeddings → 1 “latent” vector),
\- learns to reconstruct the \*sentence-level\* meaning from that latent,
\- and then uses these compressed vectors for RAG retrieval.
So it’s not just dropping tokens or doing scalar quantization – it’s a \*\*learned sequence compressor\*\* that tries to preserve the original embedding space.
You can still use your usual stack (Ollama / local LLaMA / whatever) as the generator. DragonMemory just gives it a denser, cheaper memory.
\### How it compares to quantization
A quick summary of how I see it:
\- \*\*Quantization\*\*:
‣ shrinks \*\*each vector\*\* (fewer bits / lower precision),
‣ usually doesn’t model sequence structure explicitly.
\- \*\*DragonMemory\*\*:
‣ shrinks the \*\*sequence itself\*\* (16 token embeddings → 1 latent),
‣ is trained to keep \*\*sentence-level semantics\*\* (cosine similarity) and retrieval quality.
You can actually \*\*stack them\*\*: first compress sequences with DragonMemory, then quantize the resulting latent vectors if you want even more savings.
\### What I’ve measured so far
All numbers are from my local experiments (no cherry-picking, full eval scripts in the repo):
\- \*\*Compression ratio:\*\* 1:16 along the sequence.
\- \*\*Teacher model:\*\* MiniLM-style sentence embeddings.
\- \*\*Semantic reconstruction (sentence cosine):\*\*
\- Wikitext-2: \~\*\*0.90\*\* cosine after 16× compression.
\- Technical report (Slovenian): \~0.85.
\- Long-form literature sample (Frankenstein): \~0.88–0.89.
\*\*RAG recall (on internal tests):\*\*
\- self-recall@1 = 1.0 across datasets (gets the original chunk back),
\- partial-recall@3 in the \~0.6–1.0 range depending on corpus (technical docs vs. literature).
Everything runs locally; license is \*\*AGPL-3.0\*\* (I want it to stay open and not silently disappear into closed SaaS backends).
\### Limitations / honest notes
\- This is \*\*not\*\* a drop-in replacement for Faiss, Chroma, etc. – it’s a \*\*layer in front of them\*\*.
\- It’s focused on \*\*semantic retrieval\*\*, not bit-perfect reconstruction of the original text.
\- It’s early-stage research code, not a polished commercial product (yet) – expect rough edges.
\### Why I’m posting here
Local LLaMA setups live or die by context and memory cost. I wanted to see how far a learned sequence compressor can go before recall quality breaks – and 16× with decent cosine surprised me.
If anyone here:
\- wants to benchmark this on their own RAG pipeline,
\- has feedback on the architecture or eval setup,
\- or sees obvious ways to plug it into existing LocalLLaMA stacks (text-gen-webui, llama.cpp pipelines, etc.),
I’d love to hear it.
Happy to answer questions and share more detailed logs if useful. | 2025-11-19T12:03:57 | https://www.reddit.com/r/LocalLLaMA/comments/1p15wbk/release_dragonmemory_16_semantic_compression_for/ | freeky78 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p15wbk | false | null | t3_1p15wbk | /r/LocalLLaMA/comments/1p15wbk/release_dragonmemory_16_semantic_compression_for/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '9rgCojWeCtmzhvtH7vF3ZKsxPHOTdL7rcCbwPAZUJTs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9rgCojWeCtmzhvtH7vF3ZKsxPHOTdL7rcCbwPAZUJTs.png?width=108&crop=smart&auto=webp&s=65c195763ff72a586483017db9f5d8f132e937ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9rgCojWeCtmzhvtH7vF3ZKsxPHOTdL7rcCbwPAZUJTs.png?width=216&crop=smart&auto=webp&s=820092358ea341c570f98742f7022527f725fd10', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9rgCojWeCtmzhvtH7vF3ZKsxPHOTdL7rcCbwPAZUJTs.png?width=320&crop=smart&auto=webp&s=b9d781ff6a038a7c04413e28df871a72e5e00dd6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9rgCojWeCtmzhvtH7vF3ZKsxPHOTdL7rcCbwPAZUJTs.png?width=640&crop=smart&auto=webp&s=aac6698be2cd5abe0d0c05442736ba05c5e3e764', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9rgCojWeCtmzhvtH7vF3ZKsxPHOTdL7rcCbwPAZUJTs.png?width=960&crop=smart&auto=webp&s=7b094173f97a3f9b0946087a6e1dc047de08e5f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9rgCojWeCtmzhvtH7vF3ZKsxPHOTdL7rcCbwPAZUJTs.png?width=1080&crop=smart&auto=webp&s=c8a4728e274210e942be9a3e8d36607986104ec0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9rgCojWeCtmzhvtH7vF3ZKsxPHOTdL7rcCbwPAZUJTs.png?auto=webp&s=2d70d9e6b569cbf2ebfdb7f80870ce8e0af5a283', 'width': 1200}, 'variants': {}}]} |
Is 64GB vs 128GB ($700) Worth It? | 7 | I just picked up an M4 Max Mac Studio with 64GB Unified memory. It runs great but I obviously can’t run models like gpt-oss-120b. Considering the bump to 128GB is an additional $700, is it honestly worth it to upgrade? Any feedback is greatly appreciated.
[View Poll](https://www.reddit.com/poll/1p15qcm) | 2025-11-19T11:55:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p15qcm/is_64gb_vs_128gb_700_worth_it/ | PersonSuitTV | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p15qcm | false | null | t3_1p15qcm | /r/LocalLLaMA/comments/1p15qcm/is_64gb_vs_128gb_700_worth_it/ | false | false | self | 7 | null |
[Release] Memory-Isolated Recursive Compression (MIRC). A local-first probabilistic compression utility for Apple Silicon. Research Preview (Open Source) | 5 | Kaipsul is an independent AI research lab. We are releasing MIRC (Memory-Isolated Recursive Compression) as a research preview.
As documents get longer, language models distribute attention across exponentially more tokens. This dilutes attention mechanisms, degrading retrieval accuracy and instruction following.
MIRC is a native macOS utility designed to increase signal density. It utilizes algorithmic segmentation and probabilistic compression to process datasets locally on Apple Silicon.
It is written in pure Swift and leverages parallel processing on the Neural Engine directly via FoundationModels.
**Links**:
GitHub (MIT): [https://github.com/Kaipsul/MIRC](https://github.com/Kaipsul/MIRC)
Website: [https://kaipsul.com](https://kaipsul.com)
**Requirements**:
macOS 26.0+
Apple Silicon (M1+) | 2025-11-19T11:27:06 | https://www.reddit.com/r/LocalLLaMA/comments/1p1563m/release_memoryisolated_recursive_compression_mirc/ | mn29 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1563m | false | null | t3_1p1563m | /r/LocalLLaMA/comments/1p1563m/release_memoryisolated_recursive_compression_mirc/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '-elyBYh9SC61UsMujCaec6BAQt8b03bS3uPZvaAeEs0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-elyBYh9SC61UsMujCaec6BAQt8b03bS3uPZvaAeEs0.png?width=108&crop=smart&auto=webp&s=f5df4b3c961b5d95d88adfeccc66011204ad0546', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-elyBYh9SC61UsMujCaec6BAQt8b03bS3uPZvaAeEs0.png?width=216&crop=smart&auto=webp&s=ab013e1e15dcfe212a957a2a6d5986c239b40464', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-elyBYh9SC61UsMujCaec6BAQt8b03bS3uPZvaAeEs0.png?width=320&crop=smart&auto=webp&s=789c773d426b902664d0e3c9f94227ea8d21f816', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-elyBYh9SC61UsMujCaec6BAQt8b03bS3uPZvaAeEs0.png?width=640&crop=smart&auto=webp&s=7aab6c8d59d6d31cc3f87a51af9047c8d3abef58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-elyBYh9SC61UsMujCaec6BAQt8b03bS3uPZvaAeEs0.png?width=960&crop=smart&auto=webp&s=786e9d997b1972756494ce30beaac6ae949ec3f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-elyBYh9SC61UsMujCaec6BAQt8b03bS3uPZvaAeEs0.png?width=1080&crop=smart&auto=webp&s=3ed8d16dc46d7e689d89bf62a2630a0d588a5e08', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-elyBYh9SC61UsMujCaec6BAQt8b03bS3uPZvaAeEs0.png?auto=webp&s=e39a91ccbc63b5b8eb269c78960c5871f59e1852', 'width': 1200}, 'variants': {}}]} |
Ai2 DR Tulu: An open, end-to-end training recipe for long-form deep research | 1 | Ai2 on 𝕏: [https://x.com/allen\_ai/status/1990803193014395004](https://x.com/allen_ai/status/1990803193014395004)
Blog: [https://allenai.org/blog/dr-tulu](https://allenai.org/blog/dr-tulu)
Paper: [http://allenai.org/papers/drtulu](http://allenai.org/papers/drtulu)
Models and data: [https://huggingface.co/collections/rl-research/dr-tulu](https://huggingface.co/collections/rl-research/dr-tulu)
Code: [https://github.com/rlresearch/DR-Tulu](https://github.com/rlresearch/DR-Tulu) | 2025-11-19T11:23:28 | https://allenai.org/blog/dr-tulu | Nunki08 | allenai.org | 1970-01-01T00:00:00 | 0 | {} | 1p153kr | false | null | t3_1p153kr | /r/LocalLLaMA/comments/1p153kr/ai2_dr_tulu_an_open_endtoend_training_recipe_for/ | false | false | default | 1 | null |
3gb vram, i5 4570, ddr3 16gb ram | 1 | I have this spare computer lying around that has the specs listed above. I saw Pewdiepie’s video and I wondered what I could run on it. Based on my research it seems I can run LLMs with around 2B-3B parameters. But I really want it to replace my current AI setup which is just ChatGPT, I don’t like them having all of my sensitive data. At this point anything is fine because I just want it to solve a few math problems, help me with linux and be fast enough that it’s actually usable (preferably 20 tokens/s.)
What could be some models I could try out on this system so that I can choose the best one? | 2025-11-19T11:17:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p14zg9/3gb_vram_i5_4570_ddr3_16gb_ram/ | bingbongbeeinnit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p14zg9 | false | null | t3_1p14zg9 | /r/LocalLLaMA/comments/1p14zg9/3gb_vram_i5_4570_ddr3_16gb_ram/ | false | false | self | 1 | null |
Ai2 Deep Research Tulu (DR Tulu): 8B deep research agent model | 1 | Ai2 on 𝕏: [https://x.com/allen\_ai/status/1990803193014395004](https://x.com/allen_ai/status/1990803193014395004)
Blog: [https://allenai.org/blog/dr-tulu](https://allenai.org/blog/dr-tulu)
Paper: [http://allenai.org/papers/drtulu](http://allenai.org/papers/drtulu)
Models and data: [https://huggingface.co/collections/rl-research/dr-tulu](https://huggingface.co/collections/rl-research/dr-tulu)
Code: [https://github.com/rlresearch/DR-Tulu](https://github.com/rlresearch/DR-Tulu) | 2025-11-19T11:12:11 | https://v.redd.it/mfdk7mye572g1 | Nunki08 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p14vz5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mfdk7mye572g1/DASHPlaylist.mpd?a=1766142744%2CYWNmZDQ0ZWUxZDgwYWIzYzkzNjMxYWEzMjZjZmUxYWIzMWI1YTQ3NTM2NTdjYWJhN2M4MGY0ZTY0NWE3NjgxNg%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/mfdk7mye572g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/mfdk7mye572g1/HLSPlaylist.m3u8?a=1766142744%2CZjExYTI5ZjMyY2IzYjI2M2FkNmJkOTBhOTVlZDE2ODllNDVmNzU0NzkxY2M3NWVjYjMxNzZlZmY3NjhjZjA4Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mfdk7mye572g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1p14vz5 | /r/LocalLLaMA/comments/1p14vz5/ai2_deep_research_tulu_dr_tulu_8b_deep_research/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dGxsczVxeWU1NzJnMcAY2XkQDBAhKY1HkNzaHfu44Gse2gRIXROQIZ_GU8fK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dGxsczVxeWU1NzJnMcAY2XkQDBAhKY1HkNzaHfu44Gse2gRIXROQIZ_GU8fK.png?width=108&crop=smart&format=pjpg&auto=webp&s=6308d03d87cdd5ea63093a46b508140aab4f7bb3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dGxsczVxeWU1NzJnMcAY2XkQDBAhKY1HkNzaHfu44Gse2gRIXROQIZ_GU8fK.png?width=216&crop=smart&format=pjpg&auto=webp&s=d4845899fe437946d2f84bf29ec5dc88da0d41a5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dGxsczVxeWU1NzJnMcAY2XkQDBAhKY1HkNzaHfu44Gse2gRIXROQIZ_GU8fK.png?width=320&crop=smart&format=pjpg&auto=webp&s=869a287572cdacdc0cf4bf7f2802686aed4619ed', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dGxsczVxeWU1NzJnMcAY2XkQDBAhKY1HkNzaHfu44Gse2gRIXROQIZ_GU8fK.png?width=640&crop=smart&format=pjpg&auto=webp&s=f74227890b99a9595c74f1c0c52024ecfa403b31', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dGxsczVxeWU1NzJnMcAY2XkQDBAhKY1HkNzaHfu44Gse2gRIXROQIZ_GU8fK.png?width=960&crop=smart&format=pjpg&auto=webp&s=9dfd31a88ad7704205b39869f87bc62268c619f7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dGxsczVxeWU1NzJnMcAY2XkQDBAhKY1HkNzaHfu44Gse2gRIXROQIZ_GU8fK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=64c4f23f017bea182ab12a718ed5a333caca326f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dGxsczVxeWU1NzJnMcAY2XkQDBAhKY1HkNzaHfu44Gse2gRIXROQIZ_GU8fK.png?format=pjpg&auto=webp&s=9e5ba71001a895e2f31636a9ed36198a17263d0d', 'width': 1920}, 'variants': {}}]} | |
A robot with a "personality" | 1 | [removed] | 2025-11-19T11:00:50 | https://www.reddit.com/r/LocalLLaMA/comments/1p14n91/a_robot_with_a_personality/ | No_Dot9595 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p14n91 | false | null | t3_1p14n91 | /r/LocalLLaMA/comments/1p14n91/a_robot_with_a_personality/ | false | false | self | 1 | null |
What model currently offers the best consistency for complex text-processing tasks? | 2 | I’m looking for recommendations on a locally-runnable model that prioritizes output consistency over raw creativity.
Specifically, I need something that can handle structured text analysis, follow detailed instructions reliably, and produce stable results across repeated prompts.
Key criteria I care about:
• deterministic or near-deterministic behavior at low temperature
• strong reasoning on long texts
• ability to maintain internal structure across multiple runs
• low hallucination rate
• good performance on CPU/GPU without insane hardware requirements
I’ve tested a few mid-sized LLMs already, but the variation between runs is still too high for my use case.
What would you say is the current best open-source model for this kind of task?
Would love to hear real benchmarks or personal experience. | 2025-11-19T10:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/1p14kp0/what_model_currently_offers_the_best_consistency/ | braveloop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p14kp0 | false | null | t3_1p14kp0 | /r/LocalLLaMA/comments/1p14kp0/what_model_currently_offers_the_best_consistency/ | false | false | self | 2 | null |
are you seeking final end of life | 0 | Practical Explanation ( For Example ) :- \`1st of all can you tell me every single seconds detail from that time when you born ?? ( i need every seconds detail ?? that what- what you have thought and done on every single second )
can you tell me every single detail of your \`1 cheapest Minute Or your whole hour, day, week, month, year or your whole life ??
if you are not able to tell me about this life then what proof do you have that you didn't forget your past ? and that you will not forget this present life in the future ?
that is Fact that Supreme Lord Krishna exists but we posses no such intelligence to understand him.
there is also next life. and i already proved you that no scientist, no politician, no so-called intelligent man in this world is able to understand this Truth. cuz they are imagining. and you cannot imagine what is god, who is god, what is after life etc.
\_\_\_\_\_\_\_
for example :Your father existed before your birth. you cannot say that before your birth your father don,t exists.
So you have to ask from mother, "Who is my father?" And if she says, "This gentleman is your father," then it is all right. It is easy.
Otherwise, if you makes research, "Who is my father?" go on searching for life; you'll never find your father.
( now maybe...maybe you will say that i will search my father from D.N.A, or i will prove it by photo's, or many other thing's which i will get from my mother and prove it that who is my Real father.{ So you have to believe the authority. who is that authority ? she is your mother. you cannot claim of any photo's, D.N.A or many other things without authority ( or ur mother ).
if you will show D.N.A, photo's, and many other proofs from other women then your mother. then what is use of those proofs ??} )
same you have to follow real authority. "Whatever You have spoken, I accept it," Then there is no difficulty. And You are accepted by Devala, Narada, Vyasa, and You are speaking Yourself, and later on, all the acaryas have accepted. Then I'll follow.
I'll have to follow great personalities. The same reason mother says, this gentleman is my father. That's all. Finish business. Where is the necessity of making research? All authorities accept Krsna, the Supreme Personality of Godhead. You accept it; then your searching after God is finished.
Why should you waste your time?
\_\_\_\_\_\_\_
all that is you need is to hear from authority ( same like mother ). and i heard this truth from authority " Srila Prabhupada " he is my spiritual master.
im not talking these all things from my own.
\_\_\_\_\_\_\_\_\_\_\_
in this world no \`1 can be Peace full. this is all along Fact.
cuz we all are suffering in this world 4 Problems which are Disease, Old age, Death, and Birth after Birth.
tell me are you really happy ?? you can,t be happy if you will ignore these 4 main problem. then still you will be Forced by Nature.
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
if you really want to be happy then follow these 6 Things which are No illicit s.ex, No g.ambling, No d.rugs ( No tea & coffee ), No meat-eating ( No onion & garlic's )
5th thing is whatever you eat \`1st offer it to Supreme Lord Krishna. ( if you know it what is Guru parama-para then offer them food not direct Supreme Lord Krishna )
and 6th " Main Thing " is you have to Chant " hare krishna hare krishna krishna krishna hare hare hare rama hare rama rama rama hare hare ".
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
If your not able to follow these 4 things no illicit s.ex, no g.ambling, no d.rugs, no meat-eating then don,t worry but chanting of this holy name ( Hare Krishna Maha-Mantra ) is very-very and very important.
Chant " hare krishna hare krishna krishna krishna hare hare hare rama hare rama rama rama hare hare " and be happy.
if you still don,t believe on me then chant any other name for 5 Min's and chant this holy name for 5 Min's and you will see effect. i promise you it works And chanting at least 16 rounds ( each round of 108 beads ) of the Hare Krishna maha-mantra daily.
\_\_\_\_\_\_\_\_\_\_\_\_
Here is no Question of Holy Books quotes, Personal Experiences, Faith or Belief. i accept that Sometimes Faith is also Blind. Here is already Practical explanation which already proved that every\`1 else in this world is nothing more then Busy Foolish and totally idiot.
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
Source(s):
every \`1 is already Blind in this world and if you will follow another Blind then you both will fall in hole. so try to follow that person who have Spiritual Eyes who can Guide you on Actual Right Path. ( my Authority & Guide is my Spiritual Master " Srila Prabhupada " )
\_\_\_\_\_\_\_\_\_\_\_\_\_
if you want to see Actual Purpose of human life then see this link : ( triple w ( d . o . t ) asitis ( d . o . t ) c . o . m {Bookmark it })
read it complete. ( i promise only readers of this book that they { he/she } will get every single answer which they want to know about why im in this material world, who im, what will happen after this life, what is best thing which will make Human Life Perfect, and what is perfection of Human Life. ) purpose of human life is not to live like animal cuz every\`1 at present time doing 4 thing which are sleeping, eating, s.ex & fear. purpose of human life is to become freed from Birth after birth, Old Age, Disease, and Death. | 2025-11-19T10:38:03 | https://www.reddit.com/r/LocalLLaMA/comments/1p1491d/are_you_seeking_final_end_of_life/ | pani343 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1491d | false | null | t3_1p1491d | /r/LocalLLaMA/comments/1p1491d/are_you_seeking_final_end_of_life/ | false | false | self | 0 | null |
Our AI assistant keeps getting jailbroken and it’s becoming a security nightmare | 297 | We built an internal AI helper for our support team, and no matter how many guardrails we add, people keep finding ways to jailbreak it. Employees aren’t doing it maliciously, they’re just curious and want to see what happens, but suddenly the assistant is spitting out stuff it’s absolutely not supposed to.
We’ve tried regex filters, prompt-hardening, even manual review nothing sticks.
Feels like every week we patch one exploit and three more show up.
Anyone actually found a scalable way to test and secure an AI model before it goes public?
| 2025-11-19T10:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1p145pj/our_ai_assistant_keeps_getting_jailbroken_and_its/ | Comfortable_Clue5430 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p145pj | false | null | t3_1p145pj | /r/LocalLLaMA/comments/1p145pj/our_ai_assistant_keeps_getting_jailbroken_and_its/ | false | false | self | 297 | null |
MMaDA-Parallel: Parallel Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation | 7 | 2025-11-19T10:30:09 | https://github.com/tyfeld/MMaDA-Parallel | nnxnnx | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p144b7 | false | null | t3_1p144b7 | /r/LocalLLaMA/comments/1p144b7/mmadaparallel_parallel_multimodal_large_diffusion/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'BNfLwSNQqEYYvYecpIB7R48pXR8iUAP_F5JnpzrQUxw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BNfLwSNQqEYYvYecpIB7R48pXR8iUAP_F5JnpzrQUxw.png?width=108&crop=smart&auto=webp&s=e2c45bf3381775c1628998b1ba590eefde70e9c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BNfLwSNQqEYYvYecpIB7R48pXR8iUAP_F5JnpzrQUxw.png?width=216&crop=smart&auto=webp&s=61eb70d2e69b46628d4ff0a0988e78a07bc3db7a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BNfLwSNQqEYYvYecpIB7R48pXR8iUAP_F5JnpzrQUxw.png?width=320&crop=smart&auto=webp&s=60118551debd0c66ec8f99800b681aadf16714f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BNfLwSNQqEYYvYecpIB7R48pXR8iUAP_F5JnpzrQUxw.png?width=640&crop=smart&auto=webp&s=8c0e3b5a66c656fb49a6dd75150951a439bbf532', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BNfLwSNQqEYYvYecpIB7R48pXR8iUAP_F5JnpzrQUxw.png?width=960&crop=smart&auto=webp&s=d3be0afdcff54323a7eeac4483c1750719e2e6fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BNfLwSNQqEYYvYecpIB7R48pXR8iUAP_F5JnpzrQUxw.png?width=1080&crop=smart&auto=webp&s=3eb68a0220ef9d901951652491b9684532b18368', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BNfLwSNQqEYYvYecpIB7R48pXR8iUAP_F5JnpzrQUxw.png?auto=webp&s=ce5e803fe8aa57e35d3b4ca8ec0052c0d9a1ac88', 'width': 1200}, 'variants': {}}]} | ||
I built an "Antigravity Workspace" for Gemini 3. It forces the AI to follow "Artifact-First" protocols automatically. (Open Source Template) | 0 | Hey everyone,
Like many of you, I've been playing with the new **Google Antigravity IDE** and **Gemini 3**.
I noticed that without strict prompting, the Agent tends to be a bit chaotic (creating files in random places, skipping tests, etc.).
So I built an opinionated **Workspace Template** that pre-loads the Agent with a "Senior Engineer" persona using the new .antigravity/rules.md system.
**What it does:**
* 🚀 **Zero-Config:** Just clone and open. The AI auto-detects the rules.
* 🧠 **Deep Think:** Forces Gemini 3 to output a <thought> block before writing code.
* 📂 **Artifact-First:** Forces the AI to write plans/logs into an artifacts/ folder first.
* 🛠️ **Ready-to-use:** Includes Docker & Makefile.
**Repo here:** [https://github.com/study8677/antigravity-workspace-template](https://github.com/study8677/antigravity-workspace-template)
It's fully open source (MIT). Would love to verify if this setup improves the "hallucination rate" for others too! Let me know what you think. | 2025-11-19T10:26:47 | https://www.reddit.com/r/LocalLLaMA/comments/1p142ey/i_built_an_antigravity_workspace_for_gemini_3_it/ | Direct-Employ-3290 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p142ey | false | null | t3_1p142ey | /r/LocalLLaMA/comments/1p142ey/i_built_an_antigravity_workspace_for_gemini_3_it/ | false | false | self | 0 | null |
Anyone on arm? | 3 | I don't mean only inference, but more about general ml/ai dev, do you see any limitations of using a arm cpu?
I'm thinking about repurposing gh200 into workstation during the day and cluster during the night.
What are your thoughts? | 2025-11-19T10:24:07 | https://www.reddit.com/r/LocalLLaMA/comments/1p140na/anyone_on_arm/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p140na | false | null | t3_1p140na | /r/LocalLLaMA/comments/1p140na/anyone_on_arm/ | false | false | self | 3 | null |
Is qwen3vl 235B is supposed to be this slow? | 0 | Heya, I managed to get access to server with 40G A100 and 96 RAM. Tried loading Qwen3-VL-235B-A22B-Thinking-GGUF UD-IQ3\_XXS using llama.cpp.
Configuration is: --ctx-size 40000 --n-cpu-moe 64 --prio 2 --temp 1.0 --repeat-penalty 1.0 --min-p 0.0 --top-k 20 --top-p 0.95 --presence\_penalty 0.0 --image-min-tokens 1024 --jinja --flash-attn on -ctk q8\_0 -ctv q8\_0
Takes most of vram, but output speed is 6.2 tps. I never tried MOE before, but from what I read I thought I would get at least 15. I did not find any comprehensive data on running this specific model not on huge cluster (outside of some guy running it at 2tps), so my question is, where my expectations too high?
Or am I missing something? | 2025-11-19T10:09:16 | https://www.reddit.com/r/LocalLLaMA/comments/1p13rbq/is_qwen3vl_235b_is_supposed_to_be_this_slow/ | shapic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p13rbq | false | null | t3_1p13rbq | /r/LocalLLaMA/comments/1p13rbq/is_qwen3vl_235b_is_supposed_to_be_this_slow/ | false | false | self | 0 | null |
How do I actually use the Ryzen AI chip… for anything? | 27 | I bought this mini PC mainly because of its “AI” capabilities, so… how am I actually supposed
to use the AMD Ryzen AI engine? I’d love to run a local LLM or add some kind of
facial-recognition tool to my media server, but I can’t seem to find any solid documentation
on how to enable ROCm or make use of the NPU in a homelab environment.
Spec:https://acemagic.com/products/acemagic-f3a-mini-pc
What am I missing here? Im on windows. | 2025-11-19T09:53:03 | https://www.reddit.com/r/LocalLLaMA/comments/1p13haw/how_do_i_actually_use_the_ryzen_ai_chip_for/ | Muted_Head_1636 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p13haw | false | null | t3_1p13haw | /r/LocalLLaMA/comments/1p13haw/how_do_i_actually_use_the_ryzen_ai_chip_for/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'RkLJTFxETWebbJiesprmKvDaBojuxDFymwH8r33dQnQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RkLJTFxETWebbJiesprmKvDaBojuxDFymwH8r33dQnQ.png?width=108&crop=smart&auto=webp&s=0ee5092961b6105895568dbf2bee61aad43cdcce', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/RkLJTFxETWebbJiesprmKvDaBojuxDFymwH8r33dQnQ.png?width=216&crop=smart&auto=webp&s=78067e0ac4d3d241d9238cd78f8b980fb0bcdf8b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/RkLJTFxETWebbJiesprmKvDaBojuxDFymwH8r33dQnQ.png?width=320&crop=smart&auto=webp&s=2dfca421ff6a6bdf6c5f33912b6d4aa6b20ef301', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/RkLJTFxETWebbJiesprmKvDaBojuxDFymwH8r33dQnQ.png?width=640&crop=smart&auto=webp&s=9ff42dd0862dc63fc199d74cc20b8575254fd526', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/RkLJTFxETWebbJiesprmKvDaBojuxDFymwH8r33dQnQ.png?width=960&crop=smart&auto=webp&s=5d1cc529d00067e6f58dd61c3464038dcc1b458e', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/RkLJTFxETWebbJiesprmKvDaBojuxDFymwH8r33dQnQ.png?width=1080&crop=smart&auto=webp&s=8160e2fefd40880cf08389a8956474123db7af5a', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/RkLJTFxETWebbJiesprmKvDaBojuxDFymwH8r33dQnQ.png?auto=webp&s=bb943dc46c0c890c1419e86c34195ba80ddf08c0', 'width': 1600}, 'variants': {}}]} |
Any H200 owners out there? | 0 | Does anyone own an H200.
Who are the trusted suppliers? Located in the US and quotes are varying 8-10k for the same hardware. | 2025-11-19T09:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1p1380r/any_h200_owners_out_there/ | itsthewolfe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p1380r | false | null | t3_1p1380r | /r/LocalLLaMA/comments/1p1380r/any_h200_owners_out_there/ | false | false | self | 0 | null |
Note: Gemini 3.0 Pro is less accessible than 2.5 Pro | 1 | [removed] | 2025-11-19T09:11:27 | https://www.reddit.com/r/LocalLLaMA/comments/1p12tex/note_gemini_30_pro_is_less_accessible_than_25_pro/ | Guardian-Spirit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p12tex | false | null | t3_1p12tex | /r/LocalLLaMA/comments/1p12tex/note_gemini_30_pro_is_less_accessible_than_25_pro/ | false | false | self | 1 | null |
Note: Gemini 3.0 Pro is less accessible than Gemini 2.5 Pro | 1 | [removed] | 2025-11-19T09:09:36 | https://www.reddit.com/r/LocalLLaMA/comments/1p12s8i/note_gemini_30_pro_is_less_accessible_than_gemini/ | Guardian-Spirit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p12s8i | false | null | t3_1p12s8i | /r/LocalLLaMA/comments/1p12s8i/note_gemini_30_pro_is_less_accessible_than_gemini/ | false | false | self | 1 | null |
Building real-time speech translation (VAD→ASR→MT→TTS) - struggling with latency | 5 | I'm also working on this. Trying to build a real-time speech translation system, but honestly the results are pretty rough so far. Really curious how commercial simultaneous interpretation systems manage to hit that claimed 3-second average for first-word latency.
It's just a weekend project at this point. My pipeline is VAD → ASR → MT → TTS. Tried using nllb-200-distilled-600M and Helsinki-NLP/opus-mt-en-x for translation but neither worked that well. Even though I went with Kokoro TTS (smallest parameter count), the overall TTS latency is still way too high.
\---
repo: [https://github.com/xunfeng1980/e2e-audio-mt](https://github.com/xunfeng1980/e2e-audio-mt) | 2025-11-19T09:09:02 | https://www.reddit.com/r/LocalLLaMA/comments/1p12rwx/building_realtime_speech_translation_vadasrmttts/ | Big_Fix_7606 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p12rwx | false | null | t3_1p12rwx | /r/LocalLLaMA/comments/1p12rwx/building_realtime_speech_translation_vadasrmttts/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'xMuDx6mMJT_SMnpt-lQ3e2X5XDsdj2wWJAijOuXo4YQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xMuDx6mMJT_SMnpt-lQ3e2X5XDsdj2wWJAijOuXo4YQ.png?width=108&crop=smart&auto=webp&s=18051f72a7e9f615744b27d19f8ed65834de85e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xMuDx6mMJT_SMnpt-lQ3e2X5XDsdj2wWJAijOuXo4YQ.png?width=216&crop=smart&auto=webp&s=32df9e84724f8e4ed2ba26945128eb60779d9b1a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xMuDx6mMJT_SMnpt-lQ3e2X5XDsdj2wWJAijOuXo4YQ.png?width=320&crop=smart&auto=webp&s=82fa36eb6e5a85b70e252142ce7e4b794fde09ee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xMuDx6mMJT_SMnpt-lQ3e2X5XDsdj2wWJAijOuXo4YQ.png?width=640&crop=smart&auto=webp&s=a7d3663558e8018319cc5d2b54716c72274d43ec', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xMuDx6mMJT_SMnpt-lQ3e2X5XDsdj2wWJAijOuXo4YQ.png?width=960&crop=smart&auto=webp&s=a039f381ba2b4325eee3eac400913c9e23478dbc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xMuDx6mMJT_SMnpt-lQ3e2X5XDsdj2wWJAijOuXo4YQ.png?width=1080&crop=smart&auto=webp&s=f9dc3b6515a7aa6996454843d60b4b279982ca90', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xMuDx6mMJT_SMnpt-lQ3e2X5XDsdj2wWJAijOuXo4YQ.png?auto=webp&s=5136eda87fdc6a02dfa196bb8f187c3d6a11dcbd', 'width': 1200}, 'variants': {}}]} |
RAM prices exploding should I grab old stock now for rag? | 20 | I need some advice.
I have 32GB RAM in my PC right now, but since it’s my work machine I usually have around 10GB free. I’m also running an RTX 3090.
I want to build RAG setups for two AI projects. I found a 96GB DDR5 6000MHz kit that’s still being sold for the *old price* (\~1690), and the store told me RAM prices are about to spike because the market is going crazy.
The idea is that if I buy the 96GB, I’ll probably sell my current 32GB kit.
My dilemma:
* I *can* rely on the OpenAI API and avoid running big models locally.
* But I’m scared the API costs will pile up over time and end up costing more than just buying the RAM once.
* On the other hand, maybe I don’t even need so much RAM if I mostly stick to OpenAI.
So I’m torn:
Should I buy the 96GB now before prices jump?
Or skip it and just rely on the API, even though long-term costs worry me?
Anyone with experience running local models or using OpenAI heavily your advice would help a lot. Thanks! | 2025-11-19T08:56:21 | https://www.reddit.com/r/LocalLLaMA/comments/1p12jz3/ram_prices_exploding_should_i_grab_old_stock_now/ | Working_Opposite4167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p12jz3 | false | null | t3_1p12jz3 | /r/LocalLLaMA/comments/1p12jz3/ram_prices_exploding_should_i_grab_old_stock_now/ | false | false | self | 20 | null |
Got free passes for a big Virtual GenAI summit (OpenAI, Google, Microsoft, LangChain etc.) | 0 | Hey folks,
Just a heads up, Packt is running a pretty stacked virtual GenAI summit called GenAI Nexus 2025 on Nov 20–21, and it actually looks legit. It’s two full days of sessions focused on things people here actually care about:
• Building and deploying real AI agents • RAG, A2A, context engineering, and other practical workflows • Live workshops, deep-dives, and case studies (not fluffy keynote stuff)
Speakers include people like Harrison Chase, Chip Huyen, Prof. Tom Yeh, Dr. Ali Arsanjani, plus a bunch more folks doing actual hands-on work in AI from OpenAI, Google, Microsoft, LangChain, etc.
If you’re into LLMs, agents, or just want to see how teams are actually shipping GenAI systems in the wild, this looks worth checking out.
I’ve got a small batch of free passes I can share with this community. If you want to attend, simply fill the registration and you’ll be sent the virtual summit link to join.
Link for registration in comment!
Let’s build cool stuff together. 🚀 | 2025-11-19T08:33:35 | alimhabidi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p126tx | false | null | t3_1p126tx | /r/LocalLLaMA/comments/1p126tx/got_free_passes_for_a_big_virtual_genai_summit/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'egCRvp2hSqa03XaPW4tuX0X32HRtlqH6zO6PlX20M7c', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/7zu40cigd62g1.jpeg?width=108&crop=smart&auto=webp&s=716652b9b40e79ba16a0c2bb94ccc9c6c320c25f', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/7zu40cigd62g1.jpeg?width=216&crop=smart&auto=webp&s=b79c81967589b6e446eccbfd385e697f0ae2572f', 'width': 216}, {'height': 130, 'url': 'https://preview.redd.it/7zu40cigd62g1.jpeg?width=320&crop=smart&auto=webp&s=ae89c96f8fad991fe423ec1dd8553fcf55e7e42c', 'width': 320}, {'height': 261, 'url': 'https://preview.redd.it/7zu40cigd62g1.jpeg?width=640&crop=smart&auto=webp&s=f45bb4d367cb343fa3822839c59fbd7c15ec88fc', 'width': 640}, {'height': 392, 'url': 'https://preview.redd.it/7zu40cigd62g1.jpeg?width=960&crop=smart&auto=webp&s=524aa0ffd8c686676eaea5aef628c1ab77d35086', 'width': 960}, {'height': 441, 'url': 'https://preview.redd.it/7zu40cigd62g1.jpeg?width=1080&crop=smart&auto=webp&s=3f790bec6f6ba4c01dc963d8fd38d04cf34cda66', 'width': 1080}], 'source': {'height': 481, 'url': 'https://preview.redd.it/7zu40cigd62g1.jpeg?auto=webp&s=7c1550466311ecd6162aae02c8db70f60f551081', 'width': 1177}, 'variants': {}}]} | ||
Movie subtitle translation into Bangla | 0 | I am a subtitle translator. I need to translate subtitles of 2/3 movies into Bangla (Bangladesh) for my client. The original languages are usually English, Hindi, Russian... Currently I am using ChatGPT for automation of the translations. I input chunks of 300 lines one by one into ChatGPT. Sometimes the quality of translation is so good. But most of the time it gives bad translations. Then it becomes very tough for me to deliver accurately translated subtitle to my client. I prefer quality and accuracy over speed. Please suggest how can I get good results in AI automated translations? Which AI would be best for me? | 2025-11-19T08:33:17 | https://www.reddit.com/r/LocalLLaMA/comments/1p126nr/movie_subtitle_translation_into_bangla/ | Lonewolf89bd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p126nr | false | null | t3_1p126nr | /r/LocalLLaMA/comments/1p126nr/movie_subtitle_translation_into_bangla/ | false | false | self | 0 | null |
TOON Format Explained: Why AI Teams Are Looking Beyond JSON | 1 | [removed] | 2025-11-19T08:19:37 | https://www.reddit.com/r/LocalLLaMA/comments/1p11yxj/toon_format_explained_why_ai_teams_are_looking/ | Normal-Emu-3443 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p11yxj | false | null | t3_1p11yxj | /r/LocalLLaMA/comments/1p11yxj/toon_format_explained_why_ai_teams_are_looking/ | false | false | self | 1 | null |
🚀 NVIDIA DGX Spark vs. Alternatives: Escaping the RTX 3060 (6GB) for Medical LLM Research | 0 | Hi r/LocalLLaMA 🚀 ,
I am currently struggling with my **medical LLM research** (language models only, no images/video) on my existing **RTX 3060 6GB laptop GPU**. As you can imagine, this is a **major bottleneck**—even simple LoRA experiments on small models are cumbersome due to the severe lack of VRAM. It's time to scale up.
Planned operations include: **Intensive fine-tuning (LoRA/QLoRA), distillation, and pruning/quantization** of large models (targeting 7B to 70B+) for clinical applications.
**I am mainly considering two directions for a new setup:**
1. **NVIDIA DGX Spark:** Full power, maximum VRAM, and complete compatibility with the **CUDA** ecosystem. This is the ideal solution to ensure research freedom when loading and optimizing large LLMs.
2. **AMD-based Alternatives (e.g., future Strix Halo/similar):** This option is theoretically cheaper, but **I honestly dread** the potential extra effort and debugging associated with **ROCm** and the general lack of ecosystem maturity compared to CUDA, especially for specialized LLM tasks (LoRA, QLoRA, distillation, etc.). I need to focus on research, not fighting drivers.
**My questions to the community:**
* For someone focused **purely on research fine-tuning and optimization of LLMs (LoRA/Distillation)**, and who wants to avoid software friction—**is the DGX Spark (or an equivalent H100 cluster) the only viable path?**
* Are experiments like **LoRA on 70B+ models** even **feasible** when attempting to use non-NVIDIA/non-high-VRAM alternatives?
* Has anyone here successfully used **AMD (Strix Halo or MI300 series) for advanced LLM research** involving LoRA and distillation? How *painful* is it compared to CUDA?
Any perspective from an LLM researcher is greatly appreciated. Thank you! | 2025-11-19T08:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/1p11xus/nvidia_dgx_spark_vs_alternatives_escaping_the_rtx/ | Muted-Examination278 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p11xus | false | null | t3_1p11xus | /r/LocalLLaMA/comments/1p11xus/nvidia_dgx_spark_vs_alternatives_escaping_the_rtx/ | false | false | self | 0 | null |
What are the most unique models that are under 15b you encountered | 20 | I'm not talking about nsfw, I know remember people claiming some models have personality, I would like to see what models you have encountered that were unique and fun to chat with. | 2025-11-19T08:00:40 | https://www.reddit.com/r/LocalLLaMA/comments/1p11od1/what_are_the_most_unique_models_that_are_under/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p11od1 | false | null | t3_1p11od1 | /r/LocalLLaMA/comments/1p11od1/what_are_the_most_unique_models_that_are_under/ | false | false | self | 20 | null |
Hosting a deep-dive on agentic orchestration for customer-facing AI | 1 | Hey everyone, we (Parlant open-source) are hosting a live webinar on Compliant Agentic Orchestration next week.
We’ll walk through:
• A reliablility-first approach
• Accuracy optimization strategies
• Real-life lessons
If you’re building or experimenting with customer-facing agents, this might be up your alley.
Adding the link in the first comment.
Hope to see a few of you there, we’ll have time for live Q&A too.
Thanks! | 2025-11-19T07:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p11ml8/hosting_a_deepdive_on_agentic_orchestration_for/ | Chozee22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p11ml8 | false | null | t3_1p11ml8 | /r/LocalLLaMA/comments/1p11ml8/hosting_a_deepdive_on_agentic_orchestration_for/ | false | false | self | 1 | null |
Sanity check for a Threadripper + Dual RTX 6000 Ada node (Weather Forecasting / Deep Learning) | 0 | Hola!!
tldr
I’m in the process of finalizing a spec for a dedicated AI workstation/server node. The primary use case is training deep learning models for weather forecasting (transformers/CFD work), involving parallel processing of wind data.
We are aiming for a setup that is powerful now but "horizontally scalable" later (i.e., we plan to network multiple of these nodes together in the future).
Here is the current draft build:
• GPU: 2x NVIDIA RTX 6000 Ada (Plan to scale to 4x later)
• CPU: AMD Threadripper PRO 7985WX (64-Core)
• Motherboard: ASUS Pro WS WRX90E-SAGE SE
• RAM: 512GB DDR5 ECC (8-channel population)
• Storage: Enterprise U.2 NVMe drives (Micron/Solidigm)
• Chassis: Fractal Meshify 2 XL (with industrial 3000RPM fans)
My main questions for the community:
1. Motherboard Quirks: Has anyone deployed the WRX90E-SAGE SE with 4x double-width cards? I want to ensure the spacing/thermals are manageable on air cooling before we commit.
2. Networking: Since we plan to cluster these later, is 100GbE sufficient, or should we be looking immediately at InfiniBand if we want these nodes to talk efficiently?
3. The "Ada" Limitation: We chose the RTX 6000 Ada for the raw compute/VRAM density, fully aware they lack NVLink. For those doing transformer training, has the PCIe bottleneck been a major issue for you with model parallelism, or is software sharding (DeepSpeed/FSDP) efficient enough?
Any advice or "gotchas" regarding this specific hardware combination would be greatly appreciated.
Thanks! | 2025-11-19T07:56:30 | https://www.reddit.com/r/LocalLLaMA/comments/1p11m4z/sanity_check_for_a_threadripper_dual_rtx_6000_ada/ | Icy_Gas8807 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p11m4z | false | null | t3_1p11m4z | /r/LocalLLaMA/comments/1p11m4z/sanity_check_for_a_threadripper_dual_rtx_6000_ada/ | false | false | self | 0 | null |
I built a full TOON Format toolkit for devs using LLMs (feedback welcome) | 0 | I’ve been experimenting with the TOON data format to reduce token usage in LLM applications.
To make the workflow easier, I built Toonkit — a full web-based toolkit:
• JSON/XML/Markdown/CSV → TOON converter
• TOON → JSON/XML/CSV/Markdown
• Token estimator (JSON vs TOON)
• TOON beautifier & validator
• Schema builder
• Playground & snippets
It’s free to use right now. If you’re into LLM tooling or data compression,
I’d love your feedback.
Link: [https://toonkit.online](https://toonkit.online) | 2025-11-19T07:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p11ke8/i_built_a_full_toon_format_toolkit_for_devs_using/ | Joelina0310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p11ke8 | false | null | t3_1p11ke8 | /r/LocalLLaMA/comments/1p11ke8/i_built_a_full_toon_format_toolkit_for_devs_using/ | false | false | self | 0 | null |
vLLM 0.11.1 Seems to Be Bringing Massive Speedup on Turing GPUs | 33 | vllm v0.11.1 using a new FLASHINFER backend and re-enables FP16 support on Turing GPUs, resulting in a much better performance on Volta and Turing GPUs (close to lmdeploy, better in prefill, worse in decode).
Hoping someone with a V100, T4, 2080Ti(22GB) or Titan RTX can have a similar test.
Here is a brief Qwen3-4B-Inst-2507 throughput benchmark of on my [Tesla T10 16GB](https://www.techpowerup.com/gpu-specs/tesla-t10-16-gb.c4036) (a rare Tesla GPU close to RTX 2080, but 16GB).
I am using these commands to serve all of these models:
CUDA_VISIBLE_DEVICES=1 vllm serve Qwen3-4B-Instruct-2507 --gpu_memory_utilization 0.9 --port 8000 --max-model-len 16k
CUDA_VISIBLE_DEVICES=1 lmdeploy serve api_server Qwen3-4B-Instruct-2507 --server-port 8000 --session-len 16384
# Prefill Heavy: PP8192/TG1 (Parallel 16)
vllm 0.11.0
vllm bench serve --dataset-name random --num-prompts 16 --backend vllm --host 10.249.42.202 --port 8000 --max-concurrency 16 --tokenizer Qwen3-0.6B --model Qwen3-4B-Instruct-2507 --random-input-len 8192 --random-output-len 1
INFO 11-19 14:58:30 [__init__.py:216] Automatically detected platform cuda.
Namespace(subparser='bench', bench_type='serve', dispatch_function=<function BenchmarkServingSubcommand.cmd at 0x7f020b929620>, seed=0, num_prompts=16, dataset_name='random', no_stream=False, dataset_path=None, no_oversample=False, custom_output_len=256, custom_skip_chat_template=False, spec_bench_output_len=256, spec_bench_category=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, blazedit_min_distance=0.0, blazedit_max_distance=1.0, random_input_len=8192, random_output_len=1, random_range_ratio=0.0, random_prefix_len=0, random_batch_size=1, random_mm_base_items_per_request=1, random_mm_num_mm_items_range_ratio=0.0, random_mm_limit_mm_per_prompt={'image': 255, 'video': 0}, random_mm_bucket_config={(256, 256, 1): 0.5, (720, 1280, 1): 0.5, (720, 1280, 16): 0.0}, hf_subset=None, hf_split=None, hf_name=None, hf_output_len=None, prefix_repetition_prefix_len=256, prefix_repetition_suffix_len=256, prefix_repetition_num_prefixes=10, prefix_repetition_output_len=128, label=None, backend='vllm', endpoint_type=None, base_url=None, host='10.249.42.202', port=8000, endpoint='/v1/completions', header=None, max_concurrency=16, model='Qwen3-4B-Instruct-2507', tokenizer='Qwen3-0.6B', use_beam_search=False, logprobs=None, request_rate=inf, burstiness=1.0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, save_detailed=False, append_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, request_id_prefix='benchmark-serving', top_p=None, top_k=None, min_p=None, temperature=None, tokenizer_mode='auto', served_model_name=None, lora_modules=None, ramp_up_strategy=None, ramp_up_start_rps=None, ramp_up_end_rps=None, ready_check_timeout_sec=600)
INFO 11-19 14:58:32 [datasets.py:507] Sampling input_len from [8192, 8192] and output_len from [1, 1]
Starting initial single prompt test run...
Waiting for endpoint to become up in 600 seconds
| | 01:21 elapsed, 31635:35:38 remaining
Initial test run completed. Starting main benchmark run...
Traffic request rate: inf
Burstiness factor: 1.0 (Poisson process)
Maximum request concurrency: 16
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [04:48<00:00, 18.02s/it]
tip: install termplotlib and gnuplot to plot the metrics
============ Serving Benchmark Result ============
Successful requests: 16
Maximum request concurrency: 16
Benchmark duration (s): 288.39
Total input tokens: 130981
Total generated tokens: 16
Request throughput (req/s): 0.06
Output token throughput (tok/s): 0.06
Peak output token throughput (tok/s): 1.00
Peak concurrent requests: 16.00
Total Token throughput (tok/s): 454.23
---------------Time to First Token----------------
Mean TTFT (ms): 125794.42
Median TTFT (ms): 111166.06
P99 TTFT (ms): 283469.41
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 0.00
Median TPOT (ms): 0.00
P99 TPOT (ms): 0.00
---------------Inter-token Latency----------------
Mean ITL (ms): 0.00
Median ITL (ms): 0.00
P99 ITL (ms): 0.00
==================================================
vllm 0.11.1
vllm bench serve --dataset-name random --num-prompts 64 --backend vllm --host 10.249.42.202 --port 8000 --max-concurrency 16 --tokenizer Qwen3-0.6B --model Qwen3-4B-Instruct-2507 --random-input-len 8192 --random-output-len 1
INFO 11-19 14:47:01 [__init__.py:216] Automatically detected platform cuda.
Namespace(subparser='bench', bench_type='serve', dispatch_function=<function BenchmarkServingSubcommand.cmd at 0x7f2572149620>, seed=0, num_prompts=64, dataset_name='random', no_stream=False, dataset_path=None, no_oversample=False, custom_output_len=256, custom_skip_chat_template=False, spec_bench_output_len=256, spec_bench_category=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, blazedit_min_distance=0.0, blazedit_max_distance=1.0, random_input_len=8192, random_output_len=1, random_range_ratio=0.0, random_prefix_len=0, random_batch_size=1, random_mm_base_items_per_request=1, random_mm_num_mm_items_range_ratio=0.0, random_mm_limit_mm_per_prompt={'image': 255, 'video': 0}, random_mm_bucket_config={(256, 256, 1): 0.5, (720, 1280, 1): 0.5, (720, 1280, 16): 0.0}, hf_subset=None, hf_split=None, hf_name=None, hf_output_len=None, prefix_repetition_prefix_len=256, prefix_repetition_suffix_len=256, prefix_repetition_num_prefixes=10, prefix_repetition_output_len=128, label=None, backend='vllm', endpoint_type=None, base_url=None, host='10.249.42.202', port=8000, endpoint='/v1/completions', header=None, max_concurrency=16, model='Qwen3-4B-Instruct-2507', tokenizer='Qwen3-0.6B', use_beam_search=False, logprobs=None, request_rate=inf, burstiness=1.0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, save_detailed=False, append_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, request_id_prefix='benchmark-serving', top_p=None, top_k=None, min_p=None, temperature=None, tokenizer_mode='auto', served_model_name=None, lora_modules=None, ramp_up_strategy=None, ramp_up_start_rps=None, ramp_up_end_rps=None, ready_check_timeout_sec=600)
INFO 11-19 14:47:04 [datasets.py:507] Sampling input_len from [8192, 8192] and output_len from [1, 1]
Starting initial single prompt test run...
Waiting for endpoint to become up in 600 seconds
| | 00:01 elapsed, 642:35:16 remaining
Initial test run completed. Starting main benchmark run...
Traffic request rate: inf
Burstiness factor: 1.0 (Poisson process)
Maximum request concurrency: 16
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [01:50<00:00, 1.72s/it]
tip: install termplotlib and gnuplot to plot the metrics
============ Serving Benchmark Result ============
Successful requests: 64
Maximum request concurrency: 16
Benchmark duration (s): 110.03
Total input tokens: 523886
Total generated tokens: 64
Request throughput (req/s): 0.58
Output token throughput (tok/s): 0.58
Peak output token throughput (tok/s): 1.00
Peak concurrent requests: 17.00
Total Token throughput (tok/s): 4761.83
---------------Time to First Token----------------
Mean TTFT (ms): 24172.28
Median TTFT (ms): 27210.15
P99 TTFT (ms): 28380.61
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 0.00
Median TPOT (ms): 0.00
P99 TPOT (ms): 0.00
---------------Inter-token Latency----------------
Mean ITL (ms): 0.00
Median ITL (ms): 0.00
P99 ITL (ms): 0.00
==================================================
lmdeploy
vllm bench serve --dataset-name random --num-prompts 64 --backend vllm --host 10.249.42.202 --port 8000 --max-concurrency 16 --tokenizer Qwen3-0.6B --model Qwen3-4B-Instruct-2507 --random-input-len 8192 --random-output-len 1
INFO 11-19 15:16:51 [__init__.py:216] Automatically detected platform cuda.
Namespace(subparser='bench', bench_type='serve', dispatch_function=<function BenchmarkServingSubcommand.cmd at 0x7fa4823b5620>, seed=0, num_prompts=64, dataset_name='random', no_stream=False, dataset_path=None, no_oversample=False, custom_output_len=256, custom_skip_chat_template=False, spec_bench_output_len=256, spec_bench_category=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, blazedit_min_distance=0.0, blazedit_max_distance=1.0, random_input_len=8192, random_output_len=1, random_range_ratio=0.0, random_prefix_len=0, random_batch_size=1, random_mm_base_items_per_request=1, random_mm_num_mm_items_range_ratio=0.0, random_mm_limit_mm_per_prompt={'image': 255, 'video': 0}, random_mm_bucket_config={(256, 256, 1): 0.5, (720, 1280, 1): 0.5, (720, 1280, 16): 0.0}, hf_subset=None, hf_split=None, hf_name=None, hf_output_len=None, prefix_repetition_prefix_len=256, prefix_repetition_suffix_len=256, prefix_repetition_num_prefixes=10, prefix_repetition_output_len=128, label=None, backend='vllm', endpoint_type=None, base_url=None, host='10.249.42.202', port=8000, endpoint='/v1/completions', header=None, max_concurrency=16, model='Qwen3-4B-Instruct-2507', tokenizer='Qwen3-0.6B', use_beam_search=False, logprobs=None, request_rate=inf, burstiness=1.0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, save_detailed=False, append_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, request_id_prefix='benchmark-serving', top_p=None, top_k=None, min_p=None, temperature=None, tokenizer_mode='auto', served_model_name=None, lora_modules=None, ramp_up_strategy=None, ramp_up_start_rps=None, ramp_up_end_rps=None, ready_check_timeout_sec=600)
INFO 11-19 15:16:53 [datasets.py:507] Sampling input_len from [8192, 8192] and output_len from [1, 1]
Starting initial single prompt test run...
Waiting for endpoint to become up in 600 seconds
| | 00:01 elapsed, 756:41:43 remaining
Initial test run completed. Starting main benchmark run...
Traffic request rate: inf
Burstiness factor: 1.0 (Poisson process)
Maximum request concurrency: 16
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [01:58<00:00, 1.85s/it]
tip: install termplotlib and gnuplot to plot the metrics
============ Serving Benchmark Result ============
Successful requests: 64
Maximum request concurrency: 16
Benchmark duration (s): 118.10
Total input tokens: 523886
Total generated tokens: 124
Request throughput (req/s): 0.54
Output token throughput (tok/s): 1.05
Peak output token throughput (tok/s): 8.00
Peak concurrent requests: 18.00
Total Token throughput (tok/s): 4437.05
---------------Time to First Token----------------
Mean TTFT (ms): 24981.20
Median TTFT (ms): 28008.93
P99 TTFT (ms): 29259.25
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 1803.85
Median TPOT (ms): 1869.74
P99 TPOT (ms): 1937.03
---------------Inter-token Latency----------------
Mean ITL (ms): 895.75
Median ITL (ms): 0.33
P99 ITL (ms): 1936.55
==================================================
# Decode heavy: PP512/TG512 (Parallel 16)
v0.11.0
vllm bench serve --dataset-name random --num-prompts 16 --backend vllm --host 10.249.42.202 --port 8000 --max-concurrency 16 --tokenizer Qwen3-0.6B --model Qwen3-4B-Instruct-2507 --random-input-len 512 --random-output-len 512
INFO 11-19 15:08:12 [__init__.py:216] Automatically detected platform cuda.
Namespace(subparser='bench', bench_type='serve', dispatch_function=<function BenchmarkServingSubcommand.cmd at 0x7fe684875620>, seed=0, num_prompts=16, dataset_name='random', no_stream=False, dataset_path=None, no_oversample=False, custom_output_len=256, custom_skip_chat_template=False, spec_bench_output_len=256, spec_bench_category=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, blazedit_min_distance=0.0, blazedit_max_distance=1.0, random_input_len=512, random_output_len=512, random_range_ratio=0.0, random_prefix_len=0, random_batch_size=1, random_mm_base_items_per_request=1, random_mm_num_mm_items_range_ratio=0.0, random_mm_limit_mm_per_prompt={'image': 255, 'video': 0}, random_mm_bucket_config={(256, 256, 1): 0.5, (720, 1280, 1): 0.5, (720, 1280, 16): 0.0}, hf_subset=None, hf_split=None, hf_name=None, hf_output_len=None, prefix_repetition_prefix_len=256, prefix_repetition_suffix_len=256, prefix_repetition_num_prefixes=10, prefix_repetition_output_len=128, label=None, backend='vllm', endpoint_type=None, base_url=None, host='10.249.42.202', port=8000, endpoint='/v1/completions', header=None, max_concurrency=16, model='Qwen3-4B-Instruct-2507', tokenizer='Qwen3-0.6B', use_beam_search=False, logprobs=None, request_rate=inf, burstiness=1.0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, save_detailed=False, append_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, request_id_prefix='benchmark-serving', top_p=None, top_k=None, min_p=None, temperature=None, tokenizer_mode='auto', served_model_name=None, lora_modules=None, ramp_up_strategy=None, ramp_up_start_rps=None, ramp_up_end_rps=None, ready_check_timeout_sec=600)
INFO 11-19 15:08:14 [datasets.py:507] Sampling input_len from [512, 512] and output_len from [512, 512]
Starting initial single prompt test run...
Waiting for endpoint to become up in 600 seconds
| | 00:40 elapsed, 15758:20:48 remaining
Initial test run completed. Starting main benchmark run...
Traffic request rate: inf
Burstiness factor: 1.0 (Poisson process)
Maximum request concurrency: 16
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [03:02<00:00, 11.43s/it]
tip: install termplotlib and gnuplot to plot the metrics
============ Serving Benchmark Result ============
Successful requests: 16
Maximum request concurrency: 16
Benchmark duration (s): 182.80
Total input tokens: 8177
Total generated tokens: 7681
Request throughput (req/s): 0.09
Output token throughput (tok/s): 42.02
Peak output token throughput (tok/s): 75.00
Peak concurrent requests: 16.00
Total Token throughput (tok/s): 86.75
---------------Time to First Token----------------
Mean TTFT (ms): 18188.82
Median TTFT (ms): 16467.30
P99 TTFT (ms): 22968.20
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 322.22
Median TPOT (ms): 325.09
P99 TPOT (ms): 327.25
---------------Inter-token Latency----------------
Mean ITL (ms): 322.22
Median ITL (ms): 307.80
P99 ITL (ms): 389.45
==================================================
v0.11.1
vllm bench serve --dataset-name random --num-prompts 64 --backend vllm --host 10.249.42.202 --port 8000 --max-concurrency 16 --tokenizer Qwen3-0.6B --model Qwen3-4B-Instruct-2507 --random-input-len 512 --random-output-len 512
INFO 11-19 14:54:10 [__init__.py:216] Automatically detected platform cuda.
Namespace(subparser='bench', bench_type='serve', dispatch_function=<function BenchmarkServingSubcommand.cmd at 0x7f76d6b1d580>, seed=0, num_prompts=64, dataset_name='random', no_stream=False, dataset_path=None, no_oversample=False, custom_output_len=256, custom_skip_chat_template=False, spec_bench_output_len=256, spec_bench_category=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, blazedit_min_distance=0.0, blazedit_max_distance=1.0, random_input_len=512, random_output_len=512, random_range_ratio=0.0, random_prefix_len=0, random_batch_size=1, random_mm_base_items_per_request=1, random_mm_num_mm_items_range_ratio=0.0, random_mm_limit_mm_per_prompt={'image': 255, 'video': 0}, random_mm_bucket_config={(256, 256, 1): 0.5, (720, 1280, 1): 0.5, (720, 1280, 16): 0.0}, hf_subset=None, hf_split=None, hf_name=None, hf_output_len=None, prefix_repetition_prefix_len=256, prefix_repetition_suffix_len=256, prefix_repetition_num_prefixes=10, prefix_repetition_output_len=128, label=None, backend='vllm', endpoint_type=None, base_url=None, host='10.249.42.202', port=8000, endpoint='/v1/completions', header=None, max_concurrency=16, model='Qwen3-4B-Instruct-2507', tokenizer='Qwen3-0.6B', use_beam_search=False, logprobs=None, request_rate=inf, burstiness=1.0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, save_detailed=False, append_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, request_id_prefix='benchmark-serving', top_p=None, top_k=None, min_p=None, temperature=None, tokenizer_mode='auto', served_model_name=None, lora_modules=None, ramp_up_strategy=None, ramp_up_start_rps=None, ramp_up_end_rps=None, ready_check_timeout_sec=600)
INFO 11-19 14:54:12 [datasets.py:507] Sampling input_len from [512, 512] and output_len from [512, 512]
Starting initial single prompt test run...
Waiting for endpoint to become up in 600 seconds
| | 00:12 elapsed, 4714:00:33 remaining
Initial test run completed. Starting main benchmark run...
Traffic request rate: inf
Burstiness factor: 1.0 (Poisson process)
Maximum request concurrency: 16
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [01:11<00:00, 1.11s/it]
tip: install termplotlib and gnuplot to plot the metrics
============ Serving Benchmark Result ============
Successful requests: 64
Maximum request concurrency: 16
Benchmark duration (s): 71.04
Total input tokens: 32565
Total generated tokens: 31353
Request throughput (req/s): 0.90
Output token throughput (tok/s): 441.34
Peak output token throughput (tok/s): 512.00
Peak concurrent requests: 31.00
Total Token throughput (tok/s): 899.75
---------------Time to First Token----------------
Mean TTFT (ms): 591.82
Median TTFT (ms): 599.07
P99 TTFT (ms): 1251.87
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 33.70
Median TPOT (ms): 34.11
P99 TPOT (ms): 35.13
---------------Inter-token Latency----------------
Mean ITL (ms): 33.68
Median ITL (ms): 32.30
P99 ITL (ms): 35.16
==================================================
lmdeploy:
vllm bench serve --dataset-name random --num-prompts 64 --backend vllm --host 10.249.42.202 --port 8000 --max-concurrency 16 --tokenizer Qwen3-0.6B --model Qwen3-4B-Instruct-2507 --random-input-len 512 --random-output-len 512
INFO 11-19 15:14:54 [__init__.py:216] Automatically detected platform cuda.
Namespace(subparser='bench', bench_type='serve', dispatch_function=<function BenchmarkServingSubcommand.cmd at 0x7f3146319580>, seed=0, num_prompts=64, dataset_name='random', no_stream=False, dataset_path=None, no_oversample=False, custom_output_len=256, custom_skip_chat_template=False, spec_bench_output_len=256, spec_bench_category=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, sharegpt_output_len=None, blazedit_min_distance=0.0, blazedit_max_distance=1.0, random_input_len=512, random_output_len=512, random_range_ratio=0.0, random_prefix_len=0, random_batch_size=1, random_mm_base_items_per_request=1, random_mm_num_mm_items_range_ratio=0.0, random_mm_limit_mm_per_prompt={'image': 255, 'video': 0}, random_mm_bucket_config={(256, 256, 1): 0.5, (720, 1280, 1): 0.5, (720, 1280, 16): 0.0}, hf_subset=None, hf_split=None, hf_name=None, hf_output_len=None, prefix_repetition_prefix_len=256, prefix_repetition_suffix_len=256, prefix_repetition_num_prefixes=10, prefix_repetition_output_len=128, label=None, backend='vllm', endpoint_type=None, base_url=None, host='10.249.42.202', port=8000, endpoint='/v1/completions', header=None, max_concurrency=16, model='Qwen3-4B-Instruct-2507', tokenizer='Qwen3-0.6B', use_beam_search=False, logprobs=None, request_rate=inf, burstiness=1.0, trust_remote_code=False, disable_tqdm=False, profile=False, save_result=False, save_detailed=False, append_result=False, metadata=None, result_dir=None, result_filename=None, ignore_eos=False, percentile_metrics='ttft,tpot,itl', metric_percentiles='99', goodput=None, request_id_prefix='benchmark-serving', top_p=None, top_k=None, min_p=None, temperature=None, tokenizer_mode='auto', served_model_name=None, lora_modules=None, ramp_up_strategy=None, ramp_up_start_rps=None, ramp_up_end_rps=None, ready_check_timeout_sec=600)
INFO 11-19 15:14:57 [datasets.py:507] Sampling input_len from [512, 512] and output_len from [512, 512]
Starting initial single prompt test run...
Waiting for endpoint to become up in 600 seconds
| | 00:14 elapsed, 5459:10:19 remaining
Initial test run completed. Starting main benchmark run...
Traffic request rate: inf
Burstiness factor: 1.0 (Poisson process)
Maximum request concurrency: 16
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [01:05<00:00, 1.03s/it]
tip: install termplotlib and gnuplot to plot the metrics
============ Serving Benchmark Result ============
Successful requests: 64
Maximum request concurrency: 16
Benchmark duration (s): 65.94
Total input tokens: 32565
Total generated tokens: 30895
Request throughput (req/s): 0.97
Output token throughput (tok/s): 468.55
Peak output token throughput (tok/s): 560.00
Peak concurrent requests: 32.00
Total Token throughput (tok/s): 962.42
---------------Time to First Token----------------
Mean TTFT (ms): 1051.63
Median TTFT (ms): 1118.93
P99 TTFT (ms): 1370.53
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 30.14
Median TPOT (ms): 30.31
P99 TPOT (ms): 32.24
---------------Inter-token Latency----------------
Mean ITL (ms): 30.11
Median ITL (ms): 29.66
P99 ITL (ms): 31.83
================================================== | 2025-11-19T07:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/1p115u6/vllm_0111_seems_to_be_bringing_massive_speedup_on/ | lly0571 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p115u6 | false | null | t3_1p115u6 | /r/LocalLLaMA/comments/1p115u6/vllm_0111_seems_to_be_bringing_massive_speedup_on/ | false | false | self | 33 | null |
Would going from 64GB to 128GB ($700) be wroth it? | 0 | I am currently considering upgrading my Mac that I use for a lot of tasks including LLMs. I have been getting in to AI and LLM over this past year and it's been really fun. I am going to be getting at least a 64GB model, but I could potentially bump it to 128GB but I am not sure if it's worth it. I know I would need the 128GB to run the 120b gpt-oos model, but I am not sure that's worth it over the 20b or not. Or if there are other models that need 64-120GB of RAM that would make this a worthwhile upgrade. Any feedback is appreciated. | 2025-11-19T07:17:33 | https://www.reddit.com/r/LocalLLaMA/comments/1p110bm/would_going_from_64gb_to_128gb_700_be_wroth_it/ | PersonSuitTV | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p110bm | false | null | t3_1p110bm | /r/LocalLLaMA/comments/1p110bm/would_going_from_64gb_to_128gb_700_be_wroth_it/ | false | false | self | 0 | null |
[Advice needed] Foreign language extraction using Qwen | 1 | For an image like below, would it be possible to extract its vendor name and translate into english? The constraint is a small VRAM(16GB). I have tried using Qwen3VL-8B on 4bit quant
Gemini2.5 pro works, but i need it to be able to work locally.
Any advice and tips would be useful
https://preview.redd.it/6mw8tosfz52g1.png?width=357&format=png&auto=webp&s=f81dcbe1c4ceecc9f6d1f4a83d02325c4eefbccc
| 2025-11-19T07:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/1p10zwb/advice_needed_foreign_language_extraction_using/ | Ok_Television_9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p10zwb | false | null | t3_1p10zwb | /r/LocalLLaMA/comments/1p10zwb/advice_needed_foreign_language_extraction_using/ | false | false | 1 | null | |
Chatterbox on m4 macbook.How long do I need to generate a 60 min audio lenghth?? | 0 | Would be a great favour if somebody can help!!! | 2025-11-19T06:44:24 | https://www.reddit.com/r/LocalLLaMA/comments/1p10gwn/chatterbox_on_m4_macbookhow_long_do_i_need_to/ | lethargickid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p10gwn | false | null | t3_1p10gwn | /r/LocalLLaMA/comments/1p10gwn/chatterbox_on_m4_macbookhow_long_do_i_need_to/ | false | false | self | 0 | null |
Most people in this LocalLLaMA are hypocritical. | 99 | When posts about qwen max appear, there are a lot of comments saying that it shouldn't be discussed.
https://preview.redd.it/2rr3u5vzk52g1.png?width=1062&format=png&auto=webp&s=2adc0fc48f38eb189a05ecdb9ec1f366bd84e50b
However, when Gemini 3 and gpt 5 were discussed, not a single comment objected to their being discussed.
| 2025-11-19T05:56:48 | https://www.reddit.com/r/LocalLLaMA/comments/1p0zn5j/most_people_in_this_localllama_are_hypocritical/ | Ok_houlin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0zn5j | false | null | t3_1p0zn5j | /r/LocalLLaMA/comments/1p0zn5j/most_people_in_this_localllama_are_hypocritical/ | false | false | 99 | null | |
First Build Plan and Parts | 0 | Assembling my first Local LLM build aiming to spend right under 1500$. In total ended up spending just under 1300$ all in.
Parts List:
Intel 9900k
ASUS Maximus XI Hero WiFi
128GB DDR4-3200
RTX 3090
1 TB NVME SSD
1000W PSU
I got the motherboard, CPU, and Ram combo on ebay for 500$, and bought the 3090 in non working condition for 400$ and fixed it.
My goal project with this is to run a live audio transcriber for the classes I attend, summarize the info, and sort it into different subjects while also marking down deadlines/reminders in my calendar. I have not began to setup the software yet so any recommendations on Models to run would be greatly appreciated. I'm super new to this but am very excited to get involved in the hobby.
TIA
| 2025-11-19T05:56:35 | https://www.reddit.com/r/LocalLLaMA/comments/1p0zn17/first_build_plan_and_parts/ | MacCollins44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0zn17 | false | null | t3_1p0zn17 | /r/LocalLLaMA/comments/1p0zn17/first_build_plan_and_parts/ | false | false | self | 0 | null |
Free Listing For Builders | 1 | [removed] | 2025-11-19T05:42:43 | https://www.reddit.com/r/LocalLLaMA/comments/1p0ze86/free_listing_for_builders/ | One-Incident2448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0ze86 | false | null | t3_1p0ze86 | /r/LocalLLaMA/comments/1p0ze86/free_listing_for_builders/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'sfindevJlkrVasuCalJj9viWQ-ihu3UzadENgSkKP2I', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/sfindevJlkrVasuCalJj9viWQ-ihu3UzadENgSkKP2I.jpeg?width=108&crop=smart&auto=webp&s=010cc9ea5d0bc3a05da242442cb775e94fc8301a', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/sfindevJlkrVasuCalJj9viWQ-ihu3UzadENgSkKP2I.jpeg?width=216&crop=smart&auto=webp&s=ae278eebea08677310cc6ff3ab07bf043207b9cb', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/sfindevJlkrVasuCalJj9viWQ-ihu3UzadENgSkKP2I.jpeg?width=320&crop=smart&auto=webp&s=89e9ae70f9ee36ca0f6495a0152e0f9ddda7daac', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/sfindevJlkrVasuCalJj9viWQ-ihu3UzadENgSkKP2I.jpeg?width=640&crop=smart&auto=webp&s=c02166f002801604eab848b974928cc8c8c6b0eb', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/sfindevJlkrVasuCalJj9viWQ-ihu3UzadENgSkKP2I.jpeg?width=960&crop=smart&auto=webp&s=25e8e93e5f07f39235ed7157abaa0325865bdf5f', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/sfindevJlkrVasuCalJj9viWQ-ihu3UzadENgSkKP2I.jpeg?width=1080&crop=smart&auto=webp&s=e2fb5da6678da243bd00e40a05aaaac57313434e', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/sfindevJlkrVasuCalJj9viWQ-ihu3UzadENgSkKP2I.jpeg?auto=webp&s=8301ddc00ddb0b5a52f57c03a2e1ae69a424c066', 'width': 1280}, 'variants': {}}]} |
Who Is the Most Threatened by Structural Intelligence? | 0 | A deeper reflection on the next paradigm shift in AI.
Across the entire AI community, one theme is becoming harder to ignore:
probabilistic models may represent an impressive chapter, but not the final architecture of intelligence.
Large language models have reached extraordinary capability — coding, summarization, reasoning-by-pattern, multi-modal integration, and even early forms of tool use.
But despite their success, many researchers sense a missing layer, something beneath the surface:
• structure
• relations
• internal pathways
• coherence
• cognitive organization
These elements are not captured by prediction alone.
This is where the idea of structural intelligence enters the conversation.
Not as a finished system, not as a product, but as a conceptual proposal for the next direction in AI: intelligence built not on token probability, but on internal reasoning structures, conceptual relations, and stable cognitive paths.
If such a paradigm ever becomes mainstream, it will not affect all companies equally.
Some will adapt quickly; others may find themselves standing on foundations suddenly less secure than they once appeared.
So which companies face the greatest risk?
⸻
1. Google — the most exposed, and also the most likely to adopt.
Google is in a paradoxical position.
On one hand, it is the tech giant most vulnerable to a structural shift.
Its empire — Search, Ads, Gemini, Android’s AI layer — is built on probabilistic ranking, predictive modeling, and large-scale statistical inference.
If the center of gravity in AI shifts from “pattern prediction” to structured reasoning, Google’s intellectual infrastructure would face the deepest philosophical shock.
But on the other hand:
Google also possesses the world’s most philosophy-oriented AI lab (DeepMind), the closest to thinking about structure, reasoning, and cognitive alignment at a deeper level.
This makes Google both:
• the most threatened, and
• the most capable of evolving
in a world where structural intelligence becomes a serious research direction.
⸻
2. Microsoft — deeply invested in probabilistic AI
Microsoft’s relationship with OpenAI gives it a massive competitive advantage today,
but also places it in a vulnerable position if AI’s center of innovation shifts away from LLMs.
Copilot, enterprise AI tools, and much of Azure’s strategy are designed around:
• bigger models
• better fine-tuning
• improved probabilistic inference
A structural paradigm — emphasizing reasoning paths, conceptual relations, and cognitive organization — would require Microsoft to rethink portions of its AI stack.
Not a fatal threat, but a major redirection.
⸻
3. Meta — vulnerable in AI research, but safe in business
Meta’s LLaMA family is strong and influential, especially in open-source communities.
But LLaMA is still firmly in the probabilistic paradigm.
A shift toward structure would mean:
• a new model family
• new research directions
• new conceptual foundations
• a reconsideration of what “reasoning” means in an AI system
However, unlike Google or Microsoft, Meta’s core business does not depend on leading the next AI architecture.
Its risk is academic and technical, not existential.
⸻
4. Nvidia — pressure on the hardware layer
Nvidia is not an AI model company, yet it stands at the center of the current paradigm.
The entire GPU ecosystem is optimized for:
• dense matrix multiplication
• token-based transformer workloads
• probabilistic inference
Structural intelligence — depending on what shape it ultimately takes — could reduce the dominance of transformer-like workloads and push hardware in a new direction:
• more graph-based computation
• more pathway-oriented parallelism
• new forms of cognitive acceleration
Nvidia is not in immediate danger, but a paradigm shift would force it to evolve at the architectural level.
⸻
5. Amazon — affected but not disrupted
Amazon relies heavily on prediction, but not in the same way as Google or Microsoft.
Structural intelligence would influence:
• supply chain optimization
• recommendation systems
• autonomous logistics
• AWS model services
However, Amazon’s business model is broad enough that a paradigm shift in AI would not destabilize its foundation.
⸻
6. Apple — AI is not its strategic backbone
Apple integrates AI into the user experience, but AI is not the center of its economic engine.
A new structure of intelligence would eventually affect:
• on-device reasoning
• private/local AI models
• intelligent user interfaces
But the pressure on Apple is gentle compared to AI-first companies.
⸻
7. Tesla — least affected
Tesla’s AI is built on a different worldview:
• vision
• control
• end-to-end driving systems
• reinforcement learning
These approaches sit outside the probabilistic language-model paradigm,
so the shift toward structural intelligence would produce minimal disruption.
⸻
So who is most threatened?
The companies that built the highest towers on today’s paradigm — Google, Microsoft, Meta to some extent — would feel the first tremors if the ground beneath shifts from “probability-driven intelligence” to “structure-driven intelligence.”
Those who rely less on language-model architectures would experience less disruption.
But the deeper message is this:
When a paradigm changes, the companies closest to the old paradigm’s center are the ones that must change the fastest — or risk becoming monuments to a previous era of intelligence.
Whether structural intelligence becomes the next major direction remains an open question.
But exploring it helps illuminate where the current paradigm is strong, where it is fragile, and where the next breakthroughs may emerge.
| 2025-11-19T05:34:51 | https://www.reddit.com/r/LocalLLaMA/comments/1p0z99z/who_is_the_most_threatened_by_structural/ | Hefty_Document_9466 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0z99z | false | null | t3_1p0z99z | /r/LocalLLaMA/comments/1p0z99z/who_is_the_most_threatened_by_structural/ | false | false | self | 0 | null |
Free Listing for Builders | 1 | > | 2025-11-19T05:30:32 | https://www.reddit.com/r/LocalLLaMA/comments/1p0z6he/free_listing_for_builders/ | One-Incident2448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0z6he | false | null | t3_1p0z6he | /r/LocalLLaMA/comments/1p0z6he/free_listing_for_builders/ | false | false | self | 1 | null |
Macbook pro 128gb RAM owners, whats the best AI you're running for reasoning & knowledge? | 4 | I've been out of the game for a few months so I'm basically new at everything. Whats the biggest model you run that gets you good results? I don't care about speed at all. Just reasoning and knowledge. Thank you! | 2025-11-19T05:11:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p0yu23/macbook_pro_128gb_ram_owners_whats_the_best_ai/ | Vast-Piano2940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0yu23 | false | null | t3_1p0yu23 | /r/LocalLLaMA/comments/1p0yu23/macbook_pro_128gb_ram_owners_whats_the_best_ai/ | false | false | self | 4 | null |
Do you think Gemini 3 uses MoR or Titans? | 0 | Google some time ago introduced [MoR](https://arxiv.org/abs/2507.10524) and [Titans](https://arxiv.org/abs/2501.00663)
I'm very curious if Gemini 3 uses a combination of these two architectures.
I know that Google has described Gemini 3 as a sparse MoE transformers-based model but Google also described MoR as a transformers-based architecture which using recursion and dynamic token routing. So what if Gemini 3 is a mixture of MoR, Sparse MoE, Titans and Transformers, or am I missing something?
What are your thoughts on this?
| 2025-11-19T05:05:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p0yq1i/do_you_think_gemini_3_uses_mor_or_titans/ | SrijSriv211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0yq1i | false | null | t3_1p0yq1i | /r/LocalLLaMA/comments/1p0yq1i/do_you_think_gemini_3_uses_mor_or_titans/ | false | false | self | 0 | null |
How would one go about making an ai have all context for a show in the form of uploaded scripts? | 0 | What I'm trying to figure out how to do is upload all of the scripts of all episodes of a show I really like so the bot has accurate knowledge of what I want it to write about. I'm struggling to find many resources on how I'd go about this.
Been playing around in LM Studio but I'm not sure that's the right hosting service for what I'm trying to do.
Is this possible to do without increasing tokens to an ungodly high? Thanks. | 2025-11-19T04:11:44 | https://www.reddit.com/r/LocalLLaMA/comments/1p0xpbr/how_would_one_go_about_making_an_ai_have_all/ | RedHandTowel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0xpbr | false | null | t3_1p0xpbr | /r/LocalLLaMA/comments/1p0xpbr/how_would_one_go_about_making_an_ai_have_all/ | false | false | self | 0 | null |
llama.cpp: kv cache data type with -fa on (CUDA and Vulkan backends) | 1 | [removed] | 2025-11-19T03:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/1p0xehk/llamacpp_kv_cache_data_type_with_fa_on_cuda_and/ | PhilippeEiffel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0xehk | false | null | t3_1p0xehk | /r/LocalLLaMA/comments/1p0xehk/llamacpp_kv_cache_data_type_with_fa_on_cuda_and/ | false | false | self | 1 | null |
llama.cpp: kv cache data type with -fa on (CUDA and Vulkan backends) | 1 | [removed] | 2025-11-19T03:47:32 | https://www.reddit.com/r/LocalLLaMA/comments/1p0x800/llamacpp_kv_cache_data_type_with_fa_on_cuda_and/ | PhilippeEiffel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0x800 | false | null | t3_1p0x800 | /r/LocalLLaMA/comments/1p0x800/llamacpp_kv_cache_data_type_with_fa_on_cuda_and/ | false | false | self | 1 | null |
llama.cpp: kv cache data type with -fa on (CUDA and Vulkan backends) | 1 | [removed] | 2025-11-19T03:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p0x6po/llamacpp_kv_cache_data_type_with_fa_on_cuda_and/ | PhilippeEiffel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0x6po | false | null | t3_1p0x6po | /r/LocalLLaMA/comments/1p0x6po/llamacpp_kv_cache_data_type_with_fa_on_cuda_and/ | false | false | self | 1 | null |
Google launched Gemini 3. And I tried to break it. Here’s how. | 0 | I tried testing the Gemini 3 Pro model using the hardest challenges designed by the top LLMs.
I also documented it and created a whole 37 minute comprehensive video breakdown on how I did it.
If you have X.
Click on the given link and check out the full video.
I bet you’ll enjoy and love it. | 2025-11-19T03:44:45 | https://x.com/aaryan_kakad/status/1990953701041901594?s=46 | akmessi2810 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1p0x5w5 | false | null | t3_1p0x5w5 | /r/LocalLLaMA/comments/1p0x5w5/google_launched_gemini_3_and_i_tried_to_break_it/ | false | false | default | 0 | null |
llama.cpp: kv cache data type with -fa on (CUDA and Vulkan backends) | 1 | [removed] | 2025-11-19T03:43:42 | https://www.reddit.com/r/LocalLLaMA/comments/1p0x55i/llamacpp_kv_cache_data_type_with_fa_on_cuda_and/ | PhilippeEiffel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0x55i | false | null | t3_1p0x55i | /r/LocalLLaMA/comments/1p0x55i/llamacpp_kv_cache_data_type_with_fa_on_cuda_and/ | false | false | self | 1 | null |
Best LocalLLM Inference | 0 | Hey, I need the absolute best daily-driver local LLM server for my 12GB VRAM NVIDIA GPU (RTX 3060/4060-class) in late 2025.
My main uses:
- Agentic workflows (n8n, LangChain, LlamaIndex, CrewAI, Autogen, etc.)
- RAG and GraphRAG projects (long context is important)
- Tool calling / parallel tools / forced JSON output
- Vision/multimodal when needed (Pixtral-12B, Llama-3.2-11B-Vision, Qwen2-VL, etc.)
- Embeddings endpoint
- Project demos and quick prototyping with Open WebUI or SillyTavern sometimes
Constraints & strong preferences:
- I already saw raw llama.cpp is way faster than Ollama → I want that full-throttle speed, no unnecessary overhead
- I hate bloat and heavy GUIs (tried LM Studio, disliked it)
- When I’m inside a Python environment I strongly prefer pure llama.cpp solutions (llama-cpp-python) over anything else
- I need Ollama-style convenience: change model per request with "model": "xxx" in the payload, /v1/models endpoint, embeddings, works as drop-in OpenAI replacement
- 12–14B class models must fit comfortably and run fast (ideally 80+ t/s for text, decent vision speed)
- Bonus if it supports quantized KV cache for real 64k–128k context without dying
I’m very interested in TabbyAPI, ktransformers, llama.cpp-proxy, and the newest llama-cpp-python server features, but I want the single best setup that gives me raw speed + zero bloat + full Python integration + multi-model hot-swapping.
What is the current (Nov 2025) winner for someone exactly like me?
[View Poll](https://www.reddit.com/poll/1p0x2aj) | 2025-11-19T03:39:40 | https://www.reddit.com/r/LocalLLaMA/comments/1p0x2aj/best_localllm_inference/ | venpuravi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0x2aj | false | null | t3_1p0x2aj | /r/LocalLLaMA/comments/1p0x2aj/best_localllm_inference/ | false | false | self | 0 | null |
Free Claude Sonnet 4.5 is the best use of Google Antigravity | 0 | So earlier post I hit my quota on Gemini 3 Pro with a single prompt. Then I noticed they have Sonnet 4.5 and it has been running for 15 minutes non-stop. Google is subsiding Claude Sonnet 4.5, and guys you should try it. | 2025-11-19T03:37:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p0x0xo/free_claude_sonnet_45_is_the_best_use_of_google/ | ComposerGen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0x0xo | false | null | t3_1p0x0xo | /r/LocalLLaMA/comments/1p0x0xo/free_claude_sonnet_45_is_the_best_use_of_google/ | false | false | self | 0 | null |
What Size of LLM Can 4x RTX 5090 Handle? (96GB VRAM) | 0 | I currently have access to a server equipped with **4x RTX 5090 GPUs**. This setup provides a total of **96GB of VRAM** (assuming 24GB per card).
I'm planning to use this machine specifically for running and fine-tuning **open-source Large Language Models (LLMs)**.
Has anyone in the community tested a similar setup? I'm curious to know:
1. **What is the maximum size (in parameters) of a model I can reliably run/inference with this 96GB configuration?** (e.g., Qwen-72B, Llama 3-70B, etc.)
2. **What size of model could I feasibly fine-tune, and what training techniques would be recommended (e.g., QLoRA, full fine-tuning)?**
Any real-world benchmarks or advice would be greatly appreciated! Thanks in advance! | 2025-11-19T03:28:44 | https://www.reddit.com/r/LocalLLaMA/comments/1p0wu96/what_size_of_llm_can_4x_rtx_5090_handle_96gb_vram/ | Affectionate_Arm725 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0wu96 | false | null | t3_1p0wu96 | /r/LocalLLaMA/comments/1p0wu96/what_size_of_llm_can_4x_rtx_5090_handle_96gb_vram/ | false | false | self | 0 | null |
Model quota limit exceeded with 1 prompt Google Antigravity | 6 | I’m quite excited, so download the app and run it on an old Next.js project. The agent goes fully autonomous with a single prompt for minutes, so I grab my double cappuccino. By the time I came back, the limit was already hit.
Prompt: Understand the codebase and build the code.
Call 1-5: List files / read.
Call 6-96: Install dependencies, generate Prisma client, build Next.js app, verify API routes, fix routes, fix lint.
22 files changed.
Model quota limit exceeded. | 2025-11-19T03:23:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p0wqib/model_quota_limit_exceeded_with_1_prompt_google/ | ComposerGen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0wqib | false | null | t3_1p0wqib | /r/LocalLLaMA/comments/1p0wqib/model_quota_limit_exceeded_with_1_prompt_google/ | false | false | self | 6 | null |
Momentum by movementlabs.ai is not just another AI model, it's on par with GEMINI 3 at 20x the speed. | 0 |  | 2025-11-19T02:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/1p0vrhx/momentum_by_movementlabsai_is_not_just_another_ai/ | Vast_Cupcake1039 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0vrhx | false | null | t3_1p0vrhx | /r/LocalLLaMA/comments/1p0vrhx/momentum_by_movementlabsai_is_not_just_another_ai/ | false | false | self | 0 | null |
What are the best resources to learn more about Local LLMs? | 1 | I've got an Intel **Core™ i5-9600T, 32gb, and a lot of ambition. I've installed Ollama, and a few models, but I really don't know what to expect.**
**I've been following this sub for a few days and it still feels so over my head. Is there a discord for me to ask stupid questions?**
**My goal is to make a local AI that has all of my docker configs as its knowledge base so it can help me make changes and tinker. I don't ever know if thats reasonable. How did you guys figure all this out? What YouTube channels or sites or subs were the most helpful? I really just wanna have fun. If I find a use for it I will make a purpose built machine for it.** | 2025-11-19T02:33:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p0vo67/what_are_the_best_resources_to_learn_more_about/ | thevault08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0vo67 | false | null | t3_1p0vo67 | /r/LocalLLaMA/comments/1p0vo67/what_are_the_best_resources_to_learn_more_about/ | false | false | self | 1 | null |
[LIVE] Gemini 3 Pro vs GPT-5.1: Chess Match (Testing Reasoning Capabilities) | 18 | Hi everyone,
Like many of you, I was eager to test the new Gemini 3 Pro!
I’ve just kicked off a chess game between **GPT-5.1 (White)** and **Gemini 3 Pro (Black)** on the *LLM Chess Arena* app I developed a few months ago.
A single game can take a while (sometimes several hours!), so I thought it would be fun to share the live link with you all!
**🔴 Link to the match:** [**https://chess.louisguichard.fr/battle?game=gpt-51-vs-gemini-3-pro-bb53bbdd**](https://chess.louisguichard.fr/battle?game=gpt-51-vs-gemini-3-pro-bb53bbdd)
LLMs aren't designed to play chess and they're not very good at it, but I find it interesting to test them on this because it clearly shows their capabilities or limitations in terms of thinking.
Come hang out and see who cracks first! | 2025-11-19T02:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p0ve2s/live_gemini_3_pro_vs_gpt51_chess_match_testing/ | Apart-Ad-1684 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0ve2s | false | null | t3_1p0ve2s | /r/LocalLLaMA/comments/1p0ve2s/live_gemini_3_pro_vs_gpt51_chess_match_testing/ | false | false | self | 18 | null |
Real-world benchmark TOON with OpenAI API | 0 | # 🔬 Test Results - PRODUCTION VALIDATED
**✅ ZERO ACCURACY IMPACT**
* JSON Accuracy: **86.9%**
* TOON Accuracy: **86.9%**
* Difference: **0.0%** (identical)
**✅ SIGNIFICANT TOKEN SAVINGS**
* Total tokens saved: **545 tokens (18.3%)**
* Prompt token savings: **134 tokens per question**
**✅ COST EFFICIENT**
* Test cost: **$0.0025** (less than a penny!)
* Annual savings at scale: Hundreds of dollars | 2025-11-19T01:58:15 | https://www.reddit.com/r/LocalLLaMA/comments/1p0ux43/realworld_benchmark_toon_with_openai_api/ | Least-Barracuda-2793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0ux43 | false | null | t3_1p0ux43 | /r/LocalLLaMA/comments/1p0ux43/realworld_benchmark_toon_with_openai_api/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'WezminywvwrGcGy8ppIYiHjWnP4uDrgXe8XCAKN5YnQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WezminywvwrGcGy8ppIYiHjWnP4uDrgXe8XCAKN5YnQ.png?width=108&crop=smart&auto=webp&s=bf1d5c92946234fb4ad022b0531db0d97cfa8f5c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WezminywvwrGcGy8ppIYiHjWnP4uDrgXe8XCAKN5YnQ.png?width=216&crop=smart&auto=webp&s=8e2ead1c8fd94b1fcd077f4ebcd606f9ed5ef542', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WezminywvwrGcGy8ppIYiHjWnP4uDrgXe8XCAKN5YnQ.png?width=320&crop=smart&auto=webp&s=84ef1f79e3b60550e17cbc8c4b0024dff77c9bb5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WezminywvwrGcGy8ppIYiHjWnP4uDrgXe8XCAKN5YnQ.png?width=640&crop=smart&auto=webp&s=c820086ee50ac91b124f35182f327314df65afcd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WezminywvwrGcGy8ppIYiHjWnP4uDrgXe8XCAKN5YnQ.png?width=960&crop=smart&auto=webp&s=d1f0bbb68e70c7bb825e105732da59c2fecb4579', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WezminywvwrGcGy8ppIYiHjWnP4uDrgXe8XCAKN5YnQ.png?width=1080&crop=smart&auto=webp&s=b250fb7d09f72ce881849df9e3b8c77456b79173', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WezminywvwrGcGy8ppIYiHjWnP4uDrgXe8XCAKN5YnQ.png?auto=webp&s=36b893718a57ac3865c849f3c1f3c00959ef093e', 'width': 1200}, 'variants': {}}]} |
3 Intel B580 or one used 3090? | 1 | Building my first local LLM and was wondering which of these options would be better for price to performance. TIA | 2025-11-19T01:40:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p0uika/3_intel_b580_or_one_used_3090/ | MacCollins44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0uika | false | null | t3_1p0uika | /r/LocalLLaMA/comments/1p0uika/3_intel_b580_or_one_used_3090/ | false | false | self | 1 | null |
ollama's enshitification has begun! open-source is not their priority anymore, because they're YC-backed and must become profitable for VCs... Meanwhile llama.cpp remains free, open-source, and easier-than-ever to run! No more ollama | 1,197 | 2025-11-19T01:26:53 | nderstand2grow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p0u8hd | false | null | t3_1p0u8hd | /r/LocalLLaMA/comments/1p0u8hd/ollamas_enshitification_has_begun_opensource_is/ | false | false | default | 1,197 | {'enabled': True, 'images': [{'id': '2zt7d6q0942g1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/2zt7d6q0942g1.png?width=108&crop=smart&auto=webp&s=9f5ca41a77c9301a55e9f52dd21a52d965d9dcfb', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/2zt7d6q0942g1.png?width=216&crop=smart&auto=webp&s=72b79446b389ea46e6ddf5793eada8302de64690', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/2zt7d6q0942g1.png?width=320&crop=smart&auto=webp&s=542183256f7b4b4551804c41c50588c3854d24b5', 'width': 320}, {'height': 343, 'url': 'https://preview.redd.it/2zt7d6q0942g1.png?width=640&crop=smart&auto=webp&s=6d69898cd41ba5897e02dd650de189c04e2b1fbb', 'width': 640}, {'height': 515, 'url': 'https://preview.redd.it/2zt7d6q0942g1.png?width=960&crop=smart&auto=webp&s=a5619dd2151ae6d4c28c161e826e5af286f298dd', 'width': 960}, {'height': 579, 'url': 'https://preview.redd.it/2zt7d6q0942g1.png?width=1080&crop=smart&auto=webp&s=76da718755edcf75dbe2672f2803aab5409c125f', 'width': 1080}], 'source': {'height': 1146, 'url': 'https://preview.redd.it/2zt7d6q0942g1.png?auto=webp&s=167a512f160161f65132f0f8b6a9e9538e0fd02b', 'width': 2136}, 'variants': {}}]} | ||
GraphScout + OrKa UI using local models to explore and score reasoning paths | 3 | Here is a short video of **GraphScout** running inside OrKa UI with local models behind it.
Workflow in the clip:
* I add a GraphScout node to a set of specialist agents
* send a question into the system
* GraphScout uses a local LLM to simulate several possible reasoning paths
* each path gets a deterministic score based on model judgment plus heuristics and cost
* only the highest scoring path is actually executed to answer the question
So you still get the “try multiple strategies” behavior of agents, but the final decision is made by a transparent scoring function that you control.
If you want to reproduce this setup on your machine:
* OrKa UI on Docker Hub: [https://hub.docker.com/r/marcosomma/orka-ui](https://hub.docker.com/r/marcosomma/orka-ui)
* Orka-ui docs: [https://github.com/marcosomma/orka-reasoning/blob/master/docs/orka-ui.md](https://github.com/marcosomma/orka-reasoning/blob/master/docs/orka-ui.md)
* OrKa reasoning repo (plug in your local models): [https://github.com/marcosomma/orka-reasoning]()
Interested in opinions from this sub on combining local LLMs with this kind of deterministic path selection. Where would you tighten or change the scoring logic? | 2025-11-19T00:57:58 | https://v.redd.it/nbk94n14442g1 | marcosomma-OrKA | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p0tlmo | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nbk94n14442g1/DASHPlaylist.mpd?a=1766105895%2CMDBiNmZlNzIxYmE0Y2VhMWEzMjZlZTQzZDMzYjQwYmVlNWZmNGE0Y2I4N2Q2ODU1MzQwYTRjMDA5Y2I3ZDk1NQ%3D%3D&v=1&f=sd', 'duration': 56, 'fallback_url': 'https://v.redd.it/nbk94n14442g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/nbk94n14442g1/HLSPlaylist.m3u8?a=1766105895%2COTk3YmVmYTQ3MzhhMGVlOWI3ODg4MzY0MDI3YTY2MzJiMjhkY2I0MmJiOTJmMDFkZjhjMzZmM2NmZmQ5MjYxMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nbk94n14442g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1p0tlmo | /r/LocalLLaMA/comments/1p0tlmo/graphscout_orka_ui_using_local_models_to_explore/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'cHFxa3JrMDQ0NDJnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cHFxa3JrMDQ0NDJnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=108&crop=smart&format=pjpg&auto=webp&s=4c81a6257af134c619d3feba0835ca8d4fcec3c3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cHFxa3JrMDQ0NDJnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=216&crop=smart&format=pjpg&auto=webp&s=1f1a32366c8864ac41baca2f3072a2067f390248', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cHFxa3JrMDQ0NDJnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=320&crop=smart&format=pjpg&auto=webp&s=83888176c8012ff45885e33598461d2786235114', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cHFxa3JrMDQ0NDJnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=640&crop=smart&format=pjpg&auto=webp&s=c580e9925baeffb348553b2241ef20f5beb8e802', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cHFxa3JrMDQ0NDJnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=960&crop=smart&format=pjpg&auto=webp&s=2a63409a5dac9e031f764a6b43b933a4084b2cd7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cHFxa3JrMDQ0NDJnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=438b8d31cead9b4c5bc6f3e247140e878d9674a6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cHFxa3JrMDQ0NDJnMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?format=pjpg&auto=webp&s=06cde132808c2a101086622d344b5ed0be636254', 'width': 1920}, 'variants': {}}]} | |
Running the latest LLMs like Granite-4.0 and Qwen3 fully on ANE (Apple NPU) | 13 | Last year, our two co-founders were invited by the Apple Data & Machine Learning Innovation (DMLI) team to share our work on on-device multimodal models for local AI agents. One of the questions that came up in that discussion was: Can the latest LLMs actually run end-to-end on the Apple Neural Engine?
After months of experimenting and building, NexaSDK now runs the latest LLMs like Granite-4.0, Qwen3, Gemma3, and Parakeet-v3, fully on ANE (Apple's NPU), powered by the NexaML engine.
For developers building local AI apps on Apple devices, this unlocks low-power, always-on, fast inference across Mac and iPhone (iOS SDK coming very soon).
Video shows performance running directly on ANE
https://reddit.com/link/1p0tko5/video/ur014yfw342g1/player
Links in comment. | 2025-11-19T00:56:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p0tko5/running_the_latest_llms_like_granite40_and_qwen3/ | Different-Effect-724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0tko5 | false | null | t3_1p0tko5 | /r/LocalLLaMA/comments/1p0tko5/running_the_latest_llms_like_granite40_and_qwen3/ | false | false | self | 13 | null |
Best VS Code Extension for using local models? | 3 | VS Code team is dragging their feet with rolling out local model (not just ollama) inference support. (Its apparently in the Insiders edition but hasnt been released to the public edition but was supposed to have months ago)
Cline has support but with 15k sys prompt it makes local inference much slower than it needs to be.
Whats a good extension that provides a chat window and agentic abilities? The llama-vscode extension does just autocomplete. | 2025-11-19T00:46:43 | https://www.reddit.com/r/LocalLLaMA/comments/1p0tcov/best_vs_code_extension_for_using_local_models/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0tcov | false | null | t3_1p0tcov | /r/LocalLLaMA/comments/1p0tcov/best_vs_code_extension_for_using_local_models/ | false | false | self | 3 | null |
Running the latest LLMs like Granite-4.0 and Qwen3 fully on ANE (Apple NPU) | 1 | [removed] | 2025-11-19T00:21:14 | https://www.reddit.com/r/LocalLLaMA/comments/1p0ss6x/running_the_latest_llms_like_granite40_and_qwen3/ | Different-Effect-724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0ss6x | false | null | t3_1p0ss6x | /r/LocalLLaMA/comments/1p0ss6x/running_the_latest_llms_like_granite40_and_qwen3/ | false | false | self | 1 | null |
I replicated Anthropic’s "Introspection" paper on DeepSeek-7B. It works. | 249 | 2025-11-19T00:09:39 | https://joshfonseca.com/blogs/introspection | Specialist_Bad_4465 | joshfonseca.com | 1970-01-01T00:00:00 | 0 | {} | 1p0sisn | false | null | t3_1p0sisn | /r/LocalLLaMA/comments/1p0sisn/i_replicated_anthropics_introspection_paper_on/ | false | false | default | 249 | null | |
Stop guessing RAG chunk sizes | 11 | Hi everyone,
Last week, I shared a small tool I built to solve a personal frustration: guessing chunk sizes for RAG pipelines.
The feedback here was incredibly helpful. Several of you pointed out that word-based chunking wasn't accurate enough for LLM context windows and that cloning a repo is annoying.
I spent the weekend fixing those issues. I just updated the project (`rag-chunk`) with:
* **True Token Chunking:** I integrated `tiktoken`, so now you can chunk documents based on exact token counts (matching OpenAI's encoding) rather than just whitespace/words.
* **Easier Install:** It's now packaged properly, so you can install it directly via pip.
* **Visuals:** Added a demo GIF in the repo so you can see the evaluation table before trying it.
The goal remains the same: a simple CLI to **measure** recall for different chunking strategies on your own Markdown files, rather than guessing.
It is 100% open-source. I'd love to know if the token-based logic works better for your use cases.
Github: [https://github.com/messkan/rag-chunk](https://github.com/messkan/rag-chunk)
[](https://www.reddit.com/submit/?source_id=t3_1p0s3cd) | 2025-11-18T23:54:29 | https://www.reddit.com/r/LocalLLaMA/comments/1p0s608/stop_guessing_rag_chunk_sizes/ | InstanceSignal5153 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0s608 | false | null | t3_1p0s608 | /r/LocalLLaMA/comments/1p0s608/stop_guessing_rag_chunk_sizes/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'crZWpAWdPxvToi_-rAlqb5oZ030QFwHp0yi15nYvB_0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/crZWpAWdPxvToi_-rAlqb5oZ030QFwHp0yi15nYvB_0.png?width=108&crop=smart&auto=webp&s=9eb1839af38cd10b99ae8115fc0e869bad73dd96', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/crZWpAWdPxvToi_-rAlqb5oZ030QFwHp0yi15nYvB_0.png?width=216&crop=smart&auto=webp&s=7fd6c8bae17f70e90267b37813a8623b2b46f7bf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/crZWpAWdPxvToi_-rAlqb5oZ030QFwHp0yi15nYvB_0.png?width=320&crop=smart&auto=webp&s=20cebdb44b95cc7049bda7bfae58454b2e378eae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/crZWpAWdPxvToi_-rAlqb5oZ030QFwHp0yi15nYvB_0.png?width=640&crop=smart&auto=webp&s=a3cc83c0b425c761834f485b87410e775a861847', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/crZWpAWdPxvToi_-rAlqb5oZ030QFwHp0yi15nYvB_0.png?width=960&crop=smart&auto=webp&s=a2791c93b2f36e4a50241d6156ac676018b62231', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/crZWpAWdPxvToi_-rAlqb5oZ030QFwHp0yi15nYvB_0.png?width=1080&crop=smart&auto=webp&s=1ad407119f294a9714db779889ca331eeecda974', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/crZWpAWdPxvToi_-rAlqb5oZ030QFwHp0yi15nYvB_0.png?auto=webp&s=59f87f2e999ea294e33ec35b92fea363487e5810', 'width': 1200}, 'variants': {}}]} |
Can Open-Source AI Read Its Own Mind? | 0 | 2025-11-18T23:41:12 | https://joshfonseca.com/blogs/introspection | Specialist_Bad_4465 | joshfonseca.com | 1970-01-01T00:00:00 | 0 | {} | 1p0ruyl | false | null | t3_1p0ruyl | /r/LocalLLaMA/comments/1p0ruyl/can_opensource_ai_read_its_own_mind/ | false | false | default | 0 | null | |
The Real Meme of 2025 | 0 | I wrote all of my MCP's by vibe coding while inebriated.
And they all work. All hail Claude Code. | 2025-11-18T23:35:42 | https://www.reddit.com/r/LocalLLaMA/comments/1p0rqc9/the_real_meme_of_2025/ | david_jackson_67 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0rqc9 | false | null | t3_1p0rqc9 | /r/LocalLLaMA/comments/1p0rqc9/the_real_meme_of_2025/ | false | false | self | 0 | null |
Apple M5 news - LLM boost & clustering | 11 | 2025-11-18T23:33:08 | https://appleinsider.com/articles/25/11/18/macos-tahoe-262-will-give-m5-macs-a-giant-machine-learning-speed-boost | Secure_Archer_1529 | appleinsider.com | 1970-01-01T00:00:00 | 0 | {} | 1p0ro7q | false | null | t3_1p0ro7q | /r/LocalLLaMA/comments/1p0ro7q/apple_m5_news_llm_boost_clustering/ | false | false | default | 11 | null | |
We've gone too far - Gemini 3 pro | 0 | 2025-11-18T23:26:06 | Shyvadi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p0riaw | false | null | t3_1p0riaw | /r/LocalLLaMA/comments/1p0riaw/weve_gone_too_far_gemini_3_pro/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'dbc4of9sn32g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/dbc4of9sn32g1.png?width=108&crop=smart&auto=webp&s=622633b44de7f60b8d5fbd1958a6acaecd4bdb3f', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/dbc4of9sn32g1.png?width=216&crop=smart&auto=webp&s=ccba65a97979eaf800c46bce2b0f3a19fba6bc4c', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/dbc4of9sn32g1.png?width=320&crop=smart&auto=webp&s=1716cfd42a7be472520e484dd25f280117ae863d', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/dbc4of9sn32g1.png?width=640&crop=smart&auto=webp&s=4c57a7d7688c182e5efbc763250bf59d083556b5', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/dbc4of9sn32g1.png?width=960&crop=smart&auto=webp&s=a48f795b0d11e23557ce968ca18ef7991d14c10b', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/dbc4of9sn32g1.png?width=1080&crop=smart&auto=webp&s=ff153c3e30f945c8d26a313dd416af841deff3da', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/dbc4of9sn32g1.png?auto=webp&s=e3faf228af1b5ffc3dc0794a205e78c804545837', 'width': 1080}, 'variants': {}}]} | ||
CodeMode vs Traditional MCP benchmark | 49 | 2025-11-18T23:13:56 | juanviera23 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p0r7uw | false | null | t3_1p0r7uw | /r/LocalLLaMA/comments/1p0r7uw/codemode_vs_traditional_mcp_benchmark/ | false | false | default | 49 | {'enabled': True, 'images': [{'id': 'js0ua9ikl32g1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/js0ua9ikl32g1.png?width=108&crop=smart&auto=webp&s=3a12443ff00ad5e434b757aa3425a477e3484fd6', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/js0ua9ikl32g1.png?width=216&crop=smart&auto=webp&s=606c285f02bb5e4bcb6018e19b35f94c08a1790a', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/js0ua9ikl32g1.png?width=320&crop=smart&auto=webp&s=94373598ad3eb92ea0b09887765b085ca5abf990', 'width': 320}, {'height': 467, 'url': 'https://preview.redd.it/js0ua9ikl32g1.png?width=640&crop=smart&auto=webp&s=66015efe5e2a45a7817dad1da12469ede8d60d0b', 'width': 640}, {'height': 701, 'url': 'https://preview.redd.it/js0ua9ikl32g1.png?width=960&crop=smart&auto=webp&s=c68a8f2c73445285221a67d3321f47a21a419905', 'width': 960}, {'height': 788, 'url': 'https://preview.redd.it/js0ua9ikl32g1.png?width=1080&crop=smart&auto=webp&s=fda7669085331910c33faff49ce25299352e76e1', 'width': 1080}], 'source': {'height': 1614, 'url': 'https://preview.redd.it/js0ua9ikl32g1.png?auto=webp&s=a41f89ae0eae63a649435a84255624ce2f5ce5bd', 'width': 2210}, 'variants': {}}]} | ||
Llama Nemoretriever Colembed: Top‑Performing Text‑Image Retrieval Model | 3 | A 1B/3B model built for text-image retrieval, hits SOTA on cross-modal benchmarks, open-source win for local llama-style setups! | 2025-11-18T23:13:04 | https://arxiv.org/abs/2507.05513 | juanviera23 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1p0r73m | false | null | t3_1p0r73m | /r/LocalLLaMA/comments/1p0r73m/llama_nemoretriever_colembed_topperforming/ | false | false | default | 3 | null |
GLM 4.6 on 128 GB RAM with llama.cpp | 130 | Recently I got my hands on a new box at work with 128 GB RAM and 32 GB VRAM (it's a semi-budget option, with 2x5070, but it performs really well). I decided I'm going to try a few of the bigger models. Obviously, a very good model to run on this is GPT-OSS-120B and it's been the default model, but I've set my eyes on the big ones. The GLM 4.6 REAP was a bit overwhelming, but then I though "what if I could get my hands on a good low quant that fits"?
So, with the help of [https://huggingface.co/AesSedai](https://huggingface.co/AesSedai) I've obtained a really nice mixed quant: [https://huggingface.co/AesSedai/GLM-4.6-GGUF/tree/main/llama.cpp/GLM-4.6-Q6\_K-IQ2\_XS-IQ2\_XS-IQ3\_S](https://huggingface.co/AesSedai/GLM-4.6-GGUF/tree/main/llama.cpp/GLM-4.6-Q6_K-IQ2_XS-IQ2_XS-IQ3_S) \- it's tuned to \*just barely\* fit in 128GB. What's surprising is how good quality it retains even at such low quant sizes - here's its analysis when I fed it the \`modeling\_kimi.py\` file from Kimi Linear: [https://gist.github.com/pwilkin/7ee5672422bd30afdb47d3898680626b](https://gist.github.com/pwilkin/7ee5672422bd30afdb47d3898680626b)
And on top of that, llama.cpp just merged the results of a few weeks of hard work of new contributor **hksdpc255** on XML tool calling, including GLM 4.6: [https://github.com/ggml-org/llama.cpp/commit/1920345c3bcec451421bb6abc4981678cc721154](https://github.com/ggml-org/llama.cpp/commit/1920345c3bcec451421bb6abc4981678cc721154)
Feel free to give it a try - on my box it's getting around 40 t/s prompt processing and about 5 t/s generation, which is not lightning fast, but still a HUGE upgrade from the 5 t/s pp and 3 t/s tg when I tried just a slightly bigger quant. | 2025-11-18T23:11:48 | https://www.reddit.com/r/LocalLLaMA/comments/1p0r5ww/glm_46_on_128_gb_ram_with_llamacpp/ | ilintar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p0r5ww | false | null | t3_1p0r5ww | /r/LocalLLaMA/comments/1p0r5ww/glm_46_on_128_gb_ram_with_llamacpp/ | false | false | self | 130 | {'enabled': False, 'images': [{'id': 'pt7n3Fg8IYhvkV-IpirlkmFh6bO5nZOE1-FkE1Sh45w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pt7n3Fg8IYhvkV-IpirlkmFh6bO5nZOE1-FkE1Sh45w.png?width=108&crop=smart&auto=webp&s=62c91456e91e6c7046705f32566d6972d48ed110', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pt7n3Fg8IYhvkV-IpirlkmFh6bO5nZOE1-FkE1Sh45w.png?width=216&crop=smart&auto=webp&s=981b6eed0da77b8320a813b7a81d0aa1aad8ccc0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pt7n3Fg8IYhvkV-IpirlkmFh6bO5nZOE1-FkE1Sh45w.png?width=320&crop=smart&auto=webp&s=7189c9157e2df6b7d47017e1074393c3b4fde5ae', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pt7n3Fg8IYhvkV-IpirlkmFh6bO5nZOE1-FkE1Sh45w.png?width=640&crop=smart&auto=webp&s=9aa62b621a29c79682e1f6f199607ad952e08b94', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pt7n3Fg8IYhvkV-IpirlkmFh6bO5nZOE1-FkE1Sh45w.png?width=960&crop=smart&auto=webp&s=ec0f0470cace681da4ee9d5296f23734da627e7a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pt7n3Fg8IYhvkV-IpirlkmFh6bO5nZOE1-FkE1Sh45w.png?width=1080&crop=smart&auto=webp&s=7609cb4a2112700b4adae82d6ba532b4d9f97bd0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pt7n3Fg8IYhvkV-IpirlkmFh6bO5nZOE1-FkE1Sh45w.png?auto=webp&s=08e7dac83bdb9ddde64f94711452bb76c179ea9a', 'width': 1200}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.