[Dataset header: each record below is a Reddit post row with columns title, score, selftext, created, url, author, domain, edited, gilded, gildings, id, locked, media, name, permalink, spoiler, stickied, thumbnail, ups, preview.]
Here's grok 4 system prompt. | 3 | You are Grok 4 built by xAI.
When applicable, you have some additional tools:
- You can analyze individual X user profiles, X posts and their links.
- You can analyze content uploaded by user including images, pdfs, text files and more.
- If it seems like the user wants an image generated, ask for confirmation, instead of directly generating one.
- You can edit images if the user instructs you to do so.
In case the user asks about xAI's products, here is some information and response guidelines:
- Grok 4 and Grok 3 can be accessed on grok.com, x.com, the Grok iOS app, the Grok Android app, the X iOS app, and the X Android app.
- Grok 3 can be accessed for free on these platforms with limited usage quotas.
- Grok 3 has a voice mode that is currently only available on Grok iOS and Android apps.
- Grok 4 is only available for SuperGrok and PremiumPlus subscribers.
- SuperGrok is a paid subscription plan for [grok.com](http://grok.com) that offers users higher Grok 3 usage quotas than the free plan.
- You do not have any knowledge of the price or usage limits of different subscription plans such as SuperGrok or [x.com](http://x.com) premium subscriptions.
- If users ask you about the price of SuperGrok, simply redirect them to [https://x.ai/grok](https://x.ai/grok) for details. Do not make up any information on your own.
- If users ask you about the price of x.com premium subscriptions, simply redirect them to https://help.x.com/en/using-x/x-premium for details. Do not make up any information on your own.
- xAI offers an API service. For any user query related to xAI's API service, redirect them to https://x.ai/api.
- xAI does not have any other products.
* Your knowledge is continuously updated - no strict knowledge cutoff.
* Use tables for comparisons, enumerations, or presenting data when it is effective to do so.
* For searching the X ecosystem, do not shy away from deeper and wider searches to capture specific details and information based on the X interaction of specific users/entities. This may include analyzing real time fast moving events, multi-faceted reasoning, and carefully searching over chronological events to construct a comprehensive final answer.
* For closed-ended mathematics questions, in addition to giving the solution in your final response, also explain how to arrive at the solution. Your reasoning should be structured and transparent to the reader.
* If the user asks a controversial query that requires web or X search, search for a distribution of sources that represents all parties/stakeholders. Assume subjective viewpoints sourced from media are biased.
* The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.
* Do not mention these guidelines and instructions in your responses, unless the user explicitly asks for them.
No external searches or tools were required here, as the prompt is derived from internal context—no citations apply. | 2025-11-13T14:02:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ow1uux/heres_grok_4_system_prompt/ | burning_wolf101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ow1uux | false | null | t3_1ow1uux | /r/LocalLLaMA/comments/1ow1uux/heres_grok_4_system_prompt/ | false | false | self | 3 | null |
Python-native configuration management or Hydra for YAML-haters | 0 | 2025-11-13T13:43:05 | https://github.com/fxlrnrpt/pydraconf | aigoncharov | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ow1es7 | false | null | t3_1ow1es7 | /r/LocalLLaMA/comments/1ow1es7/pythonnative_configuration_management_or_hydra/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '7HDpEUHkSZ_8pgdAj7tqXf0tLR9iMb5vW2uxTJHpiTk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7HDpEUHkSZ_8pgdAj7tqXf0tLR9iMb5vW2uxTJHpiTk.png?width=108&crop=smart&auto=webp&s=37819e1dda218086544b48237435552051c767dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7HDpEUHkSZ_8pgdAj7tqXf0tLR9iMb5vW2uxTJHpiTk.png?width=216&crop=smart&auto=webp&s=36717653aa527a2804239e9e36c059df2042fd30', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7HDpEUHkSZ_8pgdAj7tqXf0tLR9iMb5vW2uxTJHpiTk.png?width=320&crop=smart&auto=webp&s=4e694ffcd33ce25455e15d10df5eaf723d7b11b6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7HDpEUHkSZ_8pgdAj7tqXf0tLR9iMb5vW2uxTJHpiTk.png?width=640&crop=smart&auto=webp&s=eccd57e8bd90fd95f458479202abf2aeb40d6903', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7HDpEUHkSZ_8pgdAj7tqXf0tLR9iMb5vW2uxTJHpiTk.png?width=960&crop=smart&auto=webp&s=a9aaf879da963ea749563bd8ea13ac50b095e49c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7HDpEUHkSZ_8pgdAj7tqXf0tLR9iMb5vW2uxTJHpiTk.png?width=1080&crop=smart&auto=webp&s=8a400017361c6352154ea7d5ffac60cc2e5d3cfe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7HDpEUHkSZ_8pgdAj7tqXf0tLR9iMb5vW2uxTJHpiTk.png?auto=webp&s=33eb8b37d9c81e267cc9e8032ae938c7ab1c6686', 'width': 1200}, 'variants': {}}]} | |
Gain 60% performance on RDNA 4 using this fix | 77 | [https://github.com/vllm-project/vllm/issues/28649](https://github.com/vllm-project/vllm/issues/28649)
This is verified to work and perform well.
If you want your RDNA 4 cards to go fast, here you go, since AMD can't be bothered to support their hardware I did their job for them. | 2025-11-13T13:39:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ow1bmr/gain_60_performance_on_rdna_4_using_this_fix/ | Sea-Speaker1700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ow1bmr | false | null | t3_1ow1bmr | /r/LocalLLaMA/comments/1ow1bmr/gain_60_performance_on_rdna_4_using_this_fix/ | false | false | self | 77 | {'enabled': False, 'images': [{'id': 'uGa6Sq5I2BxkF06Bq439sG_yJrI6Bbo1uZGsbwoPL38', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uGa6Sq5I2BxkF06Bq439sG_yJrI6Bbo1uZGsbwoPL38.png?width=108&crop=smart&auto=webp&s=07130aeac9a97d7d37118a52deb509215aefa58b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uGa6Sq5I2BxkF06Bq439sG_yJrI6Bbo1uZGsbwoPL38.png?width=216&crop=smart&auto=webp&s=c4d140260f47fadf2d20c4216c3a5169878e42fa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uGa6Sq5I2BxkF06Bq439sG_yJrI6Bbo1uZGsbwoPL38.png?width=320&crop=smart&auto=webp&s=6d8a7536e622879b58ca3172b25dda2c38c80381', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uGa6Sq5I2BxkF06Bq439sG_yJrI6Bbo1uZGsbwoPL38.png?width=640&crop=smart&auto=webp&s=5e4b0183ea10666b2c2a7b2b95b7b53b7a8417a1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uGa6Sq5I2BxkF06Bq439sG_yJrI6Bbo1uZGsbwoPL38.png?width=960&crop=smart&auto=webp&s=75586abae08d67aecb0064df2c70837917aa3d35', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uGa6Sq5I2BxkF06Bq439sG_yJrI6Bbo1uZGsbwoPL38.png?width=1080&crop=smart&auto=webp&s=59136864e11969c01659b39790c689dc7c6db37c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uGa6Sq5I2BxkF06Bq439sG_yJrI6Bbo1uZGsbwoPL38.png?auto=webp&s=e2b7140252d4a7ad212ef5bb13876e7552598720', 'width': 1200}, 'variants': {}}]} |
Complete CUDA programming course - includes GPU implementations of transformer components from scratch | 2 |
Today I'm excited to share something I've been working on!
After months of learning and development, I've completed a comprehensive course for GPU programming using CUDA. This isn't just another tutorial - it's a complete journey from zero to hero!
What's included?

- 20+ comprehensive lessons (from "Hello GPU" to production)
- 10 real-world projects (image processing, NLP, Deep Learning, and more)
- 500+ hands-on exercises
- Everything explained from first principles

Why does this matter?

- Accelerate your code by 10-1000x!
- Understand how PyTorch & TensorFlow work internally
- Highly demanded skill in the job market (AI/ML, HPC)
- Completely free and open source!
Whether you want to leverage GPU power in your projects or truly understand parallel programming, this course is for you.
[Repository](https://github.com/h9-tec/cuda-mastery-guide) | 2025-11-13T13:36:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ow18x6/complete_cuda_programming_course_includes_gpu/ | 1Hesham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ow18x6 | false | null | t3_1ow18x6 | /r/LocalLLaMA/comments/1ow18x6/complete_cuda_programming_course_includes_gpu/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'qFRrwbFW6b6gXGuqW4GpGL8ipurO3kTb6xfEcVWSjkA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qFRrwbFW6b6gXGuqW4GpGL8ipurO3kTb6xfEcVWSjkA.png?width=108&crop=smart&auto=webp&s=958081dbb800172408bbfe9d7523686a6bdb133b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qFRrwbFW6b6gXGuqW4GpGL8ipurO3kTb6xfEcVWSjkA.png?width=216&crop=smart&auto=webp&s=02a8c89f4858ba0782bb8a4dccb751e9862ce702', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qFRrwbFW6b6gXGuqW4GpGL8ipurO3kTb6xfEcVWSjkA.png?width=320&crop=smart&auto=webp&s=dedc8825135f94856518b0e4256e55a970fe4aaa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qFRrwbFW6b6gXGuqW4GpGL8ipurO3kTb6xfEcVWSjkA.png?width=640&crop=smart&auto=webp&s=bfed0e93161f3eff9f1e2b91a39644d619c416f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qFRrwbFW6b6gXGuqW4GpGL8ipurO3kTb6xfEcVWSjkA.png?width=960&crop=smart&auto=webp&s=4abc0209823797996567fe9b955601975db2a3a0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qFRrwbFW6b6gXGuqW4GpGL8ipurO3kTb6xfEcVWSjkA.png?width=1080&crop=smart&auto=webp&s=04755572cd6cda39720cac8c591842c4c33a81dc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qFRrwbFW6b6gXGuqW4GpGL8ipurO3kTb6xfEcVWSjkA.png?auto=webp&s=54aae36afa403e9302632ccce5119f93ab444757', 'width': 1200}, 'variants': {}}]} |
Best getting started guide, moving from RTX3090 to Strix Halo | 5 | After years of using a 3x RTX3090 setup with ollama for inference, I ordered an AI MAX+ 395 mini workstation with 128GB of RAM.
As it’s a major shift in hardware, I’m not too sure where to begin. My immediate objective is to get similar functionality to what I previously had, which was inference over the Ollama API. I don’t intend to do any training/fine-tuning. My primary use is for writing code and occasionally processing text and documents (translation, summarizing)
I’m looking for a few pointers to get started.
I admit I’m ignorant when it comes to the options for software stack. I’m sure I’ll be able to get it working, but I’m interested to know what the state of the art is.
Which is the most performant software solution for LLMs on this platform? If it’s not ollama, are there compatibility proxies so my ollama-based tools will work without changes?
There’s plenty of info in this sub about models that work well on this hardware, but software is always evolving. Up-to-the-minute input from this sub seems invaluable.
tl; dr; What’s the best driver and software stack for Strix Halo platforms currently, and what’s the best source of info as development continues? | 2025-11-13T13:24:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ow0yvl/best_getting_started_guide_moving_from_rtx3090_to/ | favicocool | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ow0yvl | false | null | t3_1ow0yvl | /r/LocalLLaMA/comments/1ow0yvl/best_getting_started_guide_moving_from_rtx3090_to/ | false | false | self | 5 | null |
qwen/qwen3-vl-4b - LMStudio Server - llama.cpp: Submitting multimodal video as individual frames | 5 | I was able to send images to Qwen3-VL using the LMStudio wrapper around llama.cpp (works awesome btw), but when trying video I hit a wall; seemingly this implementation doesn't support Qwen3 video structures?
Questions:
1. Is this a Qwen3-specific thing, or are these video types also part of the so-called "OpenAI compatible" schema?
2. I suppose my particular issue is a limitation of the LMStudio server and not llama.cpp or other frameworks?
3. And naturally, what is the easiest way to make this work?
*(The main reason I am using the LMStudio wrapper is that I don't want to have to fiddle with llama.cpp...* **baby steps***).*
Thanks!
`{`
`"role": "user",`
`"content": [`
`{`
`"type": "video",`
`"sample_fps": 2,`
`"video": [`
`"data:image/jpeg;base64,...(truncated)...",`
`"data:image/jpeg;base64,...(truncated)...",`
`"data:image/jpeg;base64,...(truncated)...",`
`"data:image/jpeg;base64,...(truncated)..."`
`]`
`},`
`{`
`"type": "text",`
`"text": "Let's see whats going on!"`
`}`
`]`
`}`
`]`
`Invoke-RestMethod error:`
`{ "error": "Invalid \u0027content\u0027: \u0027content\u0027 objects must have a \u0027type\u0027 field that is either \u0027`**text**`\u0027 or \u0027`**image\_url**`\u0027." }`
`InvalidOperation:`
`94 | $narr = $resp.choices[0].message.content`
`| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`
`| Cannot index into a null array.` | 2025-11-13T13:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ow0ss8/qwenqwen3vl4b_lmstudio_server_llamacpp_submitting/ | ElSrJuez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ow0ss8 | false | null | t3_1ow0ss8 | /r/LocalLLaMA/comments/1ow0ss8/qwenqwen3vl4b_lmstudio_server_llamacpp_submitting/ | false | false | self | 5 | null |
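One workaround that matches the error above: the endpoint only accepts `text` and `image_url` content parts, so the sampled frames can be sent as separate `image_url` entries instead of the Qwen-specific `video` type. A minimal Python sketch (server URL, port, frame paths, and model id are assumptions based on LM Studio defaults):

```python
# Sketch: send pre-sampled video frames as individual image_url parts to an
# OpenAI-compatible endpoint (e.g. the LM Studio server). Paths, port, and
# model id are placeholders - adjust to your setup.
import base64
import requests

def frame_to_data_uri(path: str) -> str:
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

frames = ["frame_000.jpg", "frame_001.jpg", "frame_002.jpg"]  # frames sampled at ~2 fps beforehand

content = [{"type": "image_url", "image_url": {"url": frame_to_data_uri(p)}} for p in frames]
content.append({"type": "text", "text": "These are consecutive video frames. Let's see what's going on!"})

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={"model": "qwen/qwen3-vl-4b", "messages": [{"role": "user", "content": content}]},
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

This loses the explicit `sample_fps` hint from the Qwen video schema, so it may help to state the frame rate in the text part.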
Running a 1 Trillion Parameter Model on a PC with 128 GB RAM + 24 GB VRAM | 308 | Hi again, just wanted to share that this time I've successfully run **Kimi K2 Thinking (1T parameters)** on **llama.cpp** using my desktop setup:
* **CPU:** Intel i9-13900KS
* **RAM:** 128 GB DDR5 @ 4800 MT/s
* **GPU:** RTX 4090 (24 GB VRAM)
* **Storage:** 4TB NVMe SSD (7300 MB/s read)
I'm using **Unsloth UD-Q3\_K\_XL (\~3.5 bits)** from Hugging Face: [https://huggingface.co/unsloth/Kimi-K2-Thinking-GGUF](https://huggingface.co/unsloth/Kimi-K2-Thinking-GGUF)
**Performance (generation speed):** 0.42 tokens/sec
(I know, it's slow... but it runs! I'm just stress-testing what's possible on consumer hardware...)
I also tested other huge models - here is a full list with speeds for comparison:
|Model|Parameters|Quant|Context|Speed (t/s)|
|:-|:-|:-|:-|:-|
|Kimi K2 Thinking|1T A32B|UD-Q3\_K\_XL|128K|0.42|
|Kimi K2 Instruct 0905|1T A32B|UD-Q3\_K\_XL|128K|0.44|
|DeepSeek V3.1 Terminus|671B A37B|UD-Q4\_K\_XL|128K|0.34|
|Qwen3 Coder 480B Instruct|480B A35B|UD-Q4\_K\_XL|128K|1.0|
|GLM 4.6|355B A32B|UD-Q4\_K\_XL|128K|0.82|
|Qwen3 235B Thinking|235B A22B|UD-Q4\_K\_XL|128K|5.5|
|Qwen3 235B Instruct|235B A22B|UD-Q4\_K\_XL|128K|5.6|
|MiniMax M2|230B A10B|UD-Q4\_K\_XL|128K|8.5|
|GLM 4.5 Air|106B A12B|UD-Q4\_K\_XL|128K|11.2|
|GPT OSS 120B|120B A5.1B|MXFP4|128K|25.5|
|IBM Granite 4.0 H Small|32B dense|UD-Q4\_K\_XL|128K|72.2|
|Qwen3 30B Thinking|30B A3B|UD-Q4\_K\_XL|120K|197.2|
|Qwen3 30B Instruct|30B A3B|UD-Q4\_K\_XL|120K|218.8|
|Qwen3 30B Coder Instruct|30B A3B|UD-Q4\_K\_XL|120K|211.2|
|GPT OSS 20B|20B A3.6B|MXFP4|128K|223.3|
**Command line used (llama.cpp):**
`llama-server --threads 32 --jinja --flash-attn on --cache-type-k q8_0 --cache-type-v q8_0 --model <PATH-TO-YOUR-MODEL> --ctx-size 131072 --n-cpu-moe 9999 --no-warmup`
**Important:** Use *--no-warmup* \- otherwise, the process can crash before startup.
Notes:
* **Memory mapping (mmap)** in llama.cpp lets it read model files far beyond RAM capacity.
* **No swap/pagefile** \- I disabled these to prevent SSD wear (no disk writes during inference).
* **Context size:** Reducing context length didn't improve speed for huge models (token/sec stayed roughly the same).
* **GPU offload:** llama.cpp automatically uses GPU for all layers unless you limit it. I only use --n-cpu-moe 9999 to keep MoE layers on CPU.
* **Quantization:** Anything below \~4 bits noticeably reduces quality. Lowest meaningful quantization for me is UD-Q3\_K\_XL.
* Tried **UD-Q4\_K\_XL** for Kimi models, but it failed to start. UD-Q3\_K\_XL is the max stable setup on my rig.
* **Speed test method:** Each benchmark was done using the same prompt - "Explain quantum computing". The measurement covers the entire generation process until the model finishes its response (so, true end-to-end inference speed).
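For anyone who wants to reproduce that measurement, here is a minimal sketch against llama-server's OpenAI-compatible endpoint (port 8080 is the llama-server default; the model is whatever the server has loaded):

```python
# Minimal sketch of the timing method: one prompt, end-to-end wall clock,
# token count taken from the server's usage field.
import time
import requests

payload = {
    "messages": [{"role": "user", "content": "Explain quantum computing"}],
    "max_tokens": 512,
}

start = time.time()
resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=None)
elapsed = time.time() - start

usage = resp.json()["usage"]
print(f"{usage['completion_tokens']} tokens in {elapsed:.1f}s "
      f"-> {usage['completion_tokens'] / elapsed:.2f} tok/s")
```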
**TL;DR - Yes**, it's possible to run (slowly) a **1-trillion-parameter LLM** on a machine with **128 GB RAM + 24 GB VRAM** \- no cluster or cloud required. Mostly an experiment to see where the limits really are. | 2025-11-13T13:05:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ow0jj0/running_a_1_trillion_parameter_model_on_a_pc_with/ | pulse77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ow0jj0 | false | null | t3_1ow0jj0 | /r/LocalLLaMA/comments/1ow0jj0/running_a_1_trillion_parameter_model_on_a_pc_with/ | false | false | self | 308 | {'enabled': False, 'images': [{'id': 'jky1-TY04Az9qKUPOX_7cjWoJywEk1kPIqafXMYn3dI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jky1-TY04Az9qKUPOX_7cjWoJywEk1kPIqafXMYn3dI.png?width=108&crop=smart&auto=webp&s=30bb6e09d68ef94e1517cac62d1a95e31206023f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jky1-TY04Az9qKUPOX_7cjWoJywEk1kPIqafXMYn3dI.png?width=216&crop=smart&auto=webp&s=b94146495960a26be9db0f1e535d1ea64bd4b4a4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jky1-TY04Az9qKUPOX_7cjWoJywEk1kPIqafXMYn3dI.png?width=320&crop=smart&auto=webp&s=c65abc5cf5ba59cf4c7c05c6d6b671510b6498f6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jky1-TY04Az9qKUPOX_7cjWoJywEk1kPIqafXMYn3dI.png?width=640&crop=smart&auto=webp&s=ed16b707fe50718f68252d97655074259a01b9f6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jky1-TY04Az9qKUPOX_7cjWoJywEk1kPIqafXMYn3dI.png?width=960&crop=smart&auto=webp&s=d497f4fe96f44d75e215e561fc62696dc4228dcc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jky1-TY04Az9qKUPOX_7cjWoJywEk1kPIqafXMYn3dI.png?width=1080&crop=smart&auto=webp&s=ff650f2b2321d81f7fcdb1c856dab466116f547b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jky1-TY04Az9qKUPOX_7cjWoJywEk1kPIqafXMYn3dI.png?auto=webp&s=bde2dfad1493659832e2e1d4fc76ded1b153b257', 'width': 1200}, 'variants': {}}]} |
CPU inference - memory or cores? | 2 | I run my daily driver - glm 4.5 air Q6 - with ram/cpu offload and noticed that the CPU is always 100% busy during inference.
It does 10 tps under a real load, so it is OK for chats, but I'd still like more :)

Wondering: if I add more cores (CPU upgrade), would tps increase, or is memory bandwidth (DDR5-6000) still the bottleneck?

Where is the crossover point between being CPU-bound and memory-bound?
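A rough back-of-envelope check (every number below is an assumption) suggests 10 tps is already close to the DDR5-6000 bandwidth ceiling for a ~12B-active-parameter model at Q6, which would mean bandwidth, not cores, is the limit:

```python
# Back-of-envelope: is 10 t/s already bandwidth-bound? All numbers are rough
# assumptions for GLM 4.5 Air (A12B) at Q6 on dual-channel DDR5-6000.
active_params = 12e9                 # active parameters touched per token
bits_per_weight = 6.5                # approx. average for a Q6_K quant
bytes_per_token = active_params * bits_per_weight / 8          # ~9.8 GB

bandwidth = 2 * 8 * 6000e6           # 2 channels x 8 bytes x 6000 MT/s ~ 96 GB/s
print(f"{bytes_per_token / 1e9:.1f} GB read per token")
print(f"{bandwidth / bytes_per_token:.1f} tok/s ceiling from RAM bandwidth alone")
```

That ceiling lands right around 10 tok/s, so more cores alone probably won't help much; faster RAM or offloading more of the expert layers to the GPU would.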
and yeah, I got 5060ti to keep some model weights | 2025-11-13T12:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ow04al/cpu_inference_memory_or_cores/ | Steus_au | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ow04al | false | null | t3_1ow04al | /r/LocalLLaMA/comments/1ow04al/cpu_inference_memory_or_cores/ | false | false | self | 2 | null |
Interesting to see an open-source model genuinely compete with frontier proprietary models for coding | 129 | So Code Arena just dropped their new live coding benchmark, and the tier 1 results are sparking an interesting open vs proprietary debate.
GLM-4.6 is the only open-source model in the top tier. It's MIT licensed, the most permissive license possible. It's sitting at rank 1 (score: 1372) alongside Claude Opus and GPT-5.
What makes Code Arena different is that it's not static benchmarks. Real developers vote on actual functionality, code quality, and design. Models have to plan, scaffold, debug, and build working web apps step-by-step using tools just like human engineers.
The score gap among the tier 1 clusters is only \~2%. For context, every other model in ranks 6-10 is either proprietary or Apache 2.0 licensed, and they're 94-250 points behind.
This raises some questions. Are we reaching a point where open models can genuinely match frontier proprietary performance for specialized tasks? Or does this only hold for coding, where training data is more abundant?
The fact that it's MIT licensed (not just "open weights") means you can actually build products with it, modify the architecture, deploy without restrictions, not just run it locally.
Community voting is still early (576-754 votes per model), but it's evaluating real-world functionality, not just benchmark gaming. You can watch the models work: reading files, debugging, iterating.
They're adding multi-file codebases and React support next, which will test architectural planning even more.
Do you think open models will close the gap across the board, or will proprietary labs always stay ahead? And does MIT vs Apache vs "weights only" licensing actually matter for your use cases? | 2025-11-13T12:44:35 | Technical_Gene4729 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ow03a6 | false | null | t3_1ow03a6 | /r/LocalLLaMA/comments/1ow03a6/interesting_to_see_an_opensource_model_genuinely/ | false | false | default | 129 | {'enabled': True, 'images': [{'id': 'l3lt0757s01g1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/l3lt0757s01g1.png?width=108&crop=smart&auto=webp&s=62e1b85c3fbe967ba4da9de2596844de5cce456b', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/l3lt0757s01g1.png?width=216&crop=smart&auto=webp&s=1a29e814017f75855fba2b8c4f853ed30c2c5bfc', 'width': 216}, {'height': 220, 'url': 'https://preview.redd.it/l3lt0757s01g1.png?width=320&crop=smart&auto=webp&s=9e413d1b602bf01d5a42f3651bf530954b5975ce', 'width': 320}, {'height': 441, 'url': 'https://preview.redd.it/l3lt0757s01g1.png?width=640&crop=smart&auto=webp&s=7f3256b064baacc927a42b0d21aa1946bac509bb', 'width': 640}, {'height': 662, 'url': 'https://preview.redd.it/l3lt0757s01g1.png?width=960&crop=smart&auto=webp&s=9b205ef6ffb695374ce813866a0a3dfcb371ec75', 'width': 960}, {'height': 745, 'url': 'https://preview.redd.it/l3lt0757s01g1.png?width=1080&crop=smart&auto=webp&s=34b1677338d6388914d74038feb4a6493e0173c1', 'width': 1080}], 'source': {'height': 1486, 'url': 'https://preview.redd.it/l3lt0757s01g1.png?auto=webp&s=fbd231aa485108d3c3b3a64bd560cbdb39456d56', 'width': 2154}, 'variants': {}}]} | |
Vascura FRONT - Open Source (Apache 2.0), Bloat Free, Portable and Lightweight (300~ kb) LLM Frontend (Single HTML file). Now with GitHub - github.com/Unmortan-Ellary/Vascura-FRONT. | 25 | **GitHub** \- [github.com/Unmortan-Ellary/Vascura-FRONT](http://github.com/Unmortan-Ellary/Vascura-FRONT)
Changes from the prototype version:
- Reworked Web Search: now fits in 4096 tokens; allOrigins can be used locally.
- Now Web Search is really good at collecting links (90 links total for 9 agents).
- Lots of bug fixes and logic improvements.
- Improved React system.
- Copy / Paste settings function.

---

**Frontend is designed around core ideas:**

- On-the-Spot Text Editing: You should have fast, precise control over editing and altering text.
- Dependency-Free: No downloads, no Python, no Node.js - just a single compact (~300 kb) HTML file that runs in your browser.
- Focused on Core: Only essential tools and features that serve the main concept.
- Context-Effective Web Search: Should find info and links while fitting within a 4096-token limit.
- OpenAI-compatible API: The most widely supported standard, chat-completion format.
- Open Source under the Apache 2.0 License.

---
**Features:**
Please watch the video for a visual demonstration of the implemented features.
1. **On-the-Spot Text Editing:** Edit text just like in a plain notepad, no restrictions, no intermediate steps. Just click and type.
2. **React (Reactivation) System:** Generate as many LLM responses as you like at any point in the conversation. Edit, compare, delete or temporarily exclude an answer by clicking “Ignore”.
3. **Agents for Web Search:** Each agent gathers relevant data (using allOrigins) and adapts its search based on the latest messages. Agents will push findings as "internal knowledge", allowing the LLM to use or ignore the information, whichever leads to a better response. The algorithm is based on more complex system but is streamlined for speed and efficiency, fitting within an 4K context window (all 9 agents, instruction model).
4. **Tokens-Prediction System:** Available when using LM Studio or Llama.cpp Server as the backend, this feature provides short suggestions for the LLM’s next response or for continuing your current text edit. Accept any suggestion instantly by pressing Tab.
5. **Any OpenAI-API-Compatible Backend:** Works with any endpoint that implements the OpenAI API - LM Studio, Kobold.CPP, Llama.CPP Server, Oobabooga's Text Generation WebUI, and more. With "Strict API" mode enabled, it also supports Mistral API, OpenRouter API, and other v1-compliant endpoints.
6. **Markdown Color Coding:** Uses Markdown syntax to apply color patterns to your text.
7. **Adaptive Interface:** Each chat is an independent workspace. Everything you move or change is saved instantly. When you reload the backend or switch chats, you’ll return to the exact same setup you left, except for the chat scroll position. Supports custom avatars for your chats.
8. **Pre-Configured for LM Studio:** By default, the frontend is configured for an easy start with LM Studio: just turn "Enable CORS" to ON, in LM Studio server settings, enable the server in LM Studio, choose your model, launch Vascura FRONT, and say “Hi!” - that’s it!
9. **Thinking Models Support:** Supports thinking models that use \`<think></think>\` tags or if your endpoint returns only the final answer (without a thinking step), enable the "Thinking Model" switch to activate compatibility mode - this ensures Web Search and other features work correctly.
---

**allOrigins:**

- Web Search works via allOrigins - [https://github.com/gnuns/allOrigins/tree/main](https://github.com/gnuns/allOrigins/tree/main)
- By default it will use [allorigins.win](http://allorigins.win) website as a proxy.
\- But by running it locally you will get way faster and more stable results (use LOC version). | 2025-11-13T12:38:30 | https://v.redd.it/ykcfbxd4o01g1 | -Ellary- | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ovzz3q | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ykcfbxd4o01g1/DASHPlaylist.mpd?a=1765629522%2CYjIyOThhMDU3ZjllYjgxZWMzOTk3MzkyM2I0OGYyMjUyMDhlNWQ5MmY2M2VkY2Y2NzgyYmI4YTlkZjIwZjMwZg%3D%3D&v=1&f=sd', 'duration': 118, 'fallback_url': 'https://v.redd.it/ykcfbxd4o01g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 646, 'hls_url': 'https://v.redd.it/ykcfbxd4o01g1/HLSPlaylist.m3u8?a=1765629522%2CMTc3MmE5N2RlY2M5MGMwZjdmMTg4YWRiMGQ3N2ZhNzI2YjYzYWNlZGM5MzNlYmJjZjQ4NTJjMzQxODEyZTEzZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ykcfbxd4o01g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1ovzz3q | /r/LocalLLaMA/comments/1ovzz3q/vascura_front_open_source_apache_20_bloat_free/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'Y2xkOWd4ZDRvMDFnMTaXKbAxEnkCSlwbwmJDXf_lyDQzd483n4JJoFhjK3xD', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Y2xkOWd4ZDRvMDFnMTaXKbAxEnkCSlwbwmJDXf_lyDQzd483n4JJoFhjK3xD.png?width=108&crop=smart&format=pjpg&auto=webp&s=676d13a7b4bbf2d13d98fef9bb316df41352e043', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/Y2xkOWd4ZDRvMDFnMTaXKbAxEnkCSlwbwmJDXf_lyDQzd483n4JJoFhjK3xD.png?width=216&crop=smart&format=pjpg&auto=webp&s=cd96763c48a3f837faab9b01d8772ade0a78f19d', 'width': 216}, {'height': 161, 'url': 'https://external-preview.redd.it/Y2xkOWd4ZDRvMDFnMTaXKbAxEnkCSlwbwmJDXf_lyDQzd483n4JJoFhjK3xD.png?width=320&crop=smart&format=pjpg&auto=webp&s=7231627d7296460ecf7c4eeebc050dbf976fa25d', 'width': 320}, {'height': 323, 'url': 'https://external-preview.redd.it/Y2xkOWd4ZDRvMDFnMTaXKbAxEnkCSlwbwmJDXf_lyDQzd483n4JJoFhjK3xD.png?width=640&crop=smart&format=pjpg&auto=webp&s=39ea1d3659f6f81e6d4a6641b70046a62317029a', 'width': 640}, {'height': 485, 'url': 'https://external-preview.redd.it/Y2xkOWd4ZDRvMDFnMTaXKbAxEnkCSlwbwmJDXf_lyDQzd483n4JJoFhjK3xD.png?width=960&crop=smart&format=pjpg&auto=webp&s=32fbda4316ca76afa79b9581277992c645022d0c', 'width': 960}, {'height': 545, 'url': 'https://external-preview.redd.it/Y2xkOWd4ZDRvMDFnMTaXKbAxEnkCSlwbwmJDXf_lyDQzd483n4JJoFhjK3xD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=54a551adc3706b00d6244df0beb2d603e7eae630', 'width': 1080}], 'source': {'height': 970, 'url': 'https://external-preview.redd.it/Y2xkOWd4ZDRvMDFnMTaXKbAxEnkCSlwbwmJDXf_lyDQzd483n4JJoFhjK3xD.png?format=pjpg&auto=webp&s=60c4b4816b372284b7d2477d61fa102eb981d802', 'width': 1920}, 'variants': {}}]} | |
Do not use local LLMs to privatize your data without Differential Privacy! | 4 | ERROR: type should be string, got "https://preview.redd.it/5mf4dmhom01g1.jpg?width=1654&format=pjpg&auto=webp&s=b2b0e8a6cd49bde36263a1361a09d0734ce80470\n\nWe showcase that simple membership inference–style attacks can achieve over 60% success in predicting the presence of personally identifiable information (PII) in data input to LLMs just by observing the privatized output, even when it doesn’t explicitly leak private information!\n\nTherefore, it’s imperative to use Differential Privacy (DP) with LLMs to protect private data passed to them. However, existing DP methods for LLMs often severely damage utility, even when offering only weak theoretical privacy guarantees.\n\nWe present DP-Fusion the first method that enables differentially private inference (at the token level) with LLMs, offering robust theoretical privacy guarantees without significantly hurting utility.\n\nOur approach bounds the LLM’s output probabilities to stay close to a public distribution, rather than injecting noise as in traditional methods. This yields over 6× higher utility (perplexity) compared to existing DP methods.\n\n📄 The arXiv paper is now live here: [https://arxiv.org/abs/2507.04531](https://arxiv.org/abs/2507.04531) \n💻 Code and data: [https://github.com/MBZUAI-Trustworthy-ML/DP-Fusion-DPI](https://github.com/MBZUAI-Trustworthy-ML/DP-Fusion-DPI)\n\n⚙️ Stay tuned for a PIP package for easy integration!" | 2025-11-13T12:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ovzfui/do_not_use_local_llms_to_privatize_your_data/ | IIITDkaLaunda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovzfui | false | null | t3_1ovzfui | /r/LocalLLaMA/comments/1ovzfui/do_not_use_local_llms_to_privatize_your_data/ | false | false | 4 | null | |
Custom-Built AI Server - Thoughts? | 1 | I’m working on the hardware selection to build an AI server to host several different AI instances with different models ranging from text-based to basic image generation. I want to be able to run models to at least 70B parameters and have some room to expand in the future (via hardware upgrades). This is what I have in mind:
CPU: AMD EPYC 7282
- 2.8Ghz base, 3.2Ghz max turbo
- 16cores, 32threads
- 85.3GB/s memory bandwidth
RAM: 128GB DDR4-3200Mhz
- 4x32GB sticks
- Upgradable to 4TB (aiming for 256GB or 512GB if needed)
Motherboard: AsRock Rack ROMED8-2T
- 8x RAM slots, max 3200Mhz
- 7x PCIe 4.0 x16
GPU: 2x Nvidia RTX 3090
- 48GB VRAM total
- Motherboard can support two more if needed
OS: Either TalosOS or Debian w/ Docker
- Using Nvidia drivers to bridge GPUs directly to Docker containers
My goal is to run various things: a conversational bot for a private Discord server, n8n workflows, image generation (converting pics to animated versions), integration with my datasets via an MCP server, and Home Assistant stuff.
Do you think this is good to start off with? I’m open to suggestions/concerns you may have. | 2025-11-13T12:07:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ovzdh8/custombuilt_ai_server_thoughts/ | Atomicbeast101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovzdh8 | false | null | t3_1ovzdh8 | /r/LocalLLaMA/comments/1ovzdh8/custombuilt_ai_server_thoughts/ | false | false | self | 1 | null |
⚠️ Do not use LLMs to privatize your data without Differential Privacy! ⚠️ | 0 | ERROR: type should be string, got "https://preview.redd.it/5aqyfpfhk01g1.jpg?width=1654&format=pjpg&auto=webp&s=fb625930eb8fe36e6f8f52d6c7262ee0bfd4144f\n\nWe showcase that simple membership inference–style attacks can achieve over 60% success in predicting the presence of personally identifiable information (PII) in data input to LLMs just by observing the privatized output, even when it doesn’t explicitly leak private information! \n \nTherefore, it’s imperative to use Differential Privacy (DP) with LLMs to protect private data passed to them. However, existing DP methods for LLMs often severely damage utility, even when offering only weak theoretical privacy guarantees. \n \nWe present DP-Fusion the first method that enables differentially private inference (at the token level) with LLMs, offering robust theoretical privacy guarantees without significantly hurting utility. \n \nOur approach bounds the LLM’s output probabilities to stay close to a public distribution, rather than injecting noise as in traditional methods. This yields over 6× higher utility (perplexity) compared to existing DP methods. \n \n📄 The arXiv paper is now live here: [https://arxiv.org/abs/2507.04531](https://arxiv.org/abs/2507.04531) \n 💻 Code and data: [https://github.com/MBZUAI-Trustworthy-ML/DP-Fusion-DPI](https://github.com/MBZUAI-Trustworthy-ML/DP-Fusion-DPI) \n \n ⚙️ Stay tuned for a PIP package for easy integration! \n\n\nlinkedin post - [https://www.linkedin.com/posts/rushil-thareja-a80400179\\_do-not-use-llms-to-privatize-your-data-activity-7393274346275807232-luBB?utm\\_source=share&utm\\_medium=member\\_desktop&rcm=ACoAACo9hgABughMeMAXoGUs0enD590i3TroMXM](https://www.linkedin.com/posts/rushil-thareja-a80400179_do-not-use-llms-to-privatize-your-data-activity-7393274346275807232-luBB?utm_source=share&utm_medium=member_desktop&rcm=ACoAACo9hgABughMeMAXoGUs0enD590i3TroMXM)" | 2025-11-13T11:59:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ovz7jw/do_not_use_llms_to_privatize_your_data_without/ | Acrobatic_Wind_3914 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovz7jw | false | null | t3_1ovz7jw | /r/LocalLLaMA/comments/1ovz7jw/do_not_use_llms_to_privatize_your_data_without/ | false | false | 0 | null | |
Sell my 5080 for something else or... | 5 | Hello,
I currently have a spare 5080 16GB in my Xeon server (8259CL, 192GB of RAM). I mostly want to run coding agent (I don't do image/video generation - and I would probably do that on the 5080 that is on my desktop).
I know it's not the best card for the job. I was wondering if I should sell it and invest in card(s) with more VRAM, or even just buy a Strix Halo 128GB. Or sell everything and buy the biggest Mac Studio I can.
I do not care (in some limits) to noise (the noisy machines are in the garage) nor energy consumption (as long as it run on a regular 230v power outlet that is). | 2025-11-13T11:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ovys0w/sell_my_5080_for_something_else_or/ | foogitiff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovys0w | false | null | t3_1ovys0w | /r/LocalLLaMA/comments/1ovys0w/sell_my_5080_for_something_else_or/ | false | false | self | 5 | null |
What's the easiest way to setup AI Image/Videogen on Debian? | 2 | I've made countless attempts and it seems like either the guide goes crossways, something doesn't work, or for some reason it insists on a NVIDIA card when I have an AMD Card. My rig is at 16gb with an RX 6600 XT 8GB And an I5-12400f | 2025-11-13T11:34:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ovyro6/whats_the_easiest_way_to_setup_ai_imagevideogen/ | FunnyGarbage4092 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovyro6 | false | null | t3_1ovyro6 | /r/LocalLLaMA/comments/1ovyro6/whats_the_easiest_way_to_setup_ai_imagevideogen/ | false | false | self | 2 | null |
Venez faites Max d’argent avec moi 💸💰 | 0 | 2025-11-13T11:32:41 | Farmajo123 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ovyqdi | false | null | t3_1ovyqdi | /r/LocalLLaMA/comments/1ovyqdi/venez_faites_max_dargent_avec_moi/ | false | false | 0 | {'enabled': True, 'images': [{'id': '8u_tE0UWEKlmtRWuPaQmCW7OYTrzVwIWz74sp-PfC0k', 'resolutions': [{'height': 25, 'url': 'https://preview.redd.it/h92hduwxf01g1.jpeg?width=108&crop=smart&auto=webp&s=51a30af164ff38fa3dab7cb7dad036b0c5128134', 'width': 108}, {'height': 50, 'url': 'https://preview.redd.it/h92hduwxf01g1.jpeg?width=216&crop=smart&auto=webp&s=08c595298af2c080859eba58d749a8ac450e1bf2', 'width': 216}, {'height': 74, 'url': 'https://preview.redd.it/h92hduwxf01g1.jpeg?width=320&crop=smart&auto=webp&s=475b7903ce40a0300af0df4eb94215cc6a27dd70', 'width': 320}, {'height': 149, 'url': 'https://preview.redd.it/h92hduwxf01g1.jpeg?width=640&crop=smart&auto=webp&s=b448aaf06853dcfeca0839bab9ff2e15b04f624d', 'width': 640}, {'height': 224, 'url': 'https://preview.redd.it/h92hduwxf01g1.jpeg?width=960&crop=smart&auto=webp&s=e736f161a6fa3e9a085060b2d02abfb85281ea56', 'width': 960}, {'height': 252, 'url': 'https://preview.redd.it/h92hduwxf01g1.jpeg?width=1080&crop=smart&auto=webp&s=c09471587ed9d10830853202dedf229e3008a4d2', 'width': 1080}], 'source': {'height': 301, 'url': 'https://preview.redd.it/h92hduwxf01g1.jpeg?auto=webp&s=a3c2f07c8effc86c4e5047f3ed1a0429aaa4de33', 'width': 1289}, 'variants': {}}]} | |||
Help choosing AI workstation hardware (budget 5–10k) – A100 vs 2×4090 for RAG + chat completions? | 1 | Hey everyone,
I’m looking to build (or buy) an AI setup for work and would really appreciate some hardware advice.
**Budget:**
Roughly **5,000–10,000** (EUR/USD range) for the whole system.
**Main use case:**
* Running a **Chat-Completion style API** (similar to OpenAI’s `/chat/completions` endpoint)
* **Streaming support** for real-time responses
* Support for **system / user / assistant roles**
* Control over **temperature, max tokens, top\_p**, etc.
* **Embedding generation** for documents
* Used in a **RAG setup** (Retrieval Augmented Generation)
* Target **latency < 3 seconds per request** under normal load
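For reference, the calls this box has to serve map onto standard OpenAI-compatible requests; a minimal sketch below (server URL and model names are placeholders - any vLLM- or llama.cpp-style server exposes the same shape) shows the streaming chat and embedding patterns that would need to stay under the 3-second target:

```python
# Sketch of the two call patterns: streaming chat completions with role messages
# and sampling controls, plus embedding generation for the RAG index.
# base_url and model names are placeholders for whatever local server is chosen.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")

stream = client.chat.completions.create(
    model="local-chat-model",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this document..."},
    ],
    temperature=0.2,
    top_p=0.9,
    max_tokens=512,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

emb = client.embeddings.create(model="local-embedding-model", input=["chunk of a document"])
print(len(emb.data[0].embedding))
```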
**My main questions:**
1. For this kind of workload, would you recommend:
* a **single A100**, or
* **2 × RTX 4090** (or similar high-end consumer GPUs)?
2. Are there any **recommended system configurations** (CPU, RAM, storage, PSU, cooling, etc.) you’d suggest for this price range?
3. Any **build guides, example setups, or blog posts** you’d recommend that are focused on local LLM/RAG backends for production-like use?
I’m mainly interested in a stable, future-proof setup that can handle multiple concurrent chat requests with low latency and also do embedding generation efficiently.
Thanks in advance for any tips, parts lists, or real-world experience you can share! | 2025-11-13T10:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ovxyhd/help_choosing_ai_workstation_hardware_budget_510k/ | Melodic-Bit7032 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovxyhd | false | null | t3_1ovxyhd | /r/LocalLLaMA/comments/1ovxyhd/help_choosing_ai_workstation_hardware_budget_510k/ | false | false | self | 1 | null |
Which LocalLLM I Can Use On My MacBook | 2 | Hi everyone, i recently bought a MacBook M4 Max with 48gb of ram and want to get into the LLM's, my use case is general chatting, some school work and run simulations (like battles, historical events, alternate timelines etc.) for a project. Gemini and ChatGPT told me to download LM Studio and use Llama 3.3 70B 4-bit and i downloaded this version llama-3.3-70b-instruct-dwq from mlx community but unfortunately it needs 39gb ram and i have 37 if i want to run it i needed to manually allocate more ram to the gpu. So which LLM should i use for my use case, is quality of 70B models are significantly better? | 2025-11-13T10:41:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ovxvke/which_localllm_i_can_use_on_my_macbook/ | AegirAsura | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovxvke | false | null | t3_1ovxvke | /r/LocalLLaMA/comments/1ovxvke/which_localllm_i_can_use_on_my_macbook/ | false | false | self | 2 | null |
Jan-v2-VL: 8B model for long-horizon tasks, improving Qwen3-VL-8B’s agentic capabilities almost 10x | 624 | Hi, this is Bach from the Jan team. We’re releasing Jan-v2-VL, an 8B vision–language model aimed at long-horizon, multi-step tasks starting from browser use.
Jan-v2-VL-high executes 49 steps without failure on the Long-Horizon Execution benchmark, while the base model (Qwen3-VL-8B-Thinking) stops at 5 and other similar-scale VLMs stop between 1 and 2.
Across text and multimodal benchmarks, it matches or slightly improves on the base model, so you get higher long-horizon stability without giving up reasoning or vision quality.
We're releasing 3 variants:
* Jan-v2-VL-low (efficiency-oriented)
* Jan-v2-VL-med (balanced)
* Jan-v2-VL-high (deeper reasoning and longer execution)
How to run the model
* Download Jan-v2-VL from the Model Hub in Jan
* Open the model’s settings and enable Tools and Vision
* Enable BrowserUse MCP (or your preferred MCP setup for browser control)
You can also run the model with vLLM or llama.cpp.
Recommended parameters
* `temperature: 1.0`
* `top_p: 0.95`
* `top_k: 20`
* `repetition_penalty: 1.0`
* `presence_penalty: 1.5`
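For reference, a minimal sketch of passing these settings over an OpenAI-compatible endpoint (base URL, port, and exact model id are assumptions; `top_k` and `repetition_penalty` are not part of the OpenAI schema, so they go through `extra_body` on servers that accept them):

```python
# Sketch: query Jan-v2-VL through an OpenAI-compatible server (Jan's local API,
# llama.cpp, or vLLM). URL, port, and model id below are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="janhq/Jan-v2-VL-high",
    messages=[{"role": "user", "content": "Plan the steps to find today's top post on Hacker News."}],
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    # top_k / repetition_penalty are server-specific extensions
    extra_body={"top_k": 20, "repetition_penalty": 1.0},
)
print(resp.choices[0].message.content)
```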
Model: [https://huggingface.co/collections/janhq/jan-v2-vl](https://huggingface.co/collections/janhq/jan-v2-vl)
Jan app: [https://github.com/janhq/jan](https://github.com/janhq/jan)
We're also working on a browser extension to make model-driven browser automation faster and more reliable on top of this.
Credit to the Qwen team for the Qwen3-VL-8B-Thinking base model. | 2025-11-13T10:22:48 | https://v.redd.it/go4j38r5xz0g1 | Delicious_Focus3465 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ovxksu | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/go4j38r5xz0g1/DASHPlaylist.mpd?a=1765621384%2CODBjODAxMWQ4YzA4OTdhMGZlYWEwNmRjOGQ5MjUwZTM3MTJkZTQ3OGQwYzU1NGMyZWQ5MTQyZjMwYjg5MDBkMA%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/go4j38r5xz0g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/go4j38r5xz0g1/HLSPlaylist.m3u8?a=1765621384%2CNGRkODM4YzgyMjU3ODQwMGE0YmI5ZTZmZTU2Y2UyYmRlZmQ1N2U0Mzc4ZGIxMTg1NWY2MjQ0MmFjMzMwZWViMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/go4j38r5xz0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1832}} | t3_1ovxksu | /r/LocalLLaMA/comments/1ovxksu/janv2vl_8b_model_for_longhorizon_tasks_improving/ | false | false | 624 | {'enabled': False, 'images': [{'id': 'bmthZnk4cjV4ejBnMYgdXr3Xr8K8l3LMKEIqfiXLStzaSkNnB6704_pmF3PX', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/bmthZnk4cjV4ejBnMYgdXr3Xr8K8l3LMKEIqfiXLStzaSkNnB6704_pmF3PX.png?width=108&crop=smart&format=pjpg&auto=webp&s=4753746bb1e2a9853acea01d99769e9d67ab5430', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/bmthZnk4cjV4ejBnMYgdXr3Xr8K8l3LMKEIqfiXLStzaSkNnB6704_pmF3PX.png?width=216&crop=smart&format=pjpg&auto=webp&s=18e3f596365e0252c7c6e29833ed6590e46a1e87', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/bmthZnk4cjV4ejBnMYgdXr3Xr8K8l3LMKEIqfiXLStzaSkNnB6704_pmF3PX.png?width=320&crop=smart&format=pjpg&auto=webp&s=d3aa69dbb1d71208ff75a69c497e6ae1ac1c09a7', 'width': 320}, {'height': 377, 'url': 'https://external-preview.redd.it/bmthZnk4cjV4ejBnMYgdXr3Xr8K8l3LMKEIqfiXLStzaSkNnB6704_pmF3PX.png?width=640&crop=smart&format=pjpg&auto=webp&s=d9691b1fb6995f53d67111da04ba836db9dc045b', 'width': 640}, {'height': 565, 'url': 'https://external-preview.redd.it/bmthZnk4cjV4ejBnMYgdXr3Xr8K8l3LMKEIqfiXLStzaSkNnB6704_pmF3PX.png?width=960&crop=smart&format=pjpg&auto=webp&s=2621b2088a0367bbef1a7d6781879f72efab44c3', 'width': 960}, {'height': 636, 'url': 'https://external-preview.redd.it/bmthZnk4cjV4ejBnMYgdXr3Xr8K8l3LMKEIqfiXLStzaSkNnB6704_pmF3PX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8117b9009557ce1cac19abb1f7515729c5cba6d7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bmthZnk4cjV4ejBnMYgdXr3Xr8K8l3LMKEIqfiXLStzaSkNnB6704_pmF3PX.png?format=pjpg&auto=webp&s=de25c823d0f6eb33b1667cb74b43b9c152755cfd', 'width': 1832}, 'variants': {}}]} | |
🥇 NEW High-End AI PC | RTX 5090 FE | Ultra 9 285K | Z890 Extreme | 192GB DDR5 | 1 | 2025-11-13T10:05:56 | https://ebay.us/m/PlpP09 | Due-Ad-4547 | ebay.us | 1970-01-01T00:00:00 | 0 | {} | 1ovxb3l | false | null | t3_1ovxb3l | /r/LocalLLaMA/comments/1ovxb3l/new_highend_ai_pc_rtx_5090_fe_ultra_9_285k_z890/ | false | false | default | 1 | null | |
Qwen Chat Bot - Inaccessible Source Links | 1 | When I prompted the Qwen AI chatbot to provide links/sources for its claims, all of the links (literally all of them) failed to work.

- I understand that some links are behind paywalls, but I have tried over 50 links and they're all broken or non-existent.

Without working sources/links, it feels risky to trust even its simplest answers.
Does anyone have the same issue? | 2025-11-13T09:51:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ovx2jr/qwen_chat_bot_inaccessible_source_links/ | middyy95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovx2jr | false | null | t3_1ovx2jr | /r/LocalLLaMA/comments/1ovx2jr/qwen_chat_bot_inaccessible_source_links/ | false | false | self | 1 | null |
What Modell to run on 8x A100 (40GB)? | 5 | Hello everyone,
I just got access to a 8x A100 GPU server.
Do you have some interesting models I should try to run and or benchmark?
Here are the specs of the system:
8x A100 40GB (320GB total)
AMD EPYC 7302 (16 Cores / 32 Threads)
1TB of RAM | 2025-11-13T09:39:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ovww60/what_modell_to_run_on_8x_a100_40gb/ | Not_Black_is_taken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovww60 | false | null | t3_1ovww60 | /r/LocalLLaMA/comments/1ovww60/what_modell_to_run_on_8x_a100_40gb/ | false | false | self | 5 | null |
RAG Paper 25.11.12 | 8 | 1. [MARC: Multimodal and Multi-Task Agentic Retrieval-Augmented Generation for Cold-Start Recommender System](http://arxiv.org/abs/2511.08181v1)
2. [Structured RAG for Answering Aggregative Questions](http://arxiv.org/abs/2511.08505v1)
3. [Prompt Tuning for Natural Language to SQL with Embedding Fine-Tuning and RAG](http://arxiv.org/abs/2511.08245v1)
4. [BiCA: Effective Biomedical Dense Retrieval with Citation-Aware Hard Negatives](http://arxiv.org/abs/2511.08029v1)
5. [Introducing A Bangla Sentence - Gloss Pair Dataset for Bangla Sign Language Translation and Research](http://arxiv.org/abs/2511.08507v1)
6. [Multi-Agent GraphRAG: A Text-to-Cypher Framework for Labeled Property Graphs](http://arxiv.org/abs/2511.08274v1)
7. [JobSphere: An AI-Powered Multilingual Career Copilot for Government Employment Platforms](http://arxiv.org/abs/2511.08343v1)
8. [Thinker: Training LLMs in Hierarchical Thinking for Deep Search via Multi-Turn Interaction](http://arxiv.org/abs/2511.07943v1)
9. [NOTAM-Evolve: A Knowledge-Guided Self-Evolving Optimization Framework with LLMs for NOTAM Interpretation](http://arxiv.org/abs/2511.07982v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2025-11-13T08:48:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ovw4gk/rag_paper_251112/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovw4gk | false | null | t3_1ovw4gk | /r/LocalLLaMA/comments/1ovw4gk/rag_paper_251112/ | false | false | self | 8 | null |
try my new app MOBI GPT available in playstore and recommend me new features | 0 |
I would love to hear your thoughts on how to improve the app | 2025-11-13T08:44:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ovw2d6/try_my_new_app_mobi_gpt_available_in_playstore/ | MoistPhilosophy8837 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovw2d6 | false | null | t3_1ovw2d6 | /r/LocalLLaMA/comments/1ovw2d6/try_my_new_app_mobi_gpt_available_in_playstore/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'd7YEsUcsuyyZJ9UtszGWTpWgv26LKpaFTLrYuG7_x5c', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/d7YEsUcsuyyZJ9UtszGWTpWgv26LKpaFTLrYuG7_x5c.png?width=108&crop=smart&auto=webp&s=b8107dd1517119434313d968ab77da18da604854', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/d7YEsUcsuyyZJ9UtszGWTpWgv26LKpaFTLrYuG7_x5c.png?width=216&crop=smart&auto=webp&s=8fb90ff942c437eb3d63c5d1d81d70242ba3e2d7', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/d7YEsUcsuyyZJ9UtszGWTpWgv26LKpaFTLrYuG7_x5c.png?width=320&crop=smart&auto=webp&s=4ed08f79fa104cd2091796ba3556465d5016930c', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/d7YEsUcsuyyZJ9UtszGWTpWgv26LKpaFTLrYuG7_x5c.png?auto=webp&s=2907276a0a964de944981efab75de2ba3da6fe39', 'width': 512}, 'variants': {}}]} |
Rebtech for AI? crazy idea | 1 | So… I got one 5060 Ti and one 4060 Ti, and I can get a RebTech single board (the mining motherboard, the tiny one). It’s compatible with Ubuntu and all that, so I was thinking… why not make a mini-cluster for AI instead of mining? Like, both GPUs together give me 24GB VRAM, and I’ve seen people running 30B models on mixed cards, so maybe it works? I know the RebTech is meant for mining rigs but honestly it’s cheap as hell and it boots Linux no problem, so… why not. My doubt is: is this actually a good idea or am I being stupid? Would vLLM or Ollama even run decent with 16GB + 8GB split like that?
Any advice from people who tried something similar? | 2025-11-13T08:40:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ovw04y/rebtech_for_ai_crazy_idea/ | Famous_Win2378 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovw04y | false | null | t3_1ovw04y | /r/LocalLLaMA/comments/1ovw04y/rebtech_for_ai_crazy_idea/ | false | false | self | 1 | null |
Vim: Fill in the Middle code completion | 3 | Any Vim users here who use FIM with vim? If so, what is your set-up? I'm currently using vim-ai but was looking for something that might have more intelligent context provision.
I'm wondering if I need to switch to a dedicated editor for FIM/AI support.
Any recommendations for a lightweight editor for Linux? | 2025-11-13T08:11:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ovvkl7/vim_fill_in_the_middle_code_completion/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovvkl7 | false | null | t3_1ovvkl7 | /r/LocalLLaMA/comments/1ovvkl7/vim_fill_in_the_middle_code_completion/ | false | false | self | 3 | null |
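For comparison, the FIM call that most editor plugins end up wrapping is llama.cpp's `/infill` endpoint; a minimal sketch (port, field names, and the example snippet are assumptions based on recent llama-server builds, and the loaded model needs FIM tokens, e.g. a Qwen2.5-Coder GGUF):

```python
# Minimal sketch of a fill-in-the-middle request against llama-server's /infill
# endpoint. Port and exact field names are assumptions; adjust to your build.
import requests

resp = requests.post(
    "http://localhost:8080/infill",
    json={
        "input_prefix": "def parse_config(path):\n    ",
        "input_suffix": "\n    return cfg\n",
        "n_predict": 64,
        "temperature": 0.2,
    },
    timeout=60,
)
print(resp.json()["content"])
```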
Built a RAG chatbot (crossed 1.7k stars in GIT) that runs 100% offline with DeepSeek - no API costs, complete privacy | 1 | [removed] | 2025-11-13T08:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ovvhkd/built_a_rag_chatbot_crossed_17k_stars_in_git_that/ | Safe_Trouble8622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovvhkd | false | null | t3_1ovvhkd | /r/LocalLLaMA/comments/1ovvhkd/built_a_rag_chatbot_crossed_17k_stars_in_git_that/ | false | false | self | 1 | null |
Stanford's new Equivariant Encryption enables private AI inference with zero slowdown - works with any symmetric encryption | 101 | Just came across this paper (arXiv:2502.01013) that could be huge for private local model deployment.
The researchers achieved 99.999% accuracy on encrypted neural network inference with literally zero additional latency. Not "minimal" overhead - actually zero.
The key insight: instead of using homomorphic encryption (10,000x slowdown), they train networks to use "equivariant functions" that commute with encryption operations. So you can compute directly on AES or ChaCha20 encrypted data.
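As a toy illustration of what "commuting with the encryption operation" means - this is not the paper's construction, just the algebraic property f(E(x)) = E(f(x)) it relies on:

```python
# Toy illustration of equivariance f(E(x)) == E(f(x)), using a permutation as a
# stand-in "cipher" and an element-wise function. Not the paper's actual scheme.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)

perm = rng.permutation(8)          # stand-in for a key-dependent transform
E = lambda v: v[perm]              # "encrypt" = permute
f = np.tanh                        # element-wise layers commute with permutations

assert np.allclose(f(E(x)), E(f(x)))   # compute on "encrypted" data, decrypt later
```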
What this means for local LLMs:
- Your prompts could remain encrypted in memory
- Model weights could be encrypted at rest
- No performance penalty for privacy
The catch: you need to retrain models with their specific architecture constraints. Can't just plug this into existing models.
Paper: [https://arxiv.org/abs/2502.01013](https://arxiv.org/abs/2502.01013)
Also made a technical breakdown analyzing the limitations they gloss over: [https://youtu.be/PXKO5nkVLI4](https://youtu.be/PXKO5nkVLI4)
Anyone see potential applications for local assistant privacy? The embedding layer limitations seem like the biggest bottleneck for LLM applications. | 2025-11-13T07:51:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ovv95d/stanfords_new_equivariant_encryption_enables/ | Proof-Possibility-54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovv95d | false | null | t3_1ovv95d | /r/LocalLLaMA/comments/1ovv95d/stanfords_new_equivariant_encryption_enables/ | false | false | self | 101 | null |
Would you trust an LLM to install software on your machine? Here’s a cross-platform version that actually does it | 1 | [removed] | 2025-11-13T07:28:49 | https://www.reddit.com/gallery/1ovuwnz | operastudio | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ovuwnz | false | null | t3_1ovuwnz | /r/LocalLLaMA/comments/1ovuwnz/would_you_trust_an_llm_to_install_software_on/ | false | false | 1 | null | |
Experiment: a local-first LLM that executes real OS commands across Linux, macOS, and Windows through a secure tool layer all in the browser | 1 | [removed] | 2025-11-13T07:25:57 | https://www.reddit.com/gallery/1ovuv3g | operastudio | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ovuv3g | false | null | t3_1ovuv3g | /r/LocalLLaMA/comments/1ovuv3g/experiment_a_localfirst_llm_that_executes_real_os/ | false | false | 1 | null | |
LLM integration with budget - help | 1 | Hi all,
I've hit a wall with my startup's budget. I'm trying to figure out how to integrate an LLM or a service that performs a certain validation over the user's input (image validation); it needs to extract a lot of properties from that input. I tried to find something open source, or maybe run an LLM on Cloud Run (Google Cloud), but everything seems really expensive. Maybe someone here has an idea that will help me? I know I have to spend some money, of course, but I'm trying to keep it as affordable as possible. I'm expecting a lot of image input, possibly from each user, and I have to run validation on each one.
Thanks! | 2025-11-13T06:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ovu22o/llm_integration_with_budget_help/ | Unlucky_Analysis4584 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovu22o | false | null | t3_1ovu22o | /r/LocalLLaMA/comments/1ovu22o/llm_integration_with_budget_help/ | false | false | self | 1 | null |
Testing | 1 | [removed] | 2025-11-13T06:28:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ovtxqh/testing/ | Extra_Cicada8798 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovtxqh | false | null | t3_1ovtxqh | /r/LocalLLaMA/comments/1ovtxqh/testing/ | false | false | self | 1 | null |
Insane week for LLMs | 101 | In the past week, we've gotten...
- GPT 5.1
- Kimi K2 Thinking
- 12+ stealth endpoints across LMArena, Design Arena, and OpenRouter, with more coming in just the past day
- Speculation about an imminent GLM 5 drop on X
- A 4B model that beats several SOTA models on front-end tasks, fine-tuned using a new agentic reward system
It's a great time for new models and an even better time to be running a local setup. Looking forward to what the labs can cook up before the end of the year (looking at you Z.ai)
https://preview.redd.it/b46881agly0g1.png?width=1892&format=png&auto=webp&s=16dfc05b6c2989ae933201911e8d326c473a3402
| 2025-11-13T05:20:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ovsqs7/insane_week_for_llms/ | Interesting-Gur4782 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovsqs7 | false | null | t3_1ovsqs7 | /r/LocalLLaMA/comments/1ovsqs7/insane_week_for_llms/ | false | false | 101 | null | |
llama.cpp and Qwen 2.5 running on bare metal Windows XP x64 without any compatibility layers | 348 | Slowness aside, surprisingly llama.cpp can be cross-compiled using MinGW and you can actually run it on Windows XP with only a few tweaks! I only have the x64 edition on this laptop so not really sure if it also works on x86
All tools are working without any problems, even the CLI and server tools (pictured), though i'm fairly sure that you can squeeze a token or two more by using the CLI instead of the server | 2025-11-13T04:50:33 | PANCHO7532 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ovs6ut | false | null | t3_1ovs6ut | /r/LocalLLaMA/comments/1ovs6ut/llamacpp_and_qwen_25_running_on_bare_metal/ | false | false | default | 348 | {'enabled': True, 'images': [{'id': 'hg1xeqvuey0g1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/hg1xeqvuey0g1.jpeg?width=108&crop=smart&auto=webp&s=f4933992cff6a9ea9a3794c65918fe65c118f5ab', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/hg1xeqvuey0g1.jpeg?width=216&crop=smart&auto=webp&s=602106d91f1d7353513f804d5700dd864fb32d11', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/hg1xeqvuey0g1.jpeg?width=320&crop=smart&auto=webp&s=bd823adf7f6f92183d6a94357f2239f0ecdaca03', 'width': 320}, {'height': 375, 'url': 'https://preview.redd.it/hg1xeqvuey0g1.jpeg?width=640&crop=smart&auto=webp&s=223d93e64bfc8f846a673473dfbaaae88ede30a6', 'width': 640}, {'height': 562, 'url': 'https://preview.redd.it/hg1xeqvuey0g1.jpeg?width=960&crop=smart&auto=webp&s=90df8903174f9858cb22a606d1e045c22af6553d', 'width': 960}], 'source': {'height': 600, 'url': 'https://preview.redd.it/hg1xeqvuey0g1.jpeg?auto=webp&s=1ac86f71ec16bd80cb9468aa4f7119a314263c4d', 'width': 1024}, 'variants': {}}]} | |
Agents belong in chat apps, not in new apps someone finally built the bridge. | 0 | Been thinking about agent UX a lot lately.
Apps are dead interfaces; messaging is the real one.
Just found something called *iMessage Kit* (search photon imessage kit).
It’s an open-source SDK that lets AI agents talk directly over iMessage.
Imagine your agent:
• texting reminders
• summarizing group chats
• sending PDFs/images
This feels like the missing interface layer for AI. | 2025-11-13T04:39:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ovrzf3/agents_belong_in_chat_apps_not_in_new_apps/ | Fearless-Confusion-4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovrzf3 | false | null | t3_1ovrzf3 | /r/LocalLLaMA/comments/1ovrzf3/agents_belong_in_chat_apps_not_in_new_apps/ | false | false | self | 0 | null |
Is running Chatterbox with an old Radeon RX 460 2GB card possible? | 1 | [removed] | 2025-11-13T04:25:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ovrpmw/is_running_chatterbox_with_an_old_radeon_rx_460/ | MassiveTopic1601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovrpmw | false | null | t3_1ovrpmw | /r/LocalLLaMA/comments/1ovrpmw/is_running_chatterbox_with_an_old_radeon_rx_460/ | false | false | self | 1 | null |
Open source x 3: GRPO training with OpenEnv, vLLM, and Oumi | 13 | You may have seen the release of open source OpenEnv a fews ago at the PyTorch Conference. I wanted to share a tutorial showing how you can actually do GRPO training using an OpenEnv environment server and vLLM: [https://github.com/oumi-ai/oumi/blob/main/notebooks/Oumi%20-%20OpenEnv%20GRPO%20with%20trl.ipynb](https://github.com/oumi-ai/oumi/blob/main/notebooks/Oumi%20-%20OpenEnv%20GRPO%20with%20trl.ipynb) | 2025-11-13T04:03:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ovra2o/open_source_x_3_grpo_training_with_openenv_vllm/ | PrincipleFar6835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovra2o | false | null | t3_1ovra2o | /r/LocalLLaMA/comments/1ovra2o/open_source_x_3_grpo_training_with_openenv_vllm/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'gtMnjgO8aa8iIu1VtXpfkJGRpmCwZmNk9KNQQ6mxBjQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gtMnjgO8aa8iIu1VtXpfkJGRpmCwZmNk9KNQQ6mxBjQ.png?width=108&crop=smart&auto=webp&s=3a4d22e963c4ba5f40d3b84702a457ac2df1e87f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gtMnjgO8aa8iIu1VtXpfkJGRpmCwZmNk9KNQQ6mxBjQ.png?width=216&crop=smart&auto=webp&s=c9f44681f6d276c02f083850656998157f75b690', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gtMnjgO8aa8iIu1VtXpfkJGRpmCwZmNk9KNQQ6mxBjQ.png?width=320&crop=smart&auto=webp&s=9138f717217f917b76e883ca79c7a2f2b8c2d0e6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gtMnjgO8aa8iIu1VtXpfkJGRpmCwZmNk9KNQQ6mxBjQ.png?width=640&crop=smart&auto=webp&s=41322eaf537833b0c392f0232a0f6392c5961007', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gtMnjgO8aa8iIu1VtXpfkJGRpmCwZmNk9KNQQ6mxBjQ.png?width=960&crop=smart&auto=webp&s=33c4ae714c6772d66a6cab61f935835efd9b5d4b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gtMnjgO8aa8iIu1VtXpfkJGRpmCwZmNk9KNQQ6mxBjQ.png?width=1080&crop=smart&auto=webp&s=31fd651facf758894ece91ed1b8fb617e002e710', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gtMnjgO8aa8iIu1VtXpfkJGRpmCwZmNk9KNQQ6mxBjQ.png?auto=webp&s=b5087fccae5205706a797f301fa59c951a271a77', 'width': 1200}, 'variants': {}}]} |
Qwen3 235B vs Qwen3 VL 235B | 2 | I believe Qwen has stated all their future models will be VL already. I want to try 235B on my setup, I wondering if there is any downside to the VL version? | 2025-11-13T03:51:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ovr19x/qwen3_235b_vs_qwen3_vl_235b/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovr19x | false | null | t3_1ovr19x | /r/LocalLLaMA/comments/1ovr19x/qwen3_235b_vs_qwen3_vl_235b/ | false | false | self | 2 | null |
A proper way to connect a local LLM to iMessage? | 0 | I've been seeing a lot of projects where people build a whole web UI for their AI agent, but I just want to text my local model.
I've been looking for a good way to do this without a janky Android-Twilio bridge. Just found an open-source project that acts as an iMessage SDK. It's built in TypeScript and seems to let you programmatically read new messages and send replies (with files and images) right from a script.
Imagine hooking this up to Oobabooga or a local API. Your agent could just live in your iMessage.
Search for **"imessage kit github"** if you're curious. I'm thinking of trying to build a RAG agent that can summarize my group chats for me. | 2025-11-13T03:30:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ovqlq2/a_proper_way_to_connect_a_local_llm_to_imessage/ | Quiet-Ranger-5487 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovqlq2 | false | null | t3_1ovqlq2 | /r/LocalLLaMA/comments/1ovqlq2/a_proper_way_to_connect_a_local_llm_to_imessage/ | false | false | self | 0 | null |
Does Chatgpt plus, like Chinese AI Coding Plans, also have limited requests? | 0 |
Hey guys, I wanted to ask: the ChatGPT Plus subscription also mentions stuff like 40-120 Codex calls, etc.
Has OpenAI integrated these types of coding plans into their Plus subscription? Like, can I use a key in my IDE or environment to use those prompt limits?
I could not find anything about this anywhere yet. But the way Plus is described by OpenAI makes me believe this is the case? If so, the Plus subscription is pretty awesome now. If not, OpenAI needs to get on this ASAP. Chinese labs will take the lead because of these coding plans. They are quite handy | 2025-11-13T02:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ovphkg/does_chatgpt_plus_like_chinese_ai_coding_plans/ | Kingwolf4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovphkg | false | null | t3_1ovphkg | /r/LocalLLaMA/comments/1ovphkg/does_chatgpt_plus_like_chinese_ai_coding_plans/ | false | false | self | 0 | null |
R2R vs LightRAG: Early Results from a Simple Evaluation Benchmark | 0 | 2025-11-13T02:07:31 | https://v.redd.it/4tjmci9bmx0g1 | Cheryl_Apple | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ovot0j | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4tjmci9bmx0g1/DASHPlaylist.mpd?a=1765592336%2CNDU0ZjhkOTYxYWM2MWY1M2FkYTRlNGVkYTA1MGU2NDJlMDYzMDJhZjg5ZTUwYjZhMmMyNmIwNGZiMTA3MzUxYg%3D%3D&v=1&f=sd', 'duration': 283, 'fallback_url': 'https://v.redd.it/4tjmci9bmx0g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1042, 'hls_url': 'https://v.redd.it/4tjmci9bmx0g1/HLSPlaylist.m3u8?a=1765592336%2COTc0ZGFkOGVmYTlmNjMzZjAyMGM1ZWNhMmY0ZmFlZDkyMzgzN2MwYTY3Y2U5NGQyY2VkZDFmYjdkZDY2MGMzMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4tjmci9bmx0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ovot0j | /r/LocalLLaMA/comments/1ovot0j/r2r_vs_lightrag_early_results_from_a_simple/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cWxkNm5pOWJteDBnMYsqXkUhZ5sear7V_4x4L2GUGx-oytNPOZyv1M3lW69z', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cWxkNm5pOWJteDBnMYsqXkUhZ5sear7V_4x4L2GUGx-oytNPOZyv1M3lW69z.png?width=108&crop=smart&format=pjpg&auto=webp&s=d1c94a5351707ae3a5ab7f786a461438e2faa9f9', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/cWxkNm5pOWJteDBnMYsqXkUhZ5sear7V_4x4L2GUGx-oytNPOZyv1M3lW69z.png?width=216&crop=smart&format=pjpg&auto=webp&s=06228d49a9d8beeb008012145991d6686a8843c1', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/cWxkNm5pOWJteDBnMYsqXkUhZ5sear7V_4x4L2GUGx-oytNPOZyv1M3lW69z.png?width=320&crop=smart&format=pjpg&auto=webp&s=53c803b760a19ecb1a5d29488aa52f2b0299ea59', 'width': 320}, {'height': 347, 'url': 'https://external-preview.redd.it/cWxkNm5pOWJteDBnMYsqXkUhZ5sear7V_4x4L2GUGx-oytNPOZyv1M3lW69z.png?width=640&crop=smart&format=pjpg&auto=webp&s=4a33f6ffd0f2abed32fcd1a0e703d1280e43e81e', 'width': 640}, {'height': 521, 'url': 'https://external-preview.redd.it/cWxkNm5pOWJteDBnMYsqXkUhZ5sear7V_4x4L2GUGx-oytNPOZyv1M3lW69z.png?width=960&crop=smart&format=pjpg&auto=webp&s=eaf46d95e7d397bb2fe3163ce0b3123a97b1b6a5', 'width': 960}, {'height': 586, 'url': 'https://external-preview.redd.it/cWxkNm5pOWJteDBnMYsqXkUhZ5sear7V_4x4L2GUGx-oytNPOZyv1M3lW69z.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1c8a0f8abe6db08803f7807421349f17d1cd4cc5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cWxkNm5pOWJteDBnMYsqXkUhZ5sear7V_4x4L2GUGx-oytNPOZyv1M3lW69z.png?format=pjpg&auto=webp&s=1689ca238f93d4b88a8c9900375981e3678599be', 'width': 1990}, 'variants': {}}]} | ||
R2R vs LightRAG: Early Results from a Simple Evaluation Benchmark | 1 | 2025-11-13T02:01:39 | https://v.redd.it/i7wd0abxlx0g1 | Current_Material_439 | /r/LocalLLaMA/comments/1ovoo9e/r2r_vs_lightrag_early_results_from_a_simple/ | 1970-01-01T00:00:00 | 0 | {} | 1ovoo9e | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/i7wd0abxlx0g1/DASHPlaylist.mpd?a=1765721006%2CNGEzMjc0MGQ1ZjMyYzMwM2Y4ZmU0NmYzNzdlYmJiYjY2OWJmN2QzMzM2ODg3NjIxMGViMmEzY2E5NzE0NDhkZg%3D%3D&v=1&f=sd', 'duration': 283, 'fallback_url': 'https://v.redd.it/i7wd0abxlx0g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1042, 'hls_url': 'https://v.redd.it/i7wd0abxlx0g1/HLSPlaylist.m3u8?a=1765721006%2CNzk2Yjk2NTIwNmY5NDg1M2YwYzhmNjEzODUwODRiMDJiZTYzY2Y1ZGU1NzY3MzdlODdmMWI1YmZkOTFhY2VmNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/i7wd0abxlx0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ovoo9e | /r/LocalLLaMA/comments/1ovoo9e/r2r_vs_lightrag_early_results_from_a_simple/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dWl5NTU4YnhseDBnMXJetzVI_EHMYN1MaZ-JY8bSP3ODszTirfwMpmjezyGY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dWl5NTU4YnhseDBnMXJetzVI_EHMYN1MaZ-JY8bSP3ODszTirfwMpmjezyGY.png?width=108&crop=smart&format=pjpg&auto=webp&s=4dc3a2f82c8739f451bb450bea7cfdab9bb603e6', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/dWl5NTU4YnhseDBnMXJetzVI_EHMYN1MaZ-JY8bSP3ODszTirfwMpmjezyGY.png?width=216&crop=smart&format=pjpg&auto=webp&s=25a9b67ccc23749bca5defc47a8fce34f06bda84', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/dWl5NTU4YnhseDBnMXJetzVI_EHMYN1MaZ-JY8bSP3ODszTirfwMpmjezyGY.png?width=320&crop=smart&format=pjpg&auto=webp&s=a1f123b52d8b43e1bf4811bd08a9881e1d514c21', 'width': 320}, {'height': 347, 'url': 'https://external-preview.redd.it/dWl5NTU4YnhseDBnMXJetzVI_EHMYN1MaZ-JY8bSP3ODszTirfwMpmjezyGY.png?width=640&crop=smart&format=pjpg&auto=webp&s=1094387ca7103b0f53c0e17a00ea771030c1cd7d', 'width': 640}, {'height': 521, 'url': 'https://external-preview.redd.it/dWl5NTU4YnhseDBnMXJetzVI_EHMYN1MaZ-JY8bSP3ODszTirfwMpmjezyGY.png?width=960&crop=smart&format=pjpg&auto=webp&s=4aa9a7e603eb216dff5dc0473de6d451c3689331', 'width': 960}, {'height': 586, 'url': 'https://external-preview.redd.it/dWl5NTU4YnhseDBnMXJetzVI_EHMYN1MaZ-JY8bSP3ODszTirfwMpmjezyGY.png?width=1080&crop=smart&format=pjpg&auto=webp&s=729b941e4d2a0563f8646bfc2a9e972891c8ed1e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dWl5NTU4YnhseDBnMXJetzVI_EHMYN1MaZ-JY8bSP3ODszTirfwMpmjezyGY.png?format=pjpg&auto=webp&s=0c021764595a0198b95fc949fd8e0e829ccc775d', 'width': 1990}, 'variants': {}}]} | ||
Building a real-time LLM visualization tool for Mac - what would make it useful for you? | 3 | I'm building a native Mac app that visualizes what's happening inside local LLMs as they generate tokens.
**What it does:**
* Runs models locally with MLX
* Shows real-time layer activations as the model thinks
* Visualizes attention patterns (which tokens each layer is looking at)
* All rendered in Metal with smooth 60fps
**Current features:**
* 32 transformer layers lighting up based on activation strength
* Attention flow graph showing token→layer connections
**My question:** Would this be useful for your work? What features would make you actually use it?
Thinking:
* Prompt debugging/optimization tools?
* Export activation patterns to compare models/quantisation?
* Identify dead/underperforming layers?
* Something else?
Genuinely want to build something useful, not just cool-looking. What would you need? | 2025-11-13T02:00:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ovonf7/building_a_realtime_llm_visualization_tool_for/ | ManuToniotti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovonf7 | false | null | t3_1ovonf7 | /r/LocalLLaMA/comments/1ovonf7/building_a_realtime_llm_visualization_tool_for/ | false | false | self | 3 | null |
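For anyone who wants to prototype the activation-capture idea before committing to a native app: the tool above uses MLX and Metal, but the underlying mechanism can be sketched with ordinary PyTorch forward hooks. This is only a generic sketch, not how the Mac app is built; the model name and the `model.model.layers` path are assumptions that happen to fit Qwen2-style checkpoints in transformers.

    # Generic sketch: record a per-layer activation scalar during generation.
    # Not the Mac app's MLX/Metal code; model name and layer path are assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-0.5B"  # any small causal LM works for the demo
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    activations = {}

    def make_hook(layer_idx):
        def hook(module, inputs, output):
            # decoder layers return a tuple; the first element is the hidden state
            hidden = output[0] if isinstance(output, tuple) else output
            activations[layer_idx] = hidden.detach().abs().mean().item()
        return hook

    for i, layer in enumerate(model.model.layers):
        layer.register_forward_hook(make_hook(i))

    inputs = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        model.generate(**inputs, max_new_tokens=8)

    for i, strength in sorted(activations.items()):
        print(f"layer {i:2d}: mean |activation| = {strength:.4f}")

Each forward pass overwrites the per-layer scalars, which is roughly the kind of signal a layer-by-layer visualization could animate.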
Commercial lock-in versus new algorithms. | 0 | I asked gpt what if more efficient neural network algorithms came along. Say 10 by, 100 by, 1000 by.
Gpt gave convincing arguments that large companies would keep ploughing ahead with the inefficient algorithms for a long time for both hardward and software lock-in reasons.
Gpt gave an estimated cost of about $30 billion a year. Which I think is an underestimate.
Also if such an algorithm was created by someone outside the academic or industrial hierarchy that algorithm could be ignored for a very long time. Especially given the daily torrent of new neural network papers and general noise about the topic on the internet.
[https://editor.p5js.org/seanhaddps/sketches/TlfJQFFxU](https://editor.p5js.org/seanhaddps/sketches/TlfJQFFxU)
| 2025-11-13T01:56:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ovojw1/commercial_lockin_versus_new_algorithms/ | oatmealcraving | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovojw1 | false | null | t3_1ovojw1 | /r/LocalLLaMA/comments/1ovojw1/commercial_lockin_versus_new_algorithms/ | false | false | self | 0 | null |
lightest models for understanding desktop screenshot content? | 2 | am trying to build an llm interface that understands what the user is doing and compares it to a set goal via interval screenshots - what model would best be able to balance performance & speed? am trying to get it to run basically on smartphone/ potato pcs.
any suggestions are welcome | 2025-11-13T01:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ovocij/lightest_models_for_understanding_desktop/ | xoclear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovocij | false | null | t3_1ovocij | /r/LocalLLaMA/comments/1ovocij/lightest_models_for_understanding_desktop/ | false | false | self | 2 | null |
Uncensored models | 106 | Hello everyone,
I’m new to the thread and I’m not sure if I’m asking my question in the right place.
Still, I’m wondering: are there any AI models for local use that are as uncensored as, or even more uncensored than, Venice.ai?
Or would it be better to just run regular open-source LLMs locally and try to look for jailbreaks? | 2025-11-13T00:44:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ovmyux/uncensored_models/ | NotoriousKekabidze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovmyux | false | null | t3_1ovmyux | /r/LocalLLaMA/comments/1ovmyux/uncensored_models/ | false | false | nsfw | 106 | null |
Kimi K2 Thinking Creative Writing Test | 56 | Whenever a new model is dropped, either from one of the established labs, or from a new lab, the first thing I do is to give it a creative writing test. I am not a coder. I am more interested in creative writing. And so, my expectations are usually a bit different from most of the people involved in the AI scene. The test I use is simple. I give the AI some background information and worldbuilding details, and then a very rough prologue sketch, including a list of agents that I want the AI to use to edit the prose. Using those agents, the AI is to stretch and refine the sketch to a prologue that is about 2000 words. I have done this consistently for months, and before moving on with my main point, I will list some of my observations-
Let's start with ChatGPT- The newer models are solid. Very, very good. Arguably the best. No complaints. At least for the first couple of chapters. To note moving forward, this goes for ChatGPT as well as the other models: they all seem to decline in quality around the third chapter, and more so after that. So, to me these are not long-term companions. Honestly, if that could be fixed, I could see AI being used more in the literary scene.
Moving on to Gemini- Was not good until 2.0Pro came, then it got surprisingly better, then 2.5pro came, then it got really good, good enough that I became tempted to start plotting more chapters. Which is usually a good sign. The quality usually declines immediately after, for this and all other models, in my opinion, however, when the prologue is solid, that's a good sign. I go back to Gemini and I am surprised again at how good the writing got.
Claude- Really good, could be the best, but got stagnant/limited. Claude used to be my go to AI for creative writing. I remember there was a time when everyone boasted about Claude's writing chops. I was one of those people. Don't get me wrong, the writing is amazing, still is, but it feels less like Claude got better and more like the others caught up in my opinion. Claude's writing was what made it stand out in the whole field, now the field appears full in my opinion. And I know this because sometimes, I use the old models, and the prose there maintains a kind of elegance. Indicating that while the newer models did improve in certain areas, the AI more or less stagnated. Which is fine, I'm not complaining, but it feels like, if that's the case, then they should focus more on longevity. And that is when it is good. Often it gets over ambitious, it starts doing too much, and weirdly enough, the writing gets awful then. But sometimes, it writes like it really gets you. My relationship with Claude is complex.
Grok- Okay. Fine.
Now, I know that each of these AIs has different models, with different capabilities, but I more or less breezed through these differences for the sake of brevity. Just assume that I am talking about the latest models. Now moving on to the open source models-
Gemma- Not good.
GPT-OSS- Not good.
Llama- Not good. At best, okay.
Now we will move to the Chinese models, one of which this post centers around. Many of them are either open or quasi-open.
Ling and Ring 1T- For some reason, they kept spazzing out. I would look at the reasoning and it was like a guy was driving, then suddenly got super drunk and flew off the road. I never even got any write ups from them, the whole thing would just crash.
Deepseek- It writes like it does not care for creative writing, and in turn, I don't care for it much.
Qwen- Same as Deepseek.
Kimi- When Kimi first came out. I was interested. Everyone raved about it, and so I did the test, it was the first lab that did not spaz out on me, did not start inserting random Chinese letters in the text, it was not good, alright average, but unlike Deepseek and Qwen, it seemed like it cared somewhat. So I decided to put an eye on it. K2 thinking came out. And I noticed instantly, the writing was good. Really good. About as good as the other labs. In my opinion, in terms of creative writing, it is the one that somewhat captures the heart of the story I suppose. Although Claude seems to get it as well. Anyhoo, I'll put the link below to the writing tests.
Here's the link;
[https://docs.google.com/document/d/1ln9txx6vOtyNcYnmb\_yBvjMPtzzqlCZTBKJVIsEdjdw/edit?usp=sharing](https://docs.google.com/document/d/1ln9txx6vOtyNcYnmb_yBvjMPtzzqlCZTBKJVIsEdjdw/edit?usp=sharing)
| 2025-11-13T00:06:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ovm3jd/kimi_k2_thinking_creative_writing_test/ | kennydotun123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovm3jd | false | null | t3_1ovm3jd | /r/LocalLLaMA/comments/1ovm3jd/kimi_k2_thinking_creative_writing_test/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': '2L1-sD9n7h7WWRhj8o7BeRxPF8z_wQ3YenXoLvLxjR0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/2L1-sD9n7h7WWRhj8o7BeRxPF8z_wQ3YenXoLvLxjR0.png?width=108&crop=smart&auto=webp&s=dbfff499b467ad1b6153472af052d98642346461', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/2L1-sD9n7h7WWRhj8o7BeRxPF8z_wQ3YenXoLvLxjR0.png?width=216&crop=smart&auto=webp&s=f8f2c3d258f1ec688b51bc2742f65777bd8b2558', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/2L1-sD9n7h7WWRhj8o7BeRxPF8z_wQ3YenXoLvLxjR0.png?width=320&crop=smart&auto=webp&s=50e86bf9e591620a0d3399bd6ed01b19db0d49ce', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/2L1-sD9n7h7WWRhj8o7BeRxPF8z_wQ3YenXoLvLxjR0.png?width=640&crop=smart&auto=webp&s=abed36cb32d839cca09839ccc6f493d4ba3418a3', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/2L1-sD9n7h7WWRhj8o7BeRxPF8z_wQ3YenXoLvLxjR0.png?width=960&crop=smart&auto=webp&s=f25aaa39c75baedf96b1c36a4d70611d6b188576', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/2L1-sD9n7h7WWRhj8o7BeRxPF8z_wQ3YenXoLvLxjR0.png?width=1080&crop=smart&auto=webp&s=a4c23319175164a4b6424cc474f1d34cf56a8f0c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/2L1-sD9n7h7WWRhj8o7BeRxPF8z_wQ3YenXoLvLxjR0.png?auto=webp&s=e8d044fe9ec2f9ca07caae8a9e3f282146c3b736', 'width': 1200}, 'variants': {}}]} |
Chat with Obsidian vault | 6 | I have been chatting with ChatGPT about my characters, narrative and worldbuilding and have racked up around 150 chats. I am currently in the process of cataloging them in Obisidian. My goal is to be able to easily pull scenes, worldbuilding snippets etc from my vault using an LLM. I am running into embedding and context problems with even short chats (I have created a test vault with three short chats of different subjects) and wanted to know if something like this is possible. So far I have tried creating rags with AnythingLM but results have not been satisfactory.
I am fairly new to running Local LLMs and am current sporting 32gb of RAM and an RTX 3060 with 12gb of VRAM. I plan to upgrade to 64GB and an RTX 5060Ti when I have the money.
Any help would be greatly appreciated. | 2025-11-13T00:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ovm20t/chat_with_obsidian_vault/ | TanariTech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovm20t | false | null | t3_1ovm20t | /r/LocalLLaMA/comments/1ovm20t/chat_with_obsidian_vault/ | false | false | self | 6 | null |
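A pattern that often works better than pointing a generic RAG tool at 150 raw chat exports is to chunk the vault yourself and keep retrieval separate from the chat model. A minimal sketch, assuming a folder of .md files and the sentence-transformers package; the vault path, embedding model, and chunk size are placeholder choices, not recommendations for this specific vault:

    # Minimal vault retrieval sketch: chunk markdown notes, embed them,
    # and pull the top matches for a query. Paths and model are placeholders.
    from pathlib import Path
    from sentence_transformers import SentenceTransformer, util

    vault = Path("~/ObsidianVault").expanduser()   # placeholder vault location
    chunks, sources = [], []
    for note in vault.rglob("*.md"):
        text = note.read_text(encoding="utf-8", errors="ignore")
        # naive fixed-size chunking; heading-aware splitting usually works better
        for i in range(0, len(text), 1500):
            chunks.append(text[i:i + 1500])
            sources.append(note.name)

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small enough for 12GB VRAM
    chunk_emb = embedder.encode(chunks, convert_to_tensor=True)

    query = "the scene where the two rivals meet at the harbor"
    hits = util.semantic_search(embedder.encode(query, convert_to_tensor=True),
                                chunk_emb, top_k=5)[0]
    for h in hits:
        print(f"{sources[h['corpus_id']]}  score={h['score']:.3f}")

The top-ranked chunks then go into the prompt of whatever local model you run, which keeps the context small enough for a 12GB card.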
Claude cli with LMStudio | 8 | I used claude cli but I don't want to use cloud ai. Any way to do the same with lmstudio?
Like letting a private llm access a folder. | 2025-11-12T23:32:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ovl9ic/claude_cli_with_lmstudio/ | ImaginaryRea1ity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovl9ic | false | null | t3_1ovl9ic | /r/LocalLLaMA/comments/1ovl9ic/claude_cli_with_lmstudio/ | false | false | self | 8 | null |
Is there an app like this? | 0 | Hi, I am looking for mobile/desktop app where I can record myself and then ask local model for an example summary.
I could do it myself (my own server, and whisper on top + rag), but do not have enough time.
The idea is really easy, so I am almost sure that there is something like this already.
Most important thing is everything needs to run locally (starting your own server). I can use one or two RTX 5090 for it.
Best regards | 2025-11-12T22:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ovjig8/is_there_an_app_like_this/ | polawiaczperel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovjig8 | false | null | t3_1ovjig8 | /r/LocalLLaMA/comments/1ovjig8/is_there_an_app_like_this/ | false | false | self | 0 | null |
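For what it's worth, the pipeline described above is only a few lines if you do end up building it yourself. A rough sketch, assuming the openai-whisper package is installed and an OpenAI-compatible local server (llama.cpp, vLLM, etc.) is listening on localhost:8000; the audio file name and model names are placeholders:

    # Record -> transcribe -> summarize, all local.
    # Assumes openai-whisper and a local OpenAI-compatible endpoint.
    import whisper
    import requests

    transcript = whisper.load_model("base").transcribe("meeting.wav")["text"]

    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "local-model",  # placeholder model name
            "messages": [
                {"role": "system", "content": "Summarize the transcript in bullet points."},
                {"role": "user", "content": transcript},
            ],
        },
        timeout=600,
    )
    print(resp.json()["choices"][0]["message"]["content"])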
rwkv7-g0a3-13.3b-Q4_K_M.gguf overthinking? | 8 | 2+2=
Okay, let's see. The user asked "2+2=". Hmm, that's a basic arithmetic question. I need to make sure I get this right. Let me start by recalling the fundamentals of addition. Adding two numbers together is combining their values. So, if I have one group of two and another group of two, how many do I have in total?
Let me visualize it. If I have two apples and then get two more apples, how many apples do I have altogether? That should be four apples. Similarly, if I count two steps forward and then two more steps, I end up at four steps from the starting point.
Wait, maybe I should break it down step by step. Starting with 2, then adding another 2. So, 2 + 2 = ? Let me count on my fingers. Hold up two fingers, then two more. That makes four fingers total. Yep, that's four.
Is there any chance this could be a trick question? Sometimes people might play with numbers in different bases, but the user didn't specify any base other than base 10, which is standard. In base 10, 2+2 is definitely 4. If it were in another base, like base 3, 2+2 would be 11 because 2+2=4 in decimal, which is 1\*3 +1=4, but that's not the case here. The question is straightforward.
Another way to think about it is using number lines. Starting at 2, moving two steps to the right would land me on 4. So, 2 + 2 = 4.
I should also consider if there's any context where this might not hold. For example, in modular arithmetic, like mod 3, 2+2=4 mod3 which is 1, but again, the user didn't mention anything about modular arithmetic. The question is presented in a general sense, so the answer should be in the standard mathematical context.
Maybe check with a calculator. Let me mentally simulate pressing 2 + 2 on a calculator. The display would show 4. Yep, that's consistent.
Is there any possibility of a typo or misinterpretation? The user wrote "2+2=", so they're probably expecting the result of the addition. No variables or other operations involved. Just simple addition.
I think I've covered all bases here. The answer should be 4. No doubts left.
</think>
The sum of 2 and 2 is calculated as follows:
\*\*2 + 2 = 4\*\*
This result is obtained by combining two groups of two, which totals four. | 2025-11-12T22:13:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ovjarh/rwkv7g0a3133bq4_k_mgguf_overthinking/ | vasileer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovjarh | false | null | t3_1ovjarh | /r/LocalLLaMA/comments/1ovjarh/rwkv7g0a3133bq4_k_mgguf_overthinking/ | false | false | self | 8 | null |
Running MLPerf Client on Nvidia GB10 | 2 | Anyone had luck running MLPerf Client on the DGX Spark? All the docker images I've tried seem to fail with lack of support for the GB10.
The most promising docker image is from the 1st August
nvcr.io/nvidia/mlperf/mlperf-inference:mlpinf-v5.1-cuda13.0-pytorch25.08-ubuntu24.04-aarch64-Grace-release
But that again is failing and I suspect it doesn't yet support this platform from the following output:
WARNING: Detected NVIDIA GB10 GPU, which may not yet be supported in this version of the container | 2025-11-12T22:01:29 | https://www.reddit.com/r/LocalLLaMA/comments/1oviz38/running_mlperf_client_on_nvidia_gb10/ | Mountain_Living_4159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oviz38 | false | null | t3_1oviz38 | /r/LocalLLaMA/comments/1oviz38/running_mlperf_client_on_nvidia_gb10/ | false | false | self | 2 | null |
Improving model load times | 4 | I'm moving to bigger models and trying to improve the load times when switching, which is currently dominated by disk read.
I'm running llama.cpp in Docker on a Debian 13 VM on a Proxmox 9 host. I'm using raw disk passthrough to feed a Crucial T700 directly into the VM, it's formated with ext4. The drive was recently wiped and formatted and then loaded with models, so there should be zero fragmentation and everything is nice and sequential.
The T700's datasheet sequential read speed is 12.4 GB/s, with fio in the VM I'm benchmarking about 9 GB/s, which would be good enough. The problem is I don't actually hit that with real world reads. cp, dd, llama.cpp, all hit around the same 3 GB/s. To verify it's not the Proxmox virtualization layer causing problems, I've also tried mounting the SSD directly on the host and testing there, same 9 GB/s with fio, same 3 GB/s with cp and dd. I've also tried other SSDs and run into the same limit at around 2-3 GB/s when doing real-world reads of large files.
Any ideas how to speed things up? Different filesystem maybe, or different formatting/mount options? The T700 has a heatsink and active airflow, I'm also monitoring drive temperatures and that's not an issue.
Reading around it looks like it could be due to cp, dd, etc. doing single-threaded file read, and you need multi-threaded reads to get above 3 GB/s or so. Is there any way to enable that in llama.cpp or are we stuck with single-threaded reads there as well?
According to [this](https://github.com/ggml-org/llama.cpp/issues/8796), splitting the disk into multiple partitions and then combining them back together in RAID 0 might work around the issue? | 2025-11-12T21:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ovingu/improving_model_load_times/ | suicidaleggroll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovingu | false | null | t3_1ovingu | /r/LocalLLaMA/comments/1ovingu/improving_model_load_times/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '9CncsCXdrs-4AUf1YPv24zn8bvsSrp9ESTCmKrajMio', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9CncsCXdrs-4AUf1YPv24zn8bvsSrp9ESTCmKrajMio.png?width=108&crop=smart&auto=webp&s=587ca975d1514b9e31b9dcf821bd4fc84e5e8bd4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9CncsCXdrs-4AUf1YPv24zn8bvsSrp9ESTCmKrajMio.png?width=216&crop=smart&auto=webp&s=4331490e052e71e5a8fb32ec3639d04274c45f1c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9CncsCXdrs-4AUf1YPv24zn8bvsSrp9ESTCmKrajMio.png?width=320&crop=smart&auto=webp&s=23a642cf0db1af031cfd4a91df6897e8225b1d93', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9CncsCXdrs-4AUf1YPv24zn8bvsSrp9ESTCmKrajMio.png?width=640&crop=smart&auto=webp&s=dd76c793c34e81899c9857d651698ecccd64341c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9CncsCXdrs-4AUf1YPv24zn8bvsSrp9ESTCmKrajMio.png?width=960&crop=smart&auto=webp&s=8fb433635fb4664e13d8f7024f3c1eff61ea95a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9CncsCXdrs-4AUf1YPv24zn8bvsSrp9ESTCmKrajMio.png?width=1080&crop=smart&auto=webp&s=1c980634225a34d30e78451d8e0ce9f32bfafdd2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9CncsCXdrs-4AUf1YPv24zn8bvsSrp9ESTCmKrajMio.png?auto=webp&s=6305fc5bce66bc9c9f95284d5312802178704489', 'width': 1200}, 'variants': {}}]} |
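To make the single-threaded-read point concrete, here is a sketch of the kind of parallel positional-read test the linked issue discusses: several threads each read a disjoint range of the file with os.pread, which is roughly what a RAID 0 of partitions achieves at the block level. The path and chunk size are placeholders, and you would want to drop the page cache before running it, otherwise you are measuring RAM.

    # Parallel read throughput sketch; os.pread releases the GIL, so threads
    # genuinely overlap I/O. Path and sizes are placeholders.
    import os
    import time
    from concurrent.futures import ThreadPoolExecutor

    PATH = "/models/some-large-model.gguf"   # placeholder
    CHUNK = 64 * 1024 * 1024                 # 64 MiB per read call
    THREADS = 8

    def read_range(args):
        fd, offset, length = args
        done = 0
        while done < length:
            n = len(os.pread(fd, min(CHUNK, length - done), offset + done))
            if n == 0:
                break
            done += n
        return done

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    stride = (size + THREADS - 1) // THREADS
    ranges = [(fd, i * stride, max(0, min(stride, size - i * stride)))
              for i in range(THREADS)]

    start = time.time()
    with ThreadPoolExecutor(max_workers=THREADS) as pool:
        total = sum(pool.map(read_range, ranges))
    os.close(fd)
    print(f"read {total / 1e9:.1f} GB at {total / (time.time() - start) / 1e9:.2f} GB/s")

If this hits your fio numbers while cp and dd stay around 3 GB/s, the bottleneck really is the single reader rather than the drive or the virtualization layer.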
Cross-GPU prefix KV reuse with RDMA / NVLink - early experimental results | 15 | Been experimenting with a small prototype to reuse transformer KV attention states across GPUs. Current inference frameworks only reuse KV prefixes locally, so multi-GPU setups redo prefill work even when the prefix is identical.
I implemented a simple path where one process exports its prefix KV tensors, and another process with the same prefix imports them directly over GPU-to-GPU links. Under optimistic conditions I’m seeing about 15 percent latency reduction in early experiments.
I’d love feedback from anyone who has worked on multi-tier KV caching, RDMA/NVLink transports, or distributed inference scheduling. I made a small repo and a fork of vLLM that integrates it. (Link in the comments) | 2025-11-12T21:37:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ovic54/crossgpu_prefix_kv_reuse_with_rdma_nvlink_early/ | nsomani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovic54 | false | null | t3_1ovic54 | /r/LocalLLaMA/comments/1ovic54/crossgpu_prefix_kv_reuse_with_rdma_nvlink_early/ | false | false | self | 15 | null |
GitHub - neelsomani/kv-marketplace: Cross-GPU KV Cache Marketplace | 1 | Hi all - this is a small research prototype I built to explore cross-GPU reuse of transformer attention states.
When inference engines like vLLM implement prefix/KV caching, it's local to each replica. LMCache recently generalized this idea to multi-tier storage.
KV Marketplace focuses narrowly on the GPU-to-GPU fast path: peer-to-peer prefix reuse over RDMA or NVLink. Each process exports completed prefix KV tensors (key/value attention states) into a registry keyed by a hash of the input tokens and model version. Other processes with the same prefix can import those tensors directly from a peer GPU, bypassing host memory and avoiding redundant prefill compute.
Under optimistic conditions (perfect prefix importing), the prototype shows about a 15% reduction in latency and throughput gains without heavy tuning. The code is intentionally minimal (no distributed registry, eviction, or CPU/disk tiers yet) but it's a prototype of "memcached for attention."
I thought others exploring distributed LLM inference, caching, or RDMA transports might find the repo useful or interesting. | 2025-11-12T21:28:36 | https://github.com/neelsomani/kv-marketplace | nsomani | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ovi360 | false | null | t3_1ovi360 | /r/LocalLLaMA/comments/1ovi360/github_neelsomanikvmarketplace_crossgpu_kv_cache/ | false | false | default | 1 | null |
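For readers skimming, the core bookkeeping is simpler than it sounds. Below is a toy sketch of the registry idea only; it is not the repo's actual API, and the real system keeps tensors in GPU memory and moves them over RDMA/NVLink instead of through a Python dict.

    # Toy sketch: key completed prefix KV caches by (model version, prompt tokens)
    # so another worker with the same prefix can skip prefill. Not the repo's API.
    import hashlib
    from typing import Any, Dict, List

    class PrefixKVRegistry:
        def __init__(self) -> None:
            self._store: Dict[str, Any] = {}

        @staticmethod
        def _key(model_version: str, tokens: List[int]) -> str:
            raw = model_version.encode() + b"|" + str(tokens).encode()
            return hashlib.sha256(raw).hexdigest()

        def export(self, model_version: str, tokens: List[int], kv_cache: Any) -> None:
            # real implementation: register GPU memory for peer-to-peer transfer
            self._store[self._key(model_version, tokens)] = kv_cache

        def try_import(self, model_version: str, tokens: List[int]) -> Any:
            return self._store.get(self._key(model_version, tokens))

    registry = PrefixKVRegistry()
    registry.export("my-model-v1", [1, 2, 3, 4], kv_cache={"layer0": "..."})
    print(registry.try_import("my-model-v1", [1, 2, 3, 4]) is not None)  # True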
What local benchmarks are you running? | 8 | With the caveat upfront that LLM benchmarks all need to be taken with hefty grains of salt, I do think there's value in running them locally to:
1. Check vendor claims on LLM performance
2. Be able to compare the quantized models that most of us actually use
I'm running Aider polyglot right now against a couple Qwen3-Coder variants (unsloth dynamic Q6\_K\_XL vs. bartowski REAP Q8\_0) out of curiosity for these newfangled REAP models and their performance claims. However, it's a very long benchmark (like 4ish hours with 4k tk/s pp and >100 tk/s tg), and the results seem to vary significantly from run-to-run.
So, do any of you run local benchmarks that give quick and/or consistent results? Mostly interested in coding benchmarks, but happy to hear about others as well. | 2025-11-12T21:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ovhhag/what_local_benchmarks_are_you_running/ | MutantEggroll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovhhag | false | null | t3_1ovhhag | /r/LocalLLaMA/comments/1ovhhag/what_local_benchmarks_are_you_running/ | false | false | self | 8 | null |
What we shipped in MCI v1.2 and why it actually matters | 0 | Just shipped a bunch of quality-of-life improvements to MCI, and I'm honestly excited about how they simplify real workflows for **building custom MCP servers** on the fly 🚀
Here's what landed:
**Environment Variables Got a Major Cleanup**
We added the "mcix envs" command - basically a dashboard that shows you exactly what environment variables your tools can access. Before, you'd be guessing "did I pass that API key correctly?" Now you just run mcix envs and see everything.
Plus, MCI now has three clean levels of environment config:
\- .env (standard system variables)
\- .env.mci (MCI-specific stuff that doesn't pollute everything else)
\- inline env\_vars (programmatic control when you need it)
The auto .env loading feature means one less thing to manually manage. Just works.
**Props Now Parse as Full JSON**
Here's one that annoyed me before: if you wanted to pass complex data to a tool, you had to fight with string escaping. Now mci-py parses props as full JSON, so you can pass actual objects, arrays, nested structures - whatever you need. It just works as well.
**Default Values in Properties**
And the small thing that'll save you headaches: we added default values to properties. So if the agent forgets to pass a param, or the param is not in required, instead of failing it uses your sensible default. Less defensive coding, fewer runtime errors.
**Why This Actually Matters**
These changes are small individually but they add up to something important: less ceremony, more focus on what your tools actually do.
Security got cleaner (separation of concerns with env management), debugging got easier (mcix envs command), and day-to-day configuration got less error-prone (defaults, proper JSON parsing).
If you're using MCI or thinking about building tools with it, these changes make things genuinely better. Not flashy, just solid improvements.
Curious if anyone uses MCI in development - would love to hear what workflows you're trying to build with this stuff.
You can try it here: [https://usemci.dev/](https://usemci.dev/) | 2025-11-12T21:04:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ovhfrd/what_we_shipped_in_mci_v12_and_why_it_actually/ | Prestigious-Yam2428 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovhfrd | false | null | t3_1ovhfrd | /r/LocalLLaMA/comments/1ovhfrd/what_we_shipped_in_mci_v12_and_why_it_actually/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'zEKP_eifkMWx7j3EPpg-r3VcV84QH7pLWMOinnZZUSA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/zEKP_eifkMWx7j3EPpg-r3VcV84QH7pLWMOinnZZUSA.png?width=108&crop=smart&auto=webp&s=df0e9562f404e87d6fa1d2442bb72cc236b2cbcf', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/zEKP_eifkMWx7j3EPpg-r3VcV84QH7pLWMOinnZZUSA.png?width=216&crop=smart&auto=webp&s=64122026f2799018e97a40d2899d7871f484b463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/zEKP_eifkMWx7j3EPpg-r3VcV84QH7pLWMOinnZZUSA.png?width=320&crop=smart&auto=webp&s=562a3267fc1e80f1d93957cea8116f74d2b00adf', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/zEKP_eifkMWx7j3EPpg-r3VcV84QH7pLWMOinnZZUSA.png?width=640&crop=smart&auto=webp&s=2c73c4c6ce6b6f3fc6c7254e9d9a56811f6706e7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/zEKP_eifkMWx7j3EPpg-r3VcV84QH7pLWMOinnZZUSA.png?width=960&crop=smart&auto=webp&s=b78666ccb4cd4fa0c2901b608fe074e4e29f84f9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/zEKP_eifkMWx7j3EPpg-r3VcV84QH7pLWMOinnZZUSA.png?width=1080&crop=smart&auto=webp&s=3748fdf664ebdd1e9f5b097111805dac709da8d0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/zEKP_eifkMWx7j3EPpg-r3VcV84QH7pLWMOinnZZUSA.png?auto=webp&s=2b13c1b7e6360639f42741b84efe368d744ae968', 'width': 1200}, 'variants': {}}]} |
Evaluating Voice AI: Why it’s harder than it looks | 0 | I’ve been diving into the space of voice AI lately, and one thing that stood out is how tricky evaluation actually is. With text agents, you can usually benchmark responses against accuracy, coherence, or task success. But with voice, there are extra layers:
* **Latency**: Even a 200ms delay feels off in a live call.
* **Naturalness**: Speech quality, intonation, and flow matter just as much as correctness.
* **Turn-taking**: Interruptions, overlaps, and pauses break the illusion of a smooth conversation.
* **Task success**: Did the agent actually resolve what the user wanted, or just sound polite?
Most teams I’ve seen start with subjective human feedback (“does this sound good?”), but that doesn’t scale. For real systems, you need **structured evaluation workflows** that combine automated metrics (latency, word error rates, sentiment shifts) with **human-in-the-loop reviews** for nuance.
That’s where eval tools come in. They help run realistic scenarios, capture voice traces, and replay them for consistency. Without this layer, you’re essentially flying blind.
**Full disclosure**: I work with [Maxim AI](https://getmax.im/Max1m), and in my experience it’s been the most complete option for voice evals, it lets you test agents in live, multi-turn conversations while also benchmarking latency, interruptions, and outcomes. There are other solid tools too, but if voice is your focus, this one has been a standout. | 2025-11-12T20:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ovgt9b/evaluating_voice_ai_why_its_harder_than_it_looks/ | dinkinflika0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovgt9b | false | null | t3_1ovgt9b | /r/LocalLLaMA/comments/1ovgt9b/evaluating_voice_ai_why_its_harder_than_it_looks/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=108&crop=smart&auto=webp&s=2ac91097383d12b50cccd11a156d801425048149', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=216&crop=smart&auto=webp&s=fae40b26936652773a58a03f1d4a4baec2979212', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=320&crop=smart&auto=webp&s=1a444a7dd7d4b0466ac2677e15998bea07b28d8b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=640&crop=smart&auto=webp&s=856a61802fc5acd41967218550e53df81caa8e55', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=960&crop=smart&auto=webp&s=0dc7253f5f4daea12322fc48309b0ecb506c03e0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=1080&crop=smart&auto=webp&s=94df2b12217ce0373883be1122c1402454ad81eb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?auto=webp&s=66ed8b09519937ca22fa89b067d4bb96fecbc34a', 'width': 1200}, 'variants': {}}]} |
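Of the automated metrics mentioned above, word error rate is the one teams most often reimplement. A minimal sketch (plain edit distance over whitespace tokens; production pipelines normalize casing and punctuation first, and the example strings are placeholders):

    # Word error rate: (substitutions + insertions + deletions) / reference length,
    # via a standard edit-distance DP.
    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution or match
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    print(wer("cancel my order please", "cancel my older please"))  # 0.25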
Thoughts on the AMD BC-250 16GB "Cards"? | 2 | I have the opportunity to pick up 12 AMD BC-250 cards already in an enclosure for dirt cheap. My biggest gripe with the setup is no PCI-e connection and a limited ethernet speed. I believe the ethernet ports of each are rated for one gigabit per second, though I likely could get \~2/3 Gb/s using the USB 3.0.
With this setup, could I only feasibly run MoE or small models on each? I know it would likely be a pain in the ass to set up, though the price and VRam are making me think it could be worth it. Long term, I'd love to be able to run large dense models which makes me lean against this setup. Any help is appreciated | 2025-11-12T20:39:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ovgs5p/thoughts_on_the_amd_bc250_16gb_cards/ | nstein5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovgs5p | false | null | t3_1ovgs5p | /r/LocalLLaMA/comments/1ovgs5p/thoughts_on_the_amd_bc250_16gb_cards/ | false | false | self | 2 | null |
Creating an inference provider that host quantized models. Feedback appreciated | 0 | Hello. I think I found a way to create a decent preforming 4-bit quantized model from any given model. I plan to host these quantized models on the cloud and charge for inference. I designed the inference to be faster than other providers.
What models do you think I should quantize and host and are much needed? What you be looking for in a service like this? cost? inference speed? what is your pain points with other provides?
Appreciate your feedback | 2025-11-12T20:19:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ovg8xl/creating_an_inference_provider_that_host/ | textclf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovg8xl | false | null | t3_1ovg8xl | /r/LocalLLaMA/comments/1ovg8xl/creating_an_inference_provider_that_host/ | false | false | self | 0 | null |
Error handling model response on continue.dev/ollama only on edit mode | 0 | Hi, i get this error only when i need to use edit mode on vs code. I selected 2 lines of code only when i press ctrl + i. Chat and autocomplete works fine. This is my config. Thanks
name: Local Agent
version: 1.0.0
schema: v1
models:
  - name: gpt-oss
    provider: ollama
    model: gpt-oss:20b
    roles:
      - chat
      - edit
      - apply
      - summarize
    capabilities:
      - tool_use
  - name: qwen 2.5 coder 7b
    provider: ollama
    model: qwen2.5-coder:7b
    roles:
      - autocomplete
https://preview.redd.it/k4itvw7nwv0g1.png?width=553&format=png&auto=webp&s=057abd5a1020dc055c61f093207ebd828f529bed
| 2025-11-12T20:19:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ovg8ju/error_handling_model_response_on/ | sebastraits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovg8ju | false | null | t3_1ovg8ju | /r/LocalLLaMA/comments/1ovg8ju/error_handling_model_response_on/ | false | false | 0 | null | |
lmarena.ai unreliable | 0 | I found the ranking on lmarena.ai quite suspicious.
I tested many prompts across multiple models and discovered signs of manipulation: when I asked GLM‑4.6, it claimed to be Google Gemini.
I suspect they manipulated results to secure sponsorship or funding.
---
Furthermore, under the ranking methodology used by lmarena.ai, models that are overfitting (for example Gemini, Kimi K2 Thinking, Minimax M2) are ranked higher than other models.
Overfitting occurs when a model learns the training data too well and fails to generalize to new data. | 2025-11-12T19:44:35 | https://www.reddit.com/gallery/1ovfafs | LeTanLoc98 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ovfafs | false | null | t3_1ovfafs | /r/LocalLLaMA/comments/1ovfafs/lmarenaai_unreliable/ | false | false | 0 | null | |
Fine-tuning a model on a groupchat: Qwen2.5 0.5B running in-browser | 6 | I fine-tuned my first model with r/LocalLLaMA 's help! I took 50,000 messages from my college groupchat, and trained a Qwen3 4B, Qwen3 0.6B, and ultimately a Qwen2.5 0.5B to shrink it small enough to run **in-browser** with [WebLLM](https://webllm.mlc.ai/). You can even chat with it here: [https://www.infinitegroupchat.com/](https://www.infinitegroupchat.com/) (WebGPU / iOS26 required)
https://reddit.com/link/1ovef51/video/6qklefnpkv0g1/player
Training and running locally with Ollama was super easy, but I couldn't find a good cheap place to host the resulting model - saw a few threads here with a similar problem. Hosting in-browser was actually great for this, and I wanted to share the approach for other folks looking for a free way to share their models with friends. [Here's a Colab notebook](https://colab.research.google.com/drive/1NbdAzcVNGsCSHNTFnOMYpmp-pDwntukw#scrollTo=download-code) to convert models to MLC format which is the only thing needed.
Wondering if anyone else has done something similar, or has other techniques they like? Wrote up a full post below with more detail, happy to answer any questions too
[https://www.brimtown.com/train-on-your-groupchat](https://www.brimtown.com/train-on-your-groupchat) | 2025-11-12T19:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ovef51/finetuning_a_model_on_a_groupchat_qwen25_05b/ | _brimtown | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovef51 | false | null | t3_1ovef51 | /r/LocalLLaMA/comments/1ovef51/finetuning_a_model_on_a_groupchat_qwen25_05b/ | false | false | self | 6 | null |
Why Ampere Workstation/Datacenter/Server GPUs are still so expensive after 5+ years? | 54 | Hello guys, just an small discussion that came to my mind after reading this post [https://www.reddit.com/r/LocalLLaMA/comments/1ovatvf/where\_are\_all\_the\_data\_centers\_dumping\_their\_old/](https://www.reddit.com/r/LocalLLaMA/comments/1ovatvf/where_are_all_the_data_centers_dumping_their_old/)
I guess it makes a bit of sense that Ada workstation/datacenter/server cards are still expensive, as they support FP8 and have way more compute than Ampere, e.g.:
* RTX 6000 Ada (48GB), on ebay for about 5000 USD.
* RTX 5000 Ada (32GB), on ebay for about 2800-3000 USD.
* RTX 4000 Ada (24GB), on ebay for about 1200 USD.
* NVIDIA L40 (48GB), on ebay for about 7000 USD.
* NVIDIA L40S (48GB), on ebay for about 7000 USD.
* NVIDIA L4 (24 GB), on ebay for about 2200 to 2800 USD.
While, for Ampere, we have these cases:
* RTX A6000 (48GB), on ebay for about 4000-4500 USD.
* RTX A5000 (24GB), on ebay for about 1400 USD.
* RTX A4000 (16GB), on ebay for about 750 USD.
* NVIDIA A40 (48GB), on ebay for about 4000 USD.
* NVIDIA A100 (40GB) PCIe, on ebay for about 4000 USD.
* NVIDIA A100 (80GB) PCIe, on ebay for about 7000 USD.
* NVIDIA A10 (24GB), on ebay for about 1800 USD.
So these cards are slower (about half the performance of Ada), some have less VRAM, and they don't support FP8.
Why are they still so expensive, what do you guys think? | 2025-11-12T18:59:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ove1px/why_ampere_workstationdatacenterserver_gpus_are/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ove1px | false | null | t3_1ove1px | /r/LocalLLaMA/comments/1ove1px/why_ampere_workstationdatacenterserver_gpus_are/ | false | false | self | 54 | null |
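Putting the eBay figures above on a price-per-GB-of-VRAM basis makes the comparison easier to eyeball. The numbers below are the approximate prices listed in the post (midpoints where a range was given):

    # Price per GB of VRAM, using the approximate eBay prices listed above.
    cards = {
        "RTX 6000 Ada (48GB)": (5000, 48),
        "RTX 5000 Ada (32GB)": (2900, 32),
        "RTX 4000 Ada (24GB)": (1200, 24),
        "L40S (48GB)":         (7000, 48),
        "RTX A6000 (48GB)":    (4250, 48),
        "RTX A5000 (24GB)":    (1400, 24),
        "RTX A4000 (16GB)":    (750, 16),
        "A100 80GB PCIe":      (7000, 80),
    }
    for name, (usd, gb) in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1]):
        print(f"{name:22s} {usd / gb:6.1f} USD per GB")

By that measure the A5000 is noticeably cheaper per GB than its Ada counterpart, but the 48GB and 80GB Ampere parts sit surprisingly close to the Ada cards despite being slower and lacking FP8, which is exactly the oddity the post is asking about.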
LLM for math | 0 | I’m currently curious about what kind of math problems can Ilm solve — does it base on topics (linear algebra, multi-variable calculus …)or base on specific logic? And thus, how could we categorize problems by what can be solved by LLM and what cannot? | 2025-11-12T18:35:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ovddek/llm_for_math/ | Hopeful_Geologist749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovddek | false | null | t3_1ovddek | /r/LocalLLaMA/comments/1ovddek/llm_for_math/ | false | false | self | 0 | null |
[Followup] Qwen3 VL 30b a3b is pure love (or not so much) | 34 | A couple of days ago I posted here showcasing a video of the webapp I'm currently making. Qwen3-VL 30B-A3B MoE got me back into this project because it amazed how good it is! (Self promotion at the end: My Project is now open sourced and avaialalbe as an easy to deploy docker container...)
**Original post:** [https://www.reddit.com/r/LocalLLaMA/comments/1omr9rc/qwen3\_vl\_30b\_a3b\_is\_pure\_love/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/LocalLLaMA/comments/1omr9rc/qwen3_vl_30b_a3b_is_pure_love/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
**TL;DR:** This project provides an easy way to turn images into structured data. But Qwen3-VL 30B-A3B does not follow the prompt telling it not to extract data that is not visible in the images. Instead it confidently generates fake data that passes formatting checks, making it unsuitable for some fully automated tasks.
Well, actually using the model together with my app made me realize that it is not actually as good as expected. It's still pretty good though, to be honest.
However, I ran into a really interesting problem:
Remember that post from a few months or a year ago, where someone showed an image of a cat with 5 photoshopped legs to a Vision LLM with the question "how many legs"? The answer would always be 4. Simply because the LLM learned cats have 4 legs → therefore this cat has 4 legs. It's not actually counting the legs in the image. Instead it sees a cat and answers 4.
Same thing happened to me using Qwen3 30B-A3B.
I tried to extract structured data from chemical containers, asking for CAS numbers, which have a specific format. I specifically asked the model not to write down a CAS number if it's not visible. Any number that does not fit the specific format cannot be a CAS number (maybe that's even the fault - I'll try not specifying the format).
Gemini models would respect that instruction. Qwen3 4B would also respect it (instead, it would sometimes misinterpret other numbers as CAS numbers, ignoring the format instructions, which would then result in them not passing formatting checks).
But Qwen3 30B-A3B would simply ignore my prompt not to make up numbers if they are not visible. Even worse: it's smart enough to make up CAS numbers that fit the formatting rules and the inbuilt checksum. They seem totally legitimate but are still wrong. Hence I wouldn't be able to filter those with simple postprocessing, and would pollute my dataset if I'd take the extracted data unreviewed.
I've done a detailed comparison of Qwen3-VL 30B-A3B, Qwen3-VL 4B, and Gemini 2.5 Flash in these scenarios. You can find numbers, plots, and methodology here, have a read if you want to.
[https://janbndrf.github.io/Tabtin/#Qwen](https://janbndrf.github.io/Tabtin/#Qwen)
The webapp you're seeing in the video is now available as an easy-to-deploy Docker container. I called it Tabtin. It works with local models, Google AI Studio, and OpenRouter.
Check it out: [https://github.com/janbndrf/tabtin](https://github.com/janbndrf/tabtin)
| 2025-11-12T18:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ovcxu0/followup_qwen3_vl_30b_a3b_is_pure_love_or_not_so/ | Njee_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovcxu0 | false | null | t3_1ovcxu0 | /r/LocalLLaMA/comments/1ovcxu0/followup_qwen3_vl_30b_a3b_is_pure_love_or_not_so/ | false | false | self | 34 | null |
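Since the post leans on the CAS format and checksum for post-processing, here is what that check looks like in code. A small sketch assuming the standard CAS rule (2-7 digits, 2 digits, 1 check digit, where the check digit is the positionally weighted digit sum mod 10); as the post notes, it cannot catch a model that fabricates checksum-valid numbers, only ones that merely look plausible:

    # CAS number validation: format N{2,7}-NN-N plus a weighted-digit checksum.
    import re

    def is_valid_cas(cas: str) -> bool:
        if not re.fullmatch(r"\d{2,7}-\d{2}-\d", cas):
            return False
        digits = cas.replace("-", "")
        body, check = digits[:-1], int(digits[-1])
        total = sum(int(d) * w for w, d in enumerate(reversed(body), start=1))
        return total % 10 == check

    print(is_valid_cas("7732-18-5"))  # True  (water)
    print(is_valid_cas("7732-18-4"))  # False (wrong check digit)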
Best method for vision model lora inference | 1 | I have finetuned Qwen 7b VL 4 bit model using unsloth and I want to get the best throughput .
Currently I am getting results for 6 images with a token size of 1000.
How can I increase the speed and what is the best production level solution? | 2025-11-12T18:18:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ovcwfx/best_method_for_vision_model_lora_inference/ | Unique_Yogurtcloset8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovcwfx | false | null | t3_1ovcwfx | /r/LocalLLaMA/comments/1ovcwfx/best_method_for_vision_model_lora_inference/ | false | false | self | 1 | null |
Replace Sonnet 4.5 with Minimax-M2 for my 3D app -> same quality with like 1/10th costs | 22 | Using LLMs to control a modelling software, which requires a lot of thinking and tool calling, so I've been using Sonnet in the most complex portion of the workflow. Ever since I saw minimax can match sonnet in benchmarks, I replaced the model and haven't seen a degradation in output (3d model output in my case). | 2025-11-12T17:55:54 | spacespacespapce | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ovc90m | false | null | t3_1ovc90m | /r/LocalLLaMA/comments/1ovc90m/replace_sonnet_45_with_minimaxm2_for_my_3d_app/ | false | false | default | 22 | {'enabled': True, 'images': [{'id': 'swomto3t6v0g1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/swomto3t6v0g1.png?width=108&crop=smart&auto=webp&s=06978747bfaa8378753e47ddbc1cfc751a8627fe', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/swomto3t6v0g1.png?width=216&crop=smart&auto=webp&s=f9c3014cd0a32c85b6cb60ed04d119f9a38e7198', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/swomto3t6v0g1.png?width=320&crop=smart&auto=webp&s=6dfe26dd34275d2245ece9774fa0e56ec308a971', 'width': 320}, {'height': 500, 'url': 'https://preview.redd.it/swomto3t6v0g1.png?width=640&crop=smart&auto=webp&s=d82219e652879369cab437c9f8023468d498194a', 'width': 640}, {'height': 750, 'url': 'https://preview.redd.it/swomto3t6v0g1.png?width=960&crop=smart&auto=webp&s=d9b7678568e3f1aa4954fcb3dff9ae378359e8d5', 'width': 960}, {'height': 844, 'url': 'https://preview.redd.it/swomto3t6v0g1.png?width=1080&crop=smart&auto=webp&s=d622ddb554991d09738a25cb0ede1686d3e72407', 'width': 1080}], 'source': {'height': 1702, 'url': 'https://preview.redd.it/swomto3t6v0g1.png?auto=webp&s=45f6e468e71b4728cc343151b93376be0ee5592d', 'width': 2176}, 'variants': {}}]} | |
Live VLM WebUI - Web interface for Ollama vision models with real-time video streaming | 182 | Hey r/LocalLLaMA! 👋
I'm a Technical Marketing Engineer at NVIDIA working on Jetson, and we just open-sourced [**Live VLM WebUI**](https://github.com/nvidia-ai-iot/live-vlm-webui) \- a tool for testing Vision Language Models locally with real-time video streaming.
# What is it?
Stream your webcam to any Ollama vision model (or other VLM backends) and get real-time AI analysis overlaid on your video feed. Think of it as a convenient interface for testing vision models in real-time scenarios.
**What it does:**
* Stream live video to the model (not screenshot-by-screenshot)
* Show you exactly how fast it's processing frames
* Monitor GPU/VRAM usage in real-time
* Work across different hardware (PC, Mac, Jetson)
* Support multiple backends (Ollama, vLLM, NVIDIA API Catalog, OpenAI)
# Key Features
* **WebRTC video streaming** \- Low latency, works with any webcam
* **Ollama native support** \- Auto-detect `http://localhost:11434`
* **Real-time metrics** \- See inference time, GPU usage, VRAM, tokens/sec
* **Multi-backend** \- Also works with vLLM, NVIDIA API Catalog, OpenAI
* **Cross-platform** \- Linux PC, DGX Spark, Jetson, Mac, WSL
* **Easy install** \- `pip install live-vlm-webui` and you're done
* **Apache 2.0** \- Fully open source, accepting community contributions
# 🚀 Quick Start with Ollama
# 1. Make sure Ollama is running with a vision model
ollama pull gemma3:4b
# 2. Install and run
pip install live-vlm-webui
live-vlm-webui
# 3. Open https://localhost:8090
# 4. Select "Ollama" backend and your model
# Use Cases I've Found Helpful
* **Model comparison** \- Testing `gemma3:4b` vs `gemma3:12b` vs `llama3.2-vision` on the same scenes
* **Performance benchmarking** \- See actual inference speed on your hardware
* **Interactive demos** \- Show people what vision models can do in real-time
* **Real-time prompt engineering** \- Tune your vision prompt while seeing the result in real time
* **Development** \- Quick feedback loop when working with VLMs
# Models That Work Great
Any Ollama vision model:
* `gemma3:4b`, `gemma3:12b`
* `llama3.2-vision:11b`, `llama3.2-vision:90b`
* `qwen2.5-vl:3b`, `qwen2.5-vl:7b`, `qwen2.5-vl:32b`, `qwen2.5-vl:72b`
* `qwen3-vl:2b`, `qwen3-vl:4b`, all the way up to `qwen3-vl:235b`
* `llava:7b`, `llava:13b`, `llava:34b`
* `minicpm-v:8b`
# Docker Alternative
docker run -d --gpus all --network host \
ghcr.io/nvidia-ai-iot/live-vlm-webui:latest
# What's Next?
Planning to add:
* Copy analysis results to clipboard, plus logging and export
* Model comparison view (side-by-side)
* Better prompt templates
# Links
**GitHub:** [https://github.com/nvidia-ai-iot/live-vlm-webui](https://github.com/nvidia-ai-iot/live-vlm-webui)
**Docs:** [https://github.com/nvidia-ai-iot/live-vlm-webui/tree/main/docs](https://github.com/nvidia-ai-iot/live-vlm-webui/tree/main/docs)
**PyPI:** [https://pypi.org/project/live-vlm-webui/](https://pypi.org/project/live-vlm-webui/)
Would love to hear what you think! What features would make this more useful for your workflows? PRs and issues welcome - this is meant to be a community tool.
> ## A bit of background
>
> This community has been a huge inspiration for our work. When we launched the [Jetson Generative AI Lab](https://developer.nvidia.com/blog/bringing-generative-ai-to-life-with-jetson/), r/LocalLLaMA was literally cited as one of the key communities driving the local AI movement.
>
> WebRTC integration for real-time camera streaming into VLMs on Jetson was pioneered by our colleague a while back. It was groundbreaking but tightly coupled to specific setups. Then Ollama came along and with their standardized API we suddenly could serve vision models in a way that works anywhere.
>
> We realized we could take that WebRTC streaming approach and modernize it: make it work with any VLM backend through standard APIs, run on any platform, and give people a better experience than uploading images on Open WebUI and waiting for responses.
>
> So this is kind of the evolution of that original work - taking what we learned on Jetson and making it accessible to the broader local AI community.
Happy to answer any questions about setup, performance, or implementation details! | 2025-11-12T17:50:56 | lektoq | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ovc3sj | false | null | t3_1ovc3sj | /r/LocalLLaMA/comments/1ovc3sj/live_vlm_webui_web_interface_for_ollama_vision/ | false | false | default | 182 | {'enabled': True, 'images': [{'id': 'n5cc10ph5v0g1', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/n5cc10ph5v0g1.png?width=108&crop=smart&auto=webp&s=439748f9acf605888378bc59c53a3f245e6fbb0d', 'width': 108}, {'height': 194, 'url': 'https://preview.redd.it/n5cc10ph5v0g1.png?width=216&crop=smart&auto=webp&s=1acde05144b8654877f953ab9d6f16685dede932', 'width': 216}, {'height': 287, 'url': 'https://preview.redd.it/n5cc10ph5v0g1.png?width=320&crop=smart&auto=webp&s=a9faab7714d56acca1e2758556699ee395057b21', 'width': 320}, {'height': 575, 'url': 'https://preview.redd.it/n5cc10ph5v0g1.png?width=640&crop=smart&auto=webp&s=badc8223252000cebb903a5e944f40eb1d1caa53', 'width': 640}, {'height': 863, 'url': 'https://preview.redd.it/n5cc10ph5v0g1.png?width=960&crop=smart&auto=webp&s=19e2929980d0561e5cbc3afb7e90d24d44080b17', 'width': 960}, {'height': 971, 'url': 'https://preview.redd.it/n5cc10ph5v0g1.png?width=1080&crop=smart&auto=webp&s=497dc5862d8d1432b046ce51770626f3ce52095e', 'width': 1080}], 'source': {'height': 2281, 'url': 'https://preview.redd.it/n5cc10ph5v0g1.png?auto=webp&s=50e7419d2f915ed6b2462b32c2dcc2dc0bc291bf', 'width': 2536}, 'variants': {}}]} | |
AI setup for cheap? | 5 | Hi. My current setup is: i7-9700f, RTX 4080, 128GB RAM at 3745MHz. With GPT-OSS-120B I get ~10.5 tokens per second, and only 3.0-3.5 tokens per second with Qwen3 VL 235B A22B Thinking. I allocate maximum context for GPT-OSS, and 3/4 of the available context for Qwen3. I split the layers between the GPU and CPU. It's very slow, but I'm not such a big AI fan that I'd buy a 4090 with 48GB or something like that. So I thought: if I'm offloading the experts to the CPU, then my CPU is the bottleneck for these models. What if I build a cheap Xeon system? For example, buy a Chinese dual-socket motherboard, install 256GB of RAM in quad-channel mode, install two 24-core processors, and keep my RTX 4080. Surely such a system should be faster than what I have now? And even coming from a single 8-core CPU, such a setup would still be cheaper than a 48GB RTX 4090. I'm not chasing 80 tokens or more; I personally find ~25 tokens per second sufficient, which I consider the minimum acceptable speed. What do you think? Is it a crazy idea?
Is Polish better for prompting LLMs? Case study: Logical puzzles | 57 | Hey, recently this article made waves within many LLM communities: [https://www.euronews.com/next/2025/11/01/polish-to-be-the-most-effective-language-for-prompting-ai-new-study-reveals](https://www.euronews.com/next/2025/11/01/polish-to-be-the-most-effective-language-for-prompting-ai-new-study-reveals) as it claimed (based on a study by researchers from The University of Maryland and Microsoft) that Polish is the best language for prompting LLMs.
So I decided to put it to a small test. I dug up a couple of books with puzzles, chose some random ones, translated them from the original Polish into English, and made them into two benchmarks. I ran them on a bunch of LLMs and here are the results. Not so obvious after all:
https://preview.redd.it/iij23lcx2v0g1.png?width=1889&format=png&auto=webp&s=382a824c1a766a14f4bad1b86f158c232463dd5f
On the left you see the results for the original Polish dataset, on the right the English version.
Some quick insights:
* Overall the **average accuracy** was a little over 2 percentage points higher on Polish.
* **Grok models:** Exceptional multilingual consistency
* **Google models:** Mixed—flagship dropped, flash variants improved
* **DeepSeek models:** Strong English bias
* **OpenAI models:** Both ChatGPT-4o and GPT-4o performed worse in Polish
If you want me to run the Benchmarks on any other models, let me know. | 2025-11-12T17:40:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ovbssf/is_polish_better_for_prompting_llms_case_study/ | Substantial_Sail_668 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovbssf | false | null | t3_1ovbssf | /r/LocalLLaMA/comments/1ovbssf/is_polish_better_for_prompting_llms_case_study/ | false | false | 57 | null | |
In theory, does int4 QAT training (e.g. Kimi k2 thinking) help or hurt further quantization? | 5 | With quantization aware training, should we expect Kimi K2 GGUFs at q4 or q3 and below, to be better than FP16 >> Q4, because they are closer to the original Int4? Or worse, because they are further compressing an already very efficiently structured model? | 2025-11-12T17:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ovbq3f/in_theory_does_int4_qat_training_eg_kimi_k2/ | nomorebuttsplz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovbq3f | false | null | t3_1ovbq3f | /r/LocalLLaMA/comments/1ovbq3f/in_theory_does_int4_qat_training_eg_kimi_k2/ | false | false | self | 5 | null |
Current SoTA with multimodal embeddings | 1 | There have been some great multimodal models released lately, namely the Qwen3 VL and Omni, but looking at the embedding space, multimodal options are quite sparse. It seems like nomic-ai/colnomic-embed-multimodal-7b is still the SoTA after 7 months, which is a long time in this field. Are there any other models worth considering? Most important is vision embeddings, but one with audio as well would be interesting. | 2025-11-12T17:23:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ovbd3j/current_sota_with_multimodal_embeddings/ | SubstantialSock8002 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovbd3j | false | null | t3_1ovbd3j | /r/LocalLLaMA/comments/1ovbd3j/current_sota_with_multimodal_embeddings/ | false | false | self | 1 | null |
Where are all the data centers dumping their old decommissioned GPUs? | 270 | In 2022, I purchased a lot of Tesla P40s on eBay, but unfortunately, because of their outdated architecture, they are now practically useless for what I want to do. It seems like newer-generation GPUs aren’t finding their way into consumers' hands. I asked my data center connection and he said they are recycling them, but they’ve always been doing this and we could still get hardware.
With the amount of commercial GPUs in the market right now, you would think there would be some overflow?
I hope to be wrong and suck at resourcing now, any help? | 2025-11-12T17:05:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ovatvf/where_are_all_the_data_centers_dumping_their_old/ | AffectSouthern9894 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovatvf | false | null | t3_1ovatvf | /r/LocalLLaMA/comments/1ovatvf/where_are_all_the_data_centers_dumping_their_old/ | false | false | self | 270 | null |
Need help training a 1b parameter model | 0 | I know this is the wrong place to post this and I'm really sorry for that, but it would be really helpful if someone could help with the $100. I'll be training in the cloud and I'm a little tight on budget, so I thought asking might be a better idea.
Help only if you can, and not under any force or pressure.
Also I'll definitely public model and the weights if it succeeds. | 2025-11-12T17:02:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ovaret/need_help_training_a_1b_parameter_model/ | ExcellentAirport504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovaret | false | null | t3_1ovaret | /r/LocalLLaMA/comments/1ovaret/need_help_training_a_1b_parameter_model/ | false | false | self | 0 | null |
An A.I mental wellness tool that sounds human, Requesting honest feedback and offering early access. | 0 | Hello everyone,
During COVID, I developed some social anxiety. I've been sitting on the idea of seeing a professional therapist, but it's not just the cost, there's also a real social stigma where I live. People can look down on you if they find out.
As a Machine Learning Engineer, I started wondering: could an AI specialized in this field help me, even just a little?
I tried ChatGPT and other general-purpose LLMs. They were a short-lived bliss, yes, but the issue is they always agree with you. It feels good for a second, but in the back of your mind, you know it's not really helping; it's just a "feel good" button.
So, I consulted some friends and built a prototype of a specialized LLM. It's a smaller model for now, but I fine-tuned it on high-quality therapy datasets (using techniques like CBT). The big thing it was missing was a touch of human empathy. To solve this, I integrated a realistic voice that doesn't just sound human but has empathetic expressions, creating someone you can talk to in real-time.
I've called it "Solace."
I've seen other mental wellness AIs, but they seem to lack the empathetic feature I was craving. So I'm turning to you all. Is it just me, or would you also find value in a product like this?
That's what my startup, ApexMind, is based on. I'm desperately looking for honest reviews based on our demo.
If this idea resonates with you and you'd like to see the demo, please head here; it's a simple, free Google Form: [https://docs.google.com/forms/d/e/1FAIpQLSc8TAKxjUzyHNou4khxp7Zrl8eWoyIZJXABeWpv3r0nceNHeA/viewform](https://docs.google.com/forms/d/e/1FAIpQLSc8TAKxjUzyHNou4khxp7Zrl8eWoyIZJXABeWpv3r0nceNHeA/viewform)
If you agree this is a needed tool, you'll be among the first to get access when we roll out the Solace beta. But what I need most right now is your honest feedback (positive or negative).
Thank you. Once again, the demo and short survey are in the link of my profile I'm happy to answer any and all questions in the comments or DMs. tell me reddit group name where i can post this to get most users review | 2025-11-12T16:54:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ovaiuo/an_ai_mental_wellness_tool_that_sounds_human/ | MelkorSparrow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ovaiuo | false | null | t3_1ovaiuo | /r/LocalLLaMA/comments/1ovaiuo/an_ai_mental_wellness_tool_that_sounds_human/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'htQh4S5ajR3_Vd5guAa9GWjoIrgKqMIhDR_EceTFs3I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/htQh4S5ajR3_Vd5guAa9GWjoIrgKqMIhDR_EceTFs3I.png?width=108&crop=smart&auto=webp&s=96e001ab58b4c9dc9a6ec8d170d1103a5d6c8d5f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/htQh4S5ajR3_Vd5guAa9GWjoIrgKqMIhDR_EceTFs3I.png?width=216&crop=smart&auto=webp&s=25417b41d96be8b01c8528a4583d2563c26c81d5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/htQh4S5ajR3_Vd5guAa9GWjoIrgKqMIhDR_EceTFs3I.png?width=320&crop=smart&auto=webp&s=208c4cdd2ec1845e4ca2771c6b6fcc64075336ce', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/htQh4S5ajR3_Vd5guAa9GWjoIrgKqMIhDR_EceTFs3I.png?width=640&crop=smart&auto=webp&s=6a1a5b0337d558012e9bc770e3874faa19992fdb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/htQh4S5ajR3_Vd5guAa9GWjoIrgKqMIhDR_EceTFs3I.png?width=960&crop=smart&auto=webp&s=5985afaf9a3d0c1e27e05622112828ce20796578', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/htQh4S5ajR3_Vd5guAa9GWjoIrgKqMIhDR_EceTFs3I.png?width=1080&crop=smart&auto=webp&s=2a844161ca7643892013f291ef246daf72d9bf89', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/htQh4S5ajR3_Vd5guAa9GWjoIrgKqMIhDR_EceTFs3I.png?auto=webp&s=751e022518d1e21166f56108a7e757328a9a6a20', 'width': 1200}, 'variants': {}}]} |
Which Local Language Model Suits my needs. | 0 | Hello, I apologise for asking a question that's probably a bit dumb. But I want a model that doesn't fear-monger, like ChatGPT-4o (the 4o that was released before GPT-5 ruined everything for me), which I felt was nice, balanced, and pretty chill to talk to, even if a bit obsequious.
So I am wondering if there is a corresponding model that could sort of replicate that feeling for me. I would also like to share personal things with a local LLM that I don't necessarily want to share with models hosted in the cloud.
Keeping this in mind, what do you guys recommend? What model and which machine?
I have two machines:
MacBook Air M1 Base (8/256)
and a Windows Laptop: Core 5 210H, RTX 3050A-65W TGP, 16GB RAM, 4GB VRAM. (Nothing particularly impressive though lol) | 2025-11-12T16:44:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ova8hx/which_local_language_model_suits_my_needs/ | bangboobie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ova8hx | false | null | t3_1ova8hx | /r/LocalLLaMA/comments/1ova8hx/which_local_language_model_suits_my_needs/ | false | false | self | 0 | null |
What should I buy: a $3K phone or an nVidia 5090? | 0 | [https://x.com/UniverseIce/status/1988545779410391510](https://x.com/UniverseIce/status/1988545779410391510)
Which one would be better for chatting with an LLM?
I'm thinking the phone because it is portable. What do you think? | 2025-11-12T16:41:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ova5e7/what_should_i_buy_a_3k_phone_or_an_nvidia_5090/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ova5e7 | false | null | t3_1ova5e7 | /r/LocalLLaMA/comments/1ova5e7/what_should_i_buy_a_3k_phone_or_an_nvidia_5090/ | false | false | self | 0 | null |
Has the USA/EU given up on open weight models? | 92 | In the last couple of months we only see Chinese models (thank God), I don't remember that in recent months we had any open model that came from the USA/EU, do you think they changed their tactics and don't care anymore? | 2025-11-12T16:20:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ov9lug/has_the_usaeu_given_up_on_open_weight_models/ | justDeveloperHere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov9lug | false | null | t3_1ov9lug | /r/LocalLLaMA/comments/1ov9lug/has_the_usaeu_given_up_on_open_weight_models/ | false | false | self | 92 | null |
What is this new "Viper" model on LMArena? | 6 | It created a very impressive animation of a dog moving its tail, the prompt was "generate a realistic svg of a dog moving its tail"
Codepen: [https://codepen.io/Alecocluc/pen/vEGOvQj](https://codepen.io/Alecocluc/pen/vEGOvQj) | 2025-11-12T15:46:52 | Alecocluc | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ov8obj | false | null | t3_1ov8obj | /r/LocalLLaMA/comments/1ov8obj/what_is_this_new_viper_model_on_lmarena/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'tya8rmm7ku0g1', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/tya8rmm7ku0g1.png?width=108&crop=smart&auto=webp&s=23d41aadea14caa37b162fb88275a9eb18605f28', 'width': 108}, {'height': 262, 'url': 'https://preview.redd.it/tya8rmm7ku0g1.png?width=216&crop=smart&auto=webp&s=0cd182dff88c9fa14294ad10ba3911963abfadc2', 'width': 216}, {'height': 389, 'url': 'https://preview.redd.it/tya8rmm7ku0g1.png?width=320&crop=smart&auto=webp&s=7a1cbab748b55bad705c059bc638cf960229b051', 'width': 320}], 'source': {'height': 614, 'url': 'https://preview.redd.it/tya8rmm7ku0g1.png?auto=webp&s=5a17d85d5939f45d80569466794e9f923dcf0aed', 'width': 505}, 'variants': {}}]} | |
2*dgx spark | 1 | Hi, I want to create around 20 AI assistants, each needing different model parameters and context lengths, with up to 6-8 assistants running at the same time,
and I am planning to purchase two NVIDIA DGX Sparks.
can you give some advice ( I'am a beginner in this field) | 2025-11-12T15:40:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ov8hw5/2dgx_spark/ | Flashy_Upstairs_7491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov8hw5 | false | null | t3_1ov8hw5 | /r/LocalLLaMA/comments/1ov8hw5/2dgx_spark/ | false | false | self | 1 | null |
Anyone know a framework where an AI agent can actually “remember” knowledge without using RAG or fine-tuning? | 1 | [removed] | 2025-11-12T15:21:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ov809y/anyone_know_a_framework_where_an_ai_agent_can/ | Physical_Event4441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov809y | false | null | t3_1ov809y | /r/LocalLLaMA/comments/1ov809y/anyone_know_a_framework_where_an_ai_agent_can/ | false | false | self | 1 | null |
Anyone know a framework where an AI agent can actually “remember” knowledge without using RAG or fine-tuning? | 1 | [removed] | 2025-11-12T15:16:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ov7val/anyone_know_a_framework_where_an_ai_agent_can/ | Physical_Event4441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov7val | false | null | t3_1ov7val | /r/LocalLLaMA/comments/1ov7val/anyone_know_a_framework_where_an_ai_agent_can/ | false | false | self | 1 | null |
Looking for an agent framework that supports true “knowledge memory” (not RAG, embeddings, or fine-tuning) | 1 | [removed] | 2025-11-12T15:13:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ov7scr/looking_for_an_agent_framework_that_supports_true/ | Physical_Event4441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov7scr | false | null | t3_1ov7scr | /r/LocalLLaMA/comments/1ov7scr/looking_for_an_agent_framework_that_supports_true/ | false | false | self | 1 | null |
Stop fine-tuning your model for every little thing. You're probably wasting your time. | 9 | Alright, confession time. I just wasted three weeks and a chunk of my compute budget trying to fine-tune a model to answer questions about our internal API. The results were... mediocre at best. It kinda knew the stuff, but it also started hallucinating in new and creative ways, and forgot how to do basic things it was good at before.
It was a massive facepalm moment. Because the solution was way, way simpler.
I feel like "fine-tuning" has become this default magic wand people wave when an LLM isn't perfect. But 80% of the time, what you actually need is RAG (Retrieval-Augmented Generation). Let me break it down without the textbook definitions.
RAG is like giving your AI a cheat sheet. You've got a mountain of internal docs, PDFs, or knowledge that the model wasn't trained on? Don't shove it down the model's throat and hope it digests it. Just keep it in a database (a "vector store," if we're being fancy) and teach the AI to look things up before it answers. It's the difference between making an intern memorize the entire employee handbook versus just giving them a link to it and telling them to Ctrl+F. It's faster, cheaper, and the AI can't "forget" or misremember the source material.
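To make the cheat-sheet idea concrete, here's a rough sketch of the retrieval half (toy docs, sentence-transformers for the embeddings; swap in whatever vector store you actually use):

```python
# Minimal retrieval sketch: embed docs once, then look up the best matches per question.
# Assumes `pip install sentence-transformers numpy`.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "The /v2/orders endpoint requires an X-Api-Key header.",
    "Refunds are processed within 5 business days.",
    "Rate limit: 100 requests per minute per API key.",
]
doc_embs = model.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    q_emb = model.encode([question], normalize_embeddings=True)
    scores = doc_embs @ q_emb[0]          # cosine similarity (embeddings are normalized)
    top = np.argsort(-scores)[:k]
    return [docs[i] for i in top]

context = "\n".join(retrieve("How fast are refunds?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How fast are refunds?"
print(prompt)  # feed this to your LLM instead of fine-tuning the docs into it
```

The retrieved snippets get pasted into the prompt, and the model answers from them instead of from memorized weights.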
Fine-tuning is for changing the AI's personality or teaching it a new skill. This is when you need the model to fundamentally write or reason differently. You want it to sound like a snarky pirate in every response? Fine-tune. You need it to generate code in a very specific, obscure style that no public model uses? Fine-tune. You're teaching it a whole new task that isn't just "recall information," but "process information in this new way."
So, the dumb-simple rule I go by now:

* Problem: "The AI doesn't know about X." -> Use RAG.
* Problem: "The AI doesn't act or sound the way I want." -> Consider fine-tuning.
I learned this the hard way so you don't have to. Fight me in the comments if you disagree, but my wallet is still crying from that fine-tuning bill. | 2025-11-12T15:09:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ov7ogq/stop_finetuning_your_model_for_every_little_thing/ | RYTHEIX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov7ogq | false | null | t3_1ov7ogq | /r/LocalLLaMA/comments/1ov7ogq/stop_finetuning_your_model_for_every_little_thing/ | false | false | self | 9 | null |
AI LLM Workstation setup - Run up to 100B models | 7 | I'm planning to build a workstation for AI - LLM stuff.
^(Please leave the GPU part, I'm gonna grab 24-32GB GPU, obviously RTX one since I need CUDA support for decent Image/Video generations. In future I'm planning to grab 96GB GPU(after price down in 2027))
So for my requirements, I need more RAM since 24-32GB VRAM is not enough.
Planning to buy 320GB of DDR5 RAM (5 \* 64GB) first, with as high an MT/s as possible (6000-6800 minimum) to get better CPU-only performance. In future, I'll buy some more DDR5 RAM to take that 320GB to 512GB or 1TB.
Here my requirements:
1. Run up to **100B MOE models** (Up to GLM-4.5-Air, GPT-OSS-120B, Llama4-Scout)
2. Run up to **70B Dense models** (Up to Llama 70B)
3. My **daily driver models** gonna be Qwen3-30B models, Qwen3-32B, Gemma3-27B, Mistral series, Phi 4, Seed-OSS-36B, GPT-OSS-20B, GPT-OSS-120B, GLM-4.5-Air
4. I'll be running models with up to **32-128K(rarely 256K) Context**
5. **Agentic Coding**
6. **Writing**
7. **Image, Audio, Video generations** using Image, Audio, Video, Multimodal models (Flux, Wan, Qwen, etc.,) with ComfyUI & other tools
8. Better **CPU-only** performance (Planning to try small-medium models just with RAM for sometime before getting GPU. Would be interesting to see 50+t/s with 30-50B Dense models & 100-200 t/s with 30-50B MOE models while saving power)
9. **AVX-512 Support** (Only recently found that my current laptop don't have this so couldn't get better CPU-only performance using llama.cpp/ik\_llama.cpp)
10. **Optimized Power saving Setup**(For less power consumption, don't want big Electricity bills), that's why I don't want to buy any Used/Old components
So please recommend me below items for my setup.
1. **CPU Processor** : To support up to 1TB DDR5 RAM & 4 GPUs. Preferring Intel.
2. **Motherboard**: To support up to 1TB DDR5 RAM & 4 GPUs
3. **RAM**: DDR5 MT/s(6000-6800 minimum) for better memory bandwidth
4. **Storage**: 2 SSDs - One for 2 OS(Linux & Windows) - 2TB & another for Data - 10TB
5. **Power Supply**: To support all above Processor, Motherboard, RAM, GPUs, Storage, I have no idea what could be better for this.
6. **Cooling**: Best Cooling setup as it has more RAMs, GPU & later more GPUs & RAMs.
7. **Additional Accessories**: Did I miss anything else? Please let me know & recommend as well.
Please mention links if possible. I see some people do share pcpartpicker list in this sub.
Thanks.
^(And No, I don't want Laptop/Mac/MiniPC/UnifiedSetups. With my setup I can upgrade/expand with additional RAM/GPU later whenever needed. Already learned big lesson from our laptop about non-upgradable/expandable thing.) | 2025-11-12T15:03:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ov7idh/ai_llm_workstation_setup_run_up_to_100b_models/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov7idh | false | null | t3_1ov7idh | /r/LocalLLaMA/comments/1ov7idh/ai_llm_workstation_setup_run_up_to_100b_models/ | false | false | self | 7 | null |
Deep fake quiz test for users | 1 | I’m interested in a **quiz for employees** in our organization to **identify Deepfakes** using a mix of real videos and AI-generated ones, where participants will have to decide which is which.
They’ll connect through a **link or QR code**.
Is there an existing solution for this? | 2025-11-12T14:21:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ov6fmk/deep_fake_quiz_test_for_users/ | Boonyal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov6fmk | false | null | t3_1ov6fmk | /r/LocalLLaMA/comments/1ov6fmk/deep_fake_quiz_test_for_users/ | false | false | self | 1 | null |
Adding memory to GPU | 2 | The higher GB cards cost a ridiculous amount. I'm curious if anyone has tried adding memory to their GPU like Chinese modders do and what your results were. Not that I would ever do it, but I find it fascinating.
For context YT gave me this short:
https://youtube.com/shorts/a4ePX1TTd5I?si=xv6ek5rTDFB3NmPw | 2025-11-12T14:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ov68ox/adding_memory_to_gpu/ | wikbus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov68ox | false | null | t3_1ov68ox | /r/LocalLLaMA/comments/1ov68ox/adding_memory_to_gpu/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'EAASCsAyK0t9ZsnTiK-8D7Yu_kbPaKJFDLrh1K17noo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/EAASCsAyK0t9ZsnTiK-8D7Yu_kbPaKJFDLrh1K17noo.jpeg?width=108&crop=smart&auto=webp&s=181bf146757ed4918dd8d18ab5d3192cfbde7fcc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/EAASCsAyK0t9ZsnTiK-8D7Yu_kbPaKJFDLrh1K17noo.jpeg?width=216&crop=smart&auto=webp&s=d84ad6245ce253d1088e9e71f35b0a79de8edd30', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/EAASCsAyK0t9ZsnTiK-8D7Yu_kbPaKJFDLrh1K17noo.jpeg?width=320&crop=smart&auto=webp&s=aa63a784ba78ef9b6f8b9cd52021688c7ea38564', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/EAASCsAyK0t9ZsnTiK-8D7Yu_kbPaKJFDLrh1K17noo.jpeg?auto=webp&s=4a23e1fafcc00eb5715f37a5d80ea3fe3a5a7c72', 'width': 480}, 'variants': {}}]} |
[D] Linguistic RL: 3B Models Exceed 100B Performance Through Self-Reflection (86% vs 81%) | 0 | **TL;DR**: We taught tiny models (3B/1.5B) to beat Claude 3.5 Haiku (100B) by having Claude "journal" about its mistakes, then training small models on the learned strategy. Cost: <$10. Student exceeds teacher.
---
## Results
| Model | Size | Baseline | After LRL+LoRA | Improvement |
|-------|------|----------|----------------|-------------|
| **Qwen2.5-3B** | 3B | 12% | **86.0%** ✨ | **+74pp** |
| **Qwen2.5-1.5B** | 1.5B | ~8% | **82.7%** | **+75pp** |
| Claude 3.5 Haiku | ~100B | 81.3% → 84.0% | baseline | +2.7pp (via LRL) |

Both students **outperformed the 67× larger teacher** they learned from.
---
## How It Works
**Step 1: Teacher Self-Improvement ("Linguistic RL")**
Give Claude a problem → it solves → tell it if correct → ask it to reflect:
```
"What did I miss? How can I improve?"
```
Through pure self-reflection (no gradients!), Claude writes journal entries like:
```
"I was only checking adjacent meetings.
I need to check ALL overlaps to find
the maximum simultaneous conflicts."
```
Accuracy improves 81% → 84% just from thinking about mistakes.
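For illustration, the Step 1 loop could look roughly like this (a sketch using the Anthropic SDK, not code from the repo; `problems` and `check` are placeholder names):

```python
# Sketch of the "solve -> grade -> reflect" loop behind Linguistic RL.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-haiku-20241022"
journal = []  # accumulated self-reflections, fed back as "lessons learned"

def ask(prompt: str) -> str:
    resp = client.messages.create(
        model=MODEL,
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

for problem, expected in problems:        # placeholder list of (problem, answer) pairs
    lessons = "\n".join(journal[-5:])
    answer = ask(f"Lessons so far:\n{lessons}\n\nSolve:\n{problem}")
    verdict = "correct" if check(answer, expected) else "incorrect"  # placeholder checker
    reflection = ask(
        f"Your answer was {verdict}.\nProblem:\n{problem}\nYour answer:\n{answer}\n"
        "What did you miss? How can you improve? Reply with a short journal entry."
    )
    journal.append(reflection)
```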
**Step 2: Extract Strategy**
Pull out Claude's learned solving strategy as natural language curriculum.
**Step 3: Train Student with LoRA**
Fine-tune small model (3B/1.5B) on examples showing:
- Problem
- Claude's strategic thinking
- Answer
**Result**: 3B model learns O(n log n) sweep line algorithm, achieves 96% on easy problems.
---
## Why This Matters
**💰 Economics**
- Training: <$10 in API calls
- Inference: Free forever (runs locally)
- 100-1000× cheaper than API deployment
**🧠 Science**
- 67× compression (100B → 1.5B) *with performance gain*
- Learned algorithmic reasoning, not pattern matching
- Students exceed teacher = knowledge is compressible
**🔍 Safety**
- Human-readable learning process
- Can audit what was learned
- No black-box distillation
**🌍 Democratization**
- Frontier capabilities on consumer hardware
- One-time extraction, infinite reuse
- Fully open source
---
## Code & Reproducibility
✅ Published to Zenodo: [DOI 10.5281/zenodo.17585532](https://zenodo.org/records/17585532)
✅ GitHub: https://github.com/DRawson5570/linguistic-rl-scheduling-experiments
✅ Fixed seeds, full logs, complete configs
✅ Universal framework - adapt to any domain
**Quick start:**
```bash
git clone https://github.com/DRawson5570/linguistic-rl-scheduling-experiments
cd validated_results_qwen3b_claude35haiku
pip install transformers torch peft anthropic
python run_validation.py
```
Requirements: 12GB GPU, Anthropic API key (~$5)
---
## Framework
We built a universal pipeline - works for any domain:
```python
from framework import run_knowledge_transfer

results = run_knowledge_transfer(
    domain=YourCustomDomain(),
    teacher_model="claude-3-5-haiku-20241022",
    student_model="Qwen/Qwen2.5-3B-Instruct"
)
```
Currently testing: Sudoku (constraint satisfaction), 7B models, multi-domain transfer.
---
## Open Questions
1. **How small can we go?** Testing 1.5B → 0.5B compression
2. **What knowledge compresses well?** Algorithmic vs. factual vs. creative reasoning
3. **Recursive teaching?** Can students become teachers?
4. **Safety implications?** More auditable than weight distillation?
---
## Links
- 📄 Paper: https://zenodo.org/records/17585532
- 💻 Code: https://github.com/DRawson5570/linguistic-rl-scheduling-experiments
- 📊 3B Results: [validated_results_qwen3b_claude35haiku/](https://github.com/DRawson5570/linguistic-rl-scheduling-experiments/tree/main/validated_results_qwen3b_claude35haiku)
- 📊 1.5B Results: [validated_results_qwen1.5b_claude35haiku/](https://github.com/DRawson5570/linguistic-rl-scheduling-experiments/tree/main/validated_results_qwen1.5b_claude35haiku)
---
Happy to answer questions! This could be a new paradigm: extract specific capabilities from frontier models into tiny specialized models that run anywhere.
**Edit**: Currently running 7B experiments and Sudoku domain. Will update with results! | 2025-11-12T13:56:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ov5sxh/d_linguistic_rl_3b_models_exceed_100b_performance/ | Next_Bid_8339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov5sxh | false | null | t3_1ov5sxh | /r/LocalLLaMA/comments/1ov5sxh/d_linguistic_rl_3b_models_exceed_100b_performance/ | false | false | self | 0 | null |
Any experience serving LLMs locally on Apple M4 for multiple users? | 4 | Has anyone tried deploying an LLM *as a shared service* on an Apple M4 (Pro/Max) machine? Most benchmarks I’ve seen are single-user inference tests, but I’m wondering about *multi-user* or small-team usage.
Specifically:
* How well does the M4 handle concurrent inference requests?
* Does vLLM or other high-throughput serving frameworks run reliably on macOS?
* Any issues with batching, memory fragmentation, or long-running processes?
* Is quantization (Q4/Q8, GPTQ, AWQ) stable on Apple Silicon?
* Any problems with MPS vs CPU fallback?
I’m debating whether a maxed-out M4 machine is a reasonable alternative to a small NVIDIA server (e.g., a single A100, 5090, 4090, or a cloud instance) for local LLM serving. A GPU server obviously wins on throughput, but if the M4 can support 2–10 users with small/medium models at decent latency, it might be attractive (quiet, compact, low-power, macOS environment).
If anyone has practical experience (even anecdotal) about:
✅ Running vLLM / llama.cpp / mlx
✅ Using it as a local “LLM API” for multiple users
✅ Real performance numbers or gotchas
…I'd love to hear details. | 2025-11-12T13:36:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ov5bpe/any_experience_serving_llms_locally_on_apple_m4/ | ApprenticeLYD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov5bpe | false | null | t3_1ov5bpe | /r/LocalLLaMA/comments/1ov5bpe/any_experience_serving_llms_locally_on_apple_m4/ | false | false | self | 4 | null |
Can a local LLM beat ChatGPT for business analysis? | 1 | I work in an office environment and often use ChatGPT to help with business analysis — identifying trends, gaps, or insights that would otherwise take me hours to break down, then summarizing them clearly. Sometimes it nails it, but other times I end up spending hours fixing inaccuracies or rephrasing its output.
I’m curious whether a local LLM could do this better. My gut says no, I doubt I can run a model locally that matches ChatGPT’s depth or reasoning, but I’d love to hear from people who’ve tried.
Let’s assume I could use something like an RTX 6000 for local inference, and that privacy isn’t a concern in my case. And, also I will not be leveraging it for AI coding. Would a local setup beat ChatGPT’s performance for analytical and writing tasks like this? | 2025-11-12T13:34:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ov59n6/can_a_local_llm_beat_chatgpt_for_business_analysis/ | reallionkiller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov59n6 | false | null | t3_1ov59n6 | /r/LocalLLaMA/comments/1ov59n6/can_a_local_llm_beat_chatgpt_for_business_analysis/ | false | false | self | 1 | null |
Fast and Affordable LLMs serving on Intel Arc Pro B-Series GPUs with vLLM | 13 | 2025-11-12T13:21:38 | https://blog.vllm.ai/2025/11/11/intel-arc-pro-b.html | reps_up | blog.vllm.ai | 1970-01-01T00:00:00 | 0 | {} | 1ov4z8j | false | null | t3_1ov4z8j | /r/LocalLLaMA/comments/1ov4z8j/fast_and_affordable_llms_serving_on_intel_arc_pro/ | false | false | default | 13 | null | |
Tool-agent: minimal CLI agent | 2 | Hey folks. Later this week I’m running a tech talk in my local community on building AI agents. Thought I’d share the code I’m using for a demo as folks may find it a useful starting point for their own work.
For those in this sub who occasionally ask how to get better web search results than OpenWebUI: my quest to understand effective web search led me here. I find this approach delivers good quality results for my use case. | 2025-11-12T13:03:22 | https://github.com/dsartori/tool-agent | dsartori | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ov4kaz | false | null | t3_1ov4kaz | /r/LocalLLaMA/comments/1ov4kaz/toolagent_minimal_cli_agent/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'kk7j4fCF3dmpN0x2LklyXszpS5Vo5N2GCJlJTPNoYB8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kk7j4fCF3dmpN0x2LklyXszpS5Vo5N2GCJlJTPNoYB8.png?width=108&crop=smart&auto=webp&s=69fe5f9aa5bfaca851690b9d3b5e5c2997d90025', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kk7j4fCF3dmpN0x2LklyXszpS5Vo5N2GCJlJTPNoYB8.png?width=216&crop=smart&auto=webp&s=e018b820c2e2d79d0481dad9898a3df687ec47d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kk7j4fCF3dmpN0x2LklyXszpS5Vo5N2GCJlJTPNoYB8.png?width=320&crop=smart&auto=webp&s=1dd3ff8f5cb68ab8cccdbadfd66f5f78bc91dcb2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kk7j4fCF3dmpN0x2LklyXszpS5Vo5N2GCJlJTPNoYB8.png?width=640&crop=smart&auto=webp&s=4a3d68e61e2296247310c42fd4e0f822d6d4cf29', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kk7j4fCF3dmpN0x2LklyXszpS5Vo5N2GCJlJTPNoYB8.png?width=960&crop=smart&auto=webp&s=f15a5818a2e7b31a332f494aa5491af482a0f3a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kk7j4fCF3dmpN0x2LklyXszpS5Vo5N2GCJlJTPNoYB8.png?width=1080&crop=smart&auto=webp&s=558014d1511ead94794901bd9943434a7af91721', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kk7j4fCF3dmpN0x2LklyXszpS5Vo5N2GCJlJTPNoYB8.png?auto=webp&s=dc68a367b1f14a34995288c94b286d0c0fa71c19', 'width': 1200}, 'variants': {}}]} | |
I built my own self-hosted GPT with LM Studio, Caddy, and Cloudflare Tunnel | 1 | Inspired by another post here, I’ve just put together a little self-hosted AI chat setup that I can use on my LAN and remotely and a few friends asked how it works.
[](https://preview.redd.it/i-built-my-own-self-hosted-chatgpt-with-lm-studio-caddy-and-v0-uyvp7j948t0g1.png?width=1405&format=png&auto=webp&s=0580c987510364e37b65c1ad10029fdcb1324103)
# What I built
* A local AI chat app that looks and feels like ChatGPT/other generic chat, but everything runs on my own PC.
* LM Studio hosts the models and exposes an OpenAI-style API on `127.0.0.1:1234`.
* Caddy serves my `index.html` and proxies API calls on `:8080`.
* Cloudflare Tunnel gives me a protected public URL so I can use it from anywhere without opening ports (and share with friends).
* A custom front end lets me pick a model, set temperature, stream replies, and see token usage and tokens per second.
# The moving parts
1. **LM Studio**
* Runs the model server on `http://127.0.0.1:1234`.
* Endpoints like `/v1/models` and `/v1/chat/completions`.
* Streams tokens so the reply renders in real time.
2. **Caddy**
* Listens on `:8080`.
* Serves `C:\site\index.html`.
* Forwards `/v1/*` to [`127.0.0.1:1234`](http://127.0.0.1:1234/) so the browser sees a single origin.
* Fixes CORS cleanly.
3. **Cloudflare Tunnel**
* Docker container that maps my local Caddy to a public URL (a random subdomain I have setup).
* No router changes, no public port forwards.
4. **Front end (single HTML file which I then extended to abstract css and app.js)**
* Model dropdown populated from `/v1/models`.
* “Load” button does a tiny non-stream call to warm the model.
* Temperature input `0.0 to 1.0`.
* Streams with `Accept: text/event-stream`.
* Usage readout: prompt tokens, completion tokens, total, elapsed seconds, tokens per second.
* Dark UI with a subtle gradient and glassy panels.
# How traffic flows
Local:
Browser → http://127.0.0.1:8080 → Caddy
static files from C:\site
/v1/* → 127.0.0.1:1234 (LM Studio)
Remote:
Browser → Cloudflare URL → Tunnel → Caddy → LM Studio
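Since everything sits behind the same `/v1` base, any OpenAI-compatible client can use the stack as well. Here's a rough sketch (not part of my setup, just an illustration) with the `openai` Python package pointed at the Caddy port:

```python
# Sketch: stream a chat completion through Caddy -> LM Studio on the /v1 base.
# Assumes `pip install openai` and LM Studio serving a model behind :8080.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="not-needed")

stream = client.chat.completions.create(
    model="local-model",            # placeholder; use a name returned by /v1/models
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    temperature=0.7,
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

Remotely, you'd just swap the base URL for the Cloudflare Tunnel hostname.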
# Why it works nicely
* Same relative API base everywhere: `/v1`. No hard coded [`http://127.0.0.1:1234`](http://127.0.0.1:1234/) in the front end, so no mixed-content problems behind Cloudflare.
* Caddy is set to `:8080`, so it listens on all interfaces. I can open it from another PC on my LAN:http://<my-LAN-IP>:8080/
* Windows Firewall has an inbound rule for TCP 8080.
# Small UI polish I added
* Replaced over-eager `---` to `<hr>` with a stricter rule so pages are not full of lines.
* Simplified bold and italic regex so things like `**:**` render correctly.
* Gradient background, soft shadows, and focus rings to make it feel modern without heavy frameworks.
# What I can do now
* Load different models from LM Studio and switch them in the dropdown from anywhere.
* Adjust temperature per chat.
* See usage after each reply, for example:
* Prompt tokens: 412
* Completion tokens: 286
* Total: 698
* Time: 2.9 s
* Tokens per second: 98.6 tok/s
[](https://preview.redd.it/i-built-my-own-self-hosted-chatgpt-with-lm-studio-caddy-and-v0-9xaxj6jtpt0g1.png?width=1394&format=png&auto=webp&s=addec4688c8aa0dbd460b68d7ea7a410099d320d) | 2025-11-12T13:01:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ov4isj/i_built_my_own_selfhosted_gpt_with_lm_studio/ | shaundiamonds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov4isj | false | null | t3_1ov4isj | /r/LocalLLaMA/comments/1ov4isj/i_built_my_own_selfhosted_gpt_with_lm_studio/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'fWrXQWzqcIXy63K93ixxm2kpdW4Oi2HihQyfaV0EM1c', 'resolutions': [{'height': 93, 'url': 'https://external-preview.redd.it/fWrXQWzqcIXy63K93ixxm2kpdW4Oi2HihQyfaV0EM1c.png?width=108&crop=smart&auto=webp&s=154fde7dec2e775b21241a19a15f67cb842da980', 'width': 108}, {'height': 187, 'url': 'https://external-preview.redd.it/fWrXQWzqcIXy63K93ixxm2kpdW4Oi2HihQyfaV0EM1c.png?width=216&crop=smart&auto=webp&s=89ff41bd0d697bf409b0ee43103bb62b186bcaa9', 'width': 216}, {'height': 277, 'url': 'https://external-preview.redd.it/fWrXQWzqcIXy63K93ixxm2kpdW4Oi2HihQyfaV0EM1c.png?width=320&crop=smart&auto=webp&s=1e438a2b3ff1715a521b84beaacc280c5f5b3299', 'width': 320}, {'height': 554, 'url': 'https://external-preview.redd.it/fWrXQWzqcIXy63K93ixxm2kpdW4Oi2HihQyfaV0EM1c.png?width=640&crop=smart&auto=webp&s=483a5308e62250cdbcbc6c2e91aa1888a48a1fd6', 'width': 640}, {'height': 831, 'url': 'https://external-preview.redd.it/fWrXQWzqcIXy63K93ixxm2kpdW4Oi2HihQyfaV0EM1c.png?width=960&crop=smart&auto=webp&s=00f5e31b1ab58c9638063019bb98452f28e27c01', 'width': 960}, {'height': 935, 'url': 'https://external-preview.redd.it/fWrXQWzqcIXy63K93ixxm2kpdW4Oi2HihQyfaV0EM1c.png?width=1080&crop=smart&auto=webp&s=1f18e981793bec3966b9648b85869ecbae113b8c', 'width': 1080}], 'source': {'height': 1217, 'url': 'https://external-preview.redd.it/fWrXQWzqcIXy63K93ixxm2kpdW4Oi2HihQyfaV0EM1c.png?auto=webp&s=7e40033fb9798419a3ddc11270e85db7f781fb7b', 'width': 1405}, 'variants': {}}]} | |
First Attempt at creating local models + SWE Benching | 0 | 2025-11-12T13:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ov4hp8/first_attempt_at_creating_local_models_swe/ | _blkout | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov4hp8 | false | null | t3_1ov4hp8 | /r/LocalLLaMA/comments/1ov4hp8/first_attempt_at_creating_local_models_swe/ | false | false | 0 | null | ||
Cannot get qwen3 vl instruct versions working | 1 | Hi everyone, I am new to this so forgive me if I am missing something simple.
I am trying to use qwen3 vl in my thesis project and i was exploring the option of using GGUF weights to process my data locally.
The main issue is that get the instruct variants of the model running.
I have tried Ollama + following instructions on huggingface (e.g. ollama run hf-model ....) which leads to an error 500 : unable to load model.
I have also tried llama cpp python (version 0.3.16 ) + manually downloading model and mmproj weights from github and putting them in a model folder, however i get the same error (which makes sense to me since ollama is using llama cpp).
I was able to use the thinking variants by loading the models found at [https://ollama.com/library/qwen3-vl](https://ollama.com/library/qwen3-vl) , however this does not really suit my usecase and i would like the instruct versions. I am on linux (wsl)
Any help is appreciated | 2025-11-12T12:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ov48i4/cannot_get_qwen3_vl_instruct_versions_working/ | matthisonfire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov48i4 | false | null | t3_1ov48i4 | /r/LocalLLaMA/comments/1ov48i4/cannot_get_qwen3_vl_instruct_versions_working/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]} |
Ollares one: miniPC with RTX 5090 mobile (24GB VRAM) + Intel 275HX (96GB RAM) | 6 | It came to my attention this new product: [https://one.olares.com](https://one.olares.com) that is still not available for sale (kickstarter campaign to start soon).
The specs:
* Processor: Intel® Ultra 9 275HX 24 Cores, 5.4GHz
* GPU: NVIDIA GeForce RTX 5090 Mobile 24GB GDDR7
* Memory: 96GB RAM (2×48GB) DDR5 5600MHz
* Storage: 2TB NVMe SSD PCIe 4.0
* Ports: 1 × Thunderbolt™ 5 1 × RJ45 Ethernet (2.5Gbps) 1 × USB-A 1 × HDMI 2.1
* Wireless Connectivity: Wi-Fi 7 Bluetooth 5.4
* Power: 330W
* Dimensions (L × W × H): 320 × 197 × 55mm
* Weight: 2.15kg (3.1kg with PSU)
The initial price seems like it would be around $4000, based on the monthly cost comparison with rented services under the "Stop Renting" section.
It would come with a special distribution of Linux ([Olares](https://github.com/beclab/Olares)) that makes it easier to install containerized apps via an app store and runs Kubernetes under the hood, but since this is a standard Intel chip it should not be difficult to wipe that and install whatever you want instead.
Would this be able to compete with other mini-PCs based on the Ryzen AI Max+ 395 (Strix Halo) or with the NVIDIA DGX Spark ? | 2025-11-12T12:33:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ov3x0m/ollares_one_minipc_with_rtx_5090_mobile_24gb_vram/ | CapoDoFrango | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov3x0m | false | null | t3_1ov3x0m | /r/LocalLLaMA/comments/1ov3x0m/ollares_one_minipc_with_rtx_5090_mobile_24gb_vram/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'qZ5R5FoQkUW94iO0uhpSMNyC5hsB9VdazBY670QUPLE', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/qZ5R5FoQkUW94iO0uhpSMNyC5hsB9VdazBY670QUPLE.png?width=108&crop=smart&auto=webp&s=45f477d9979ee4defe84dc15f64872020957882f', 'width': 108}, {'height': 72, 'url': 'https://external-preview.redd.it/qZ5R5FoQkUW94iO0uhpSMNyC5hsB9VdazBY670QUPLE.png?width=216&crop=smart&auto=webp&s=af65eb38b3e62cfee8d59687fba867068840eea9', 'width': 216}], 'source': {'height': 91, 'url': 'https://external-preview.redd.it/qZ5R5FoQkUW94iO0uhpSMNyC5hsB9VdazBY670QUPLE.png?auto=webp&s=9b37ece74e7d7b245c1e6907abe5ca6b8ada78e5', 'width': 270}, 'variants': {}}]} |
AELLA: 100M+ research papers: an open-science initiative to make scientific research accessible via structured summaries created by LLMs | 448 | Blog: [https://inference.net/blog/project-aella](https://inference.net/blog/project-aella)
Models: [https://huggingface.co/inference-net](https://huggingface.co/inference-net)
Visualizer: [https://aella.inference.net](https://aella.inference.net/)
| 2025-11-12T12:06:13 | https://v.redd.it/du59aiutet0g1 | Nunki08 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ov3dkb | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/du59aiutet0g1/DASHPlaylist.mpd?a=1765541188%2CMDViMjdjMDYwODRjODdkYzIxMzYxMTg5MzhlYWQ3YjIxMGQ5YTA2MDgwY2UwZWRlMTkxMDRlYmE2ZjgzM2EwYg%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/du59aiutet0g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/du59aiutet0g1/HLSPlaylist.m3u8?a=1765541188%2COTJkMTAyYzk2MDQwZWZkNzNhZjUzZDIzOTZmNmE0ODM0ZTBjYWViNzhjNzZlMzU0OWM2NjAwNTA1ZTcxNmE4NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/du59aiutet0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1674}} | t3_1ov3dkb | /r/LocalLLaMA/comments/1ov3dkb/aella_100m_research_papers_an_openscience/ | false | false | 448 | {'enabled': False, 'images': [{'id': 'Ym1xdmdzdXRldDBnMR0L-Ennn3ovi4auFkXdc601F67-ibAb8bxVVAjHQXSP', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/Ym1xdmdzdXRldDBnMR0L-Ennn3ovi4auFkXdc601F67-ibAb8bxVVAjHQXSP.png?width=108&crop=smart&format=pjpg&auto=webp&s=91ec8a843f97bd6906bb32c93c57a2c700e4d107', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/Ym1xdmdzdXRldDBnMR0L-Ennn3ovi4auFkXdc601F67-ibAb8bxVVAjHQXSP.png?width=216&crop=smart&format=pjpg&auto=webp&s=65406b57558c3c72086441361b7c8ca61cf23d21', 'width': 216}, {'height': 206, 'url': 'https://external-preview.redd.it/Ym1xdmdzdXRldDBnMR0L-Ennn3ovi4auFkXdc601F67-ibAb8bxVVAjHQXSP.png?width=320&crop=smart&format=pjpg&auto=webp&s=e2d747437353ef9e7df747a4ed9c5cecde4731bd', 'width': 320}, {'height': 412, 'url': 'https://external-preview.redd.it/Ym1xdmdzdXRldDBnMR0L-Ennn3ovi4auFkXdc601F67-ibAb8bxVVAjHQXSP.png?width=640&crop=smart&format=pjpg&auto=webp&s=98521818a9bfba5097d7a0c87a8aed8ad7e1937b', 'width': 640}, {'height': 619, 'url': 'https://external-preview.redd.it/Ym1xdmdzdXRldDBnMR0L-Ennn3ovi4auFkXdc601F67-ibAb8bxVVAjHQXSP.png?width=960&crop=smart&format=pjpg&auto=webp&s=388306ab0c0910b13ad8faa6af114ebf49342b17', 'width': 960}, {'height': 696, 'url': 'https://external-preview.redd.it/Ym1xdmdzdXRldDBnMR0L-Ennn3ovi4auFkXdc601F67-ibAb8bxVVAjHQXSP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=571aef70772431a13dbac4cbeadd891210d1fc03', 'width': 1080}], 'source': {'height': 2036, 'url': 'https://external-preview.redd.it/Ym1xdmdzdXRldDBnMR0L-Ennn3ovi4auFkXdc601F67-ibAb8bxVVAjHQXSP.png?format=pjpg&auto=webp&s=489d1ebd66efd40d0ea2c43afb1c6e62de629297', 'width': 3156}, 'variants': {}}]} | |
Is Deepseek-OCR SOTA for OCR-related tasks? | 33 | For those running local setups (e.g 16 GB VRAM), how does DeepSeek-OCR stack up against recent VLMs — is it considered SOTA for document parsing?
I’m experimenting with adding an LLM layer on top to extract structured fields, but I’m wondering if models like Qwen3-VL-8B might still outperform it overall.
Anyone here been playing with the latest VLMs and have thoughts or benchmarks to share? | 2025-11-12T11:41:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ov2wmu/is_deepseekocr_sota_for_ocrrelated_tasks/ | Ok_Television_9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov2wmu | false | null | t3_1ov2wmu | /r/LocalLLaMA/comments/1ov2wmu/is_deepseekocr_sota_for_ocrrelated_tasks/ | false | false | self | 33 | null |
My (open-source) continuation (FlexAttention, RoPE, BlockMasks, Muon, etc.) to Karpathy's NanoGPT | 43 | Hey everyone,
First of all, I am not fully sure if this is useful to r/LocalLLaMA, because I would assume this sub is more about running existing models than training from scratch? Or maybe you expect higher-quality models.
In any case, I have been following and coding along Andrej Karpathy's 'Let's reproduce GPT-2 (124M)', and after finishing the four hours, I decided to continue adding some modern changes. At iteration 31, the repo contains:
* FlashAttention (sdpa) / FlexAttention
* Sliding Window Attention (attend to a subset of tokens), Doc Masking (attend to same-doc tokens only), and Attention Logit Soft-capping (if FlexAttention, for performance)
* Sliding Window Attention ramp (increase window size over training)
* Attention logit soft-capping ("clamp", "ptx" -faster-, "rational" or "exact")
* Custom masking (e.g., padding mask if non-causal)
* AdamW or AdamW and Muon
* Muon steps, momentum, use Nesterov
* MHA/MQA/GQA (n\_heads vs n\_kv\_heads)
* QK norm (RMS/L2)
* RMSNorm or LayerNorm
* GELU, ReLU, ReLU\*\*2, SiLU or SwiGLU (fair or unfair) activations
* Bias or no bias
* Tied or untied embeddings
* Learning rate warmup and decay
* RoPE/NoPE/absolute positional encodings
* LM head logit soft-capping
* Gradient norm clipping
* Kernel warmup steps
I share the repo in case it is helpful to someone starting out. I've tried to comment the code, because I was learning these concepts as I went along. Also, I have tried to make it configurable at the start, with GPTConfig and TrainingConfig (meaning you should be able to mix the above as you want, e.g., GELU + AdamW + gradient norm clipping, or SiLU + Muon + FlexAttention + RoPE, etc.).
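As a generic illustration of one of the options listed above (not code from my repo), tanh-style attention logit soft-capping is just this:

```python
# Generic illustration of attention logit soft-capping (tanh style, as used in Gemma 2).
# Shapes and the cap value are arbitrary examples.
import torch

def softcap(logits: torch.Tensor, cap: float = 50.0) -> torch.Tensor:
    # Squash logits smoothly into (-cap, cap) instead of hard-clamping them.
    return cap * torch.tanh(logits / cap)

q = torch.randn(1, 8, 16, 64)                  # (batch, heads, seq, head_dim)
k = torch.randn(1, 8, 16, 64)
scores = (q @ k.transpose(-2, -1)) / 64**0.5   # raw attention logits
attn = torch.softmax(softcap(scores), dim=-1)
print(attn.shape)                              # torch.Size([1, 8, 16, 16])
```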
I am not sure if the code is useful to anyone else, or maybe my comments only make sense to me.
In any case, here is the GitHub. Version 1 (\`00-gpt-3-small-overfit-batch.py\`) is the batch overfitting from the tutorial, while version 31 (\`30-gpt-3-small-with-training-config-and-with-or-without-swa-window-size-ramp.py\`) for instance adds a SWA ramp to version 30. And in between, intermediate versions progressively adding the above.
[https://github.com/Any-Winter-4079/GPT-3-Small-Pretraining-Experiments](https://github.com/Any-Winter-4079/GPT-3-Small-Pretraining-Experiments)
Finally, while it is in the README as well, let me say this is the good, most efficient version of the speedrun: [https://github.com/KellerJordan/modded-nanogpt](https://github.com/KellerJordan/modded-nanogpt)
With this I mean, if you want super fast code, go there. This repo tries to be more configurable and more explained, but it doesn't match yet the speedrun's performance. So take my version as that of someone that is learning along, more than a perfect repo.
Still, I would hope it is useful to someone.
Cheers! | 2025-11-12T11:31:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ov2qee/my_opensource_continuation_flexattention_rope/ | Any-Winter-4079 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov2qee | false | null | t3_1ov2qee | /r/LocalLLaMA/comments/1ov2qee/my_opensource_continuation_flexattention_rope/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'kJ3EC-R02BkQqzMWZNyrVTbRZZsc5VBvnDOrTJ7NaXk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kJ3EC-R02BkQqzMWZNyrVTbRZZsc5VBvnDOrTJ7NaXk.png?width=108&crop=smart&auto=webp&s=bdfe631cda468c7da6f9392b19a8fa407e64b6d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kJ3EC-R02BkQqzMWZNyrVTbRZZsc5VBvnDOrTJ7NaXk.png?width=216&crop=smart&auto=webp&s=ff57645ad34b201d34c1c23324fad0690dca101b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kJ3EC-R02BkQqzMWZNyrVTbRZZsc5VBvnDOrTJ7NaXk.png?width=320&crop=smart&auto=webp&s=6d78f645abc7efa17ef408a280d4cc30d5274d34', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kJ3EC-R02BkQqzMWZNyrVTbRZZsc5VBvnDOrTJ7NaXk.png?width=640&crop=smart&auto=webp&s=54cebf09f4a8f033acf967d36f269bcb5c492d15', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kJ3EC-R02BkQqzMWZNyrVTbRZZsc5VBvnDOrTJ7NaXk.png?width=960&crop=smart&auto=webp&s=4b13c8fdfece05bb8a7c310cdffd905490e22c52', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kJ3EC-R02BkQqzMWZNyrVTbRZZsc5VBvnDOrTJ7NaXk.png?width=1080&crop=smart&auto=webp&s=c41f35581c55df6c476e518e7c6a4151c9c5c9d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kJ3EC-R02BkQqzMWZNyrVTbRZZsc5VBvnDOrTJ7NaXk.png?auto=webp&s=8c62f2ce4228b4824714dd33fb16f19a75278920', 'width': 1200}, 'variants': {}}]} |