Dataset columns (from the viewer header):

- title: string (1-300 chars)
- score: int64 (0-8.54k)
- selftext: string (0-41.5k chars)
- created: timestamp[ns] (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
- url: string (0-878 chars)
- author: string (3-20 chars)
- domain: string (0-82 chars)
- edited: timestamp[ns] (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
- gilded: int64 (0-2)
- gildings: string (7 classes)
- id: string (7 chars)
- locked: bool (2 classes)
- media: string (646-1.8k chars)
- name: string (10 chars)
- permalink: string (33-82 chars)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (4-213 chars)
- ups: int64 (0-8.54k)
- preview: string (301-5.01k chars)
Image generation on Apple M series chips (e.g. M3 Ultra)
5
I'm having a lot of fun with my M3 Ultra 256 GB using Qwen-Image. Several of the other templates for different models I've tried in ComfyUI seemed to have blocking issues (floating-point types); in one case there was an easy workaround. I'm still experimenting a lot. Any recommendations for other AI models or ComfyUI workflows to try out? Also, I can try to answer some questions, but I'm a beginner at this.
2025-10-17T00:17:19
https://www.reddit.com/r/LocalLLaMA/comments/1o8mrtz/image_generation_on_apple_m_series_chips_eg_m3/
GoldenShackles
self.LocalLLaMA
Meta just dropped MobileLLM-Pro, a new 1B foundational language model on Huggingface
422
Meta just published MobileLLM-Pro, a new 1B-parameter foundational language model (pre-trained and instruction fine-tuned) on Huggingface: [https://huggingface.co/facebook/MobileLLM-Pro](https://huggingface.co/facebook/MobileLLM-Pro)

The model seems to outperform Gemma 3-1B and Llama 3-1B by quite a large margin in pre-training, and shows decent performance after instruction-tuning (looks like it works pretty well for API calling, rewriting, coding and summarization).

The model is already in a Gradio space and can be chatted with directly in the browser: [https://huggingface.co/spaces/akhaliq/MobileLLM-Pro](https://huggingface.co/spaces/akhaliq/MobileLLM-Pro)
2025-10-16T23:49:16
https://www.reddit.com/r/LocalLLaMA/comments/1o8m5ua/meta_just_dropped_mobilellmpro_a_new_1b/
Sad_Consequence5629
self.LocalLLaMA
I got Kokoro TTS running natively on iOS! 🎉 Natural-sounding speech synthesis entirely on-device
34
Hey everyone! Just wanted to share something cool I built this weekend. I managed to get Kokoro TTS (the high-quality open-source text-to-speech model) running completely natively on iOS - no server, no API calls, 100% on-device inference!

What it does:

* Converts text to natural-sounding speech directly on your iPhone/iPad
* Uses the full ONNX model (325MB) with real voice embeddings
* 50+ voices in multiple languages (English, Spanish, French, Japanese, Chinese, etc.)
* 24kHz audio output at ~4 seconds generation time for a sentence

The audio quality is surprisingly good! It's not real-time yet (takes a few seconds per sentence), but for a 325MB model running entirely on a phone with no quantization, I'm pretty happy with it. Planning on integrating it in my iOS apps.

Has anyone else tried running TTS models locally on mobile? Would love to hear about your experiences!
2025-10-16T23:44:22
https://www.reddit.com/r/LocalLLaMA/comments/1o8m1v0/i_got_kokoro_tts_running_natively_on_ios/
Living_Commercial_10
self.LocalLLaMA
We built 3B and 8B models that rival GPT-5 at HTML extraction while costing 40-80x less - fully open source
394
*Disclaimer: I work for* [*Inference.net*](http://Inference.net)*, creator of the Schematron model family*

Hey everyone, wanted to share something we've been working on at Inference.net: Schematron, a family of small models for web extraction. Our goal was to make a small, fast model for taking HTML from a website and extracting JSON that perfectly adheres to a schema. We distilled a frontier model down to 8B params and managed to keep basically all the output quality for this task. Schematron-8B scores 4.64 on LLM-as-a-judge evals vs GPT-4.1's 4.74 and Gemma 3B's 2.24. Schematron-3B scores 4.41 while being even faster. The main benefit of this model is that it costs 40-80x less than GPT-5 at comparable quality (slightly worse than GPT-5, better than Gemini 2.5 Flash).

**Technical details:** We fine-tuned [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B), expanded it to a 128K context window, quantized it to FP8 without quality loss, and trained until it output strict JSON with 100% schema compliance. We also built a smaller 3B variant that's even cheaper and faster but still maintains most of the accuracy of the 8B variant. We recommend using the 3B for most tasks, and trying the 8B if the 3B fails or most of your documents are pushing the context limit.

**How we trained it:** We started with 1M real web pages from Common Crawl and built a synthetic dataset by clustering websites and generating schemas that mirror real-world usage patterns. We used a frontier model as a teacher and applied curriculum learning to progressively train on longer context lengths, training with context parallelism and FSDP to scale efficiently, which is why the models stay accurate even at the 128K token limit.

**Why this matters:** Processing 1 million pages daily with GPT-5 would cost you around $20,000. With Schematron-8B, that same workload runs about $480. With Schematron-3B, it's $240. The speed matters too: Schematron processes pages 10x faster than frontier models. On average, Schematron can scrape a page in 0.54 seconds, compared to 6 seconds for GPT-5. These latency gains compound very quickly for something like a browser-use agent.

**Real-world impact on LLM factuality:** We tested this on SimpleQA to see how much it improves accuracy when paired with web search. When GPT-5 Nano was paired with Schematron-8B to extract structured data from search results provided by Exa, it went from answering barely any questions correctly (8.54% on SimpleQA) to getting over 85% right. The structured extraction approach means this was done processing lean, clean JSON (very little additional cost) instead of dumping ~8k tokens of raw HTML into your context window per page retrieved (typically LLMs are grounded with 5-10 pages/search).

**Getting started:** If you're using our [serverless API](https://inference.net/models/schematron-3b), you only need to pass your Pydantic, Zod, or JSON Schema and the HTML; we handle all the prompting in the backend for you. You get $10 in free credits to start. If you're running locally, there are a few things to watch out for: you need to follow the prompting guidelines carefully and make sure you're using structured extraction properly, otherwise the model won't perform as well. The models are on [HuggingFace](https://huggingface.co/inference-net/Schematron-3B) and [Ollama](https://ollama.com/Inference/Schematron). Full benchmarks and code examples are in our [blog post](https://inference.net/blog/schematron), [docs](https://docs.inference.net/workhorse-models/schematron), and [samples repo](https://github.com/context-labs/inference-samples/blob/main/examples/schematron-scrape-companies/schematron-scrape-companies.ipynb). Happy to answer any technical questions about the training process or architecture. Also interested in how this would be helpful in your current scraping workflows!
2025-10-16T23:43:03
https://www.reddit.com/gallery/1o8m0ti
TerrificMist
reddit.com
Question about multiple llms at once and hardware
4
I was going to get two DGX units for a local service I'm running where I host as many Qwen 7B or 32B instances as I can possibly run. Are the DGXs still a bad choice for hosting multiple concurrently running LLMs? I just need VRAM, I think, and lots of throughput. Maybe there's a better option that won't cost me 8k?
2025-10-16T23:40:24
https://www.reddit.com/r/LocalLLaMA/comments/1o8lyoz/question_about_multiple_llms_at_once_and_hardware/
Nimrod5000
self.LocalLLaMA
New NVIDIA Project G-Assist Plug-in Hackathon - Win a GeForce RTX 5090
18
Hi everyone, hope you don't mind if I share a project we're working on at NVIDIA. We recently launched a new plug-in hackathon contest around Project G-Assist, with a "home control" theme. Think smart lights, adjusting thermostat temperature, managing devices & more.

Project G-Assist is an experimental AI assistant for GeForce RTX-powered PCs that lets you call a variety of NVIDIA and third-party PC APIs to execute actions. It uses a specially tuned Small Language Model (SLM) to efficiently interpret natural language instructions, and users can make plug-ins (in C++ or Python) to add new features.

The top 3 entries will win RTX 50 Series GPUs, including a GeForce RTX 5090. Full details are [here](https://developer.nvidia.com/g-assist-hackathon).

This is the second hackathon we've run for G-Assist, and the winners of the first event were pretty impressive. Our first-place [winner](https://github.com/briancaffey/flux-plugin-for-g-assist) last time enabled real-time image generation with voice commands through FLUX.1 running locally. I'd love to see what LocalLLaMA can do. Let us know what you think, and I'm happy to answer any questions. Thanks!
2025-10-16T23:13:34
https://www.reddit.com/r/LocalLLaMA/comments/1o8lcpc/new_nvidia_project_gassist_plugin_hackathon_win_a/
NV_Cory
self.LocalLLaMA
Best opensource coding model?
10
Deepseek-R1, GLM-4.6, Kimi-K2, Qwen3-Coder-480B, or gpt-oss-120b? Something else?
2025-10-16T22:46:46
https://www.reddit.com/r/LocalLLaMA/comments/1o8kqj2/best_opensource_coding_model/
night0x63
self.LocalLLaMA
Build/buy system for data teams to experiment on - suggestions?
1
I work in a cash-starved org in Australia. Within my teams, I look after data engineering and analysis, and they have built some reasonable lakehouse and data manipulation tools on a shoestring, due to current budget constraints and a need to get things done.

We are at a point where experimentation on our datasets using AI and machine learning models is likely to yield some great value, but learning environments, especially given data sovereignty challenges and the size of our datasets, present a challenge if we stay on cloud-based systems. To enable my teams to experiment, I am thinking about building something on-prem, which would give free rein to experiment and iterate without fear of failure or utility cost. It seemed fortuitous that GB10 systems dropped at the same time we started looking, and I considered purchasing a pair to run linked, but the more I read, the more my uncertainty grows. While I'm generally familiar with hardware and system specs, AI processing is something I have less competence in.

Our core usage will be across large datasets for:

- Cleaning and normalising (master data)
- Matching sets across different data streams
- Predictive analytics

I only have a limited budget. I can probably stretch it a bit, but it will be one shot until we can demonstrate significant business value.
2025-10-16T22:28:44
https://www.reddit.com/r/LocalLLaMA/comments/1o8kb99/buildbuy_system_for_data_teams_to_experiment_on/
notwhelmed
self.LocalLLaMA
DGX Spark is here, give me your non-inference workloads
112
Just received my DGX Spark. We all know it's trash for inference, so give me your non-inference test ideas (e.g., RL) to see what else it's trash at. I can also compare the numbers with my 4090 and H100.
2025-10-16T22:20:50
https://i.redd.it/17hmoj42ujvf1.jpeg
entsnack
i.redd.it
What am I doing wrong?
1
I am using the newest version (0.11.0) of vLLM on my server and I want to use Qwen3-Reranker-4B. I just don't understand the outcome. Or am I using it the wrong way?

Request to /rerank:

    {
      "model": "qwen3-reranker-4b",
      "query": "Hans Peter Ulrich",
      "documents": [
        "Name: Heinz Uhlmann",
        "Name: Hans Peter Ulrich",
        "Name: Paul Friedrich"
      ],
      "top_n": 3
    }

The output is the following:

    {
      "id": "rerank-06cfab845b9d43b6ac69e90da427e651",
      "model": "qwen3-reranker-4b",
      "usage": {"total_tokens": 32},
      "results": [
        {"index": 1, "document": {"text": "Name: Hans Peter Ulrich", "multi_modal": null}, "relevance_score": 0.5270825028419495},
        {"index": 2, "document": {"text": "Name: Paul Friedrich", "multi_modal": null}, "relevance_score": 0.49131113290786743},
        {"index": 0, "document": {"text": "Name: Heinz Uhlmann", "multi_modal": null}, "relevance_score": 0.28802406787872314}
      ]
    }

Why is the exact match only 0.52, and why is the runner-up so much closer than it should be? I also tried different queries like "Find the person Hans Peter Ulrich", and that even gave me Paul Friedrich as the top result with 0.684091150760. I am a little bit confused.

I served the reranker with this command:

    python -m vllm.entrypoints.openai.api_server --model ./models/qwen3-reranker-4b/ --api-key dev --served-model-name qwen3-reranker-4b --tensor-parallel-size 2 --max-model-len 1000 --gpu-memory-utilization 0.3 --max-num-seqs 1 --port 8090 --hf_overrides '{"architectures": ["Qwen3ForSequenceClassification"], "classifier_from_token": ["no", "yes"], "is_original_qwen3_reranker": true}'
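For what it's worth, with `classifier_from_token: ["no", "yes"]` the relevance score is essentially the model's probability of answering "yes" to "is this document relevant?", i.e. a two-way softmax over the no/yes logits (equivalently, a sigmoid of their difference). A minimal sketch of that arithmetic, assuming this is how your vLLM build computes the score:

```python
import math

def rerank_score(logit_no: float, logit_yes: float) -> float:
    """Two-way softmax over the 'no'/'yes' token logits: returns P(yes)."""
    m = max(logit_no, logit_yes)  # subtract the max for numerical stability
    e_no = math.exp(logit_no - m)
    e_yes = math.exp(logit_yes - m)
    return e_yes / (e_no + e_yes)

# Near-equal logits land near 0.5, so an exact-match document scoring
# ~0.52 just means the model was barely more confident in "yes" than "no"
# for that query/document pair.
print(rerank_score(0.0, 0.1))   # slightly above 0.5
print(rerank_score(-4.0, 4.0))  # a confident "yes", close to 1.0
```

So a 0.52 isn't necessarily broken output; it may reflect that short name-only documents give the model little signal. Scores are only meaningful relative to each other within one request.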
2025-10-16T22:18:45
https://www.reddit.com/r/LocalLLaMA/comments/1o8k2mm/what_am_i_doing_wrong/
Bowdenzug
self.LocalLLaMA
BEST HARDWARE SET UP FOR AI COMPUTER IN RESEARCH LAB.
0
Hey everyone,

At my research lab, we are trying to get a computer that can run LLMs locally and deploy them to our robots, as well as train time-series foundational models, run our own transformers, and run Isaac Sim. I am looking for advice on the best hardware to perform these operations easily and fast. It seems the big game-changer in price is the GPU, since the difference between an Ada version and a regular RTX GPU is significant, but to run big LLMs with 70B or more parameters we need at least 48GB of VRAM, and ideally more. The other hardware components seem very standardized in terms of price: not a big difference across CPU, RAM, or SSD options. Maybe using multiple RTX cards is also an option. It would be great to hear recommendations from anyone with expertise in this area, or from students in an AI/robotics lab, about what computer setup they are using.
2025-10-16T22:11:23
https://www.reddit.com/r/LocalLLaMA/comments/1o8jwc7/best_hardware_set_up_for_ai_computer_in_research/
Most_Caterpillar269
self.LocalLLaMA
Why is Qwen VL not working?
2
https://preview.redd.it/…ve to hear them.
2025-10-16T22:04:25
https://www.reddit.com/r/LocalLLaMA/comments/1o8jqb1/why_is_qwen_vl_not_working/
Badger-Purple
self.LocalLLaMA
Improving low VRAM performance for dense models using MoE offload technique
43
MoE partial offload, i.e. keeping experts on CPU and the context, attention, etc. on GPU, has two benefits:

- The non-sparse data is kept on fast VRAM
- Everything needed to handle context computations is on GPU

For dense models the first point is fairly irrelevant since, well, it's all dense, so how you offload isn't really going to change bandwidth needs. However, the second still applies and, MoE or not, compute for attention scales with context size but doesn't for the feed-forward network (FFN). Thus, in theory, given the same VRAM we should be able to get much better scaling by offloading non-FFN tensors first to the GPU, rather than just whole layers.

There is no handy `--n-cpu-moe` for this, but we can use the old `-ot exps=CPU` tool to make it work. For MoE models the tensors look like `blk.2.ffn_down_exps.weight` (note the "exps") whereas a dense model has names like `blk.2.ffn_down.weight`, so here we just match all the FFN tensors and put them on CPU with `-ot ffn=CPU`. `-ngl 99` then offloads everything else:

| model | size | params | backend | ngl | fa | ot | context | test | t/s |
| ----------------- | --------: | ------: | ------- | --: | -: | ------- | ------: | ----: | -----: |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 99 | 1 | ffn=CPU | 0 | pp512 | 273.22 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 99 | 1 | ffn=CPU | 4096 | pp512 | 272.13 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 99 | 1 | ffn=CPU | 16384 | pp512 | 253.86 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 99 | 1 | ffn=CPU | 65536 | pp512 | 188.39 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 99 | 1 | ffn=CPU | 0 | tg128 | 8.40 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 99 | 1 | ffn=CPU | 4096 | tg128 | 7.99 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 99 | 1 | ffn=CPU | 16384 | tg128 | 7.87 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 99 | 1 | ffn=CPU | 65536 | tg128 | 7.17 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 21 | 1 | N/A | 0 | pp512 | 291.84 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 21 | 1 | N/A | 4096 | pp512 | 280.37 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 21 | 1 | N/A | 16384 | pp512 | 246.97 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 21 | 1 | N/A | 65536 | pp512 | 155.81 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 21 | 1 | N/A | 0 | tg128 | 8.84 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 21 | 1 | N/A | 4096 | tg128 | 5.22 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 21 | 1 | N/A | 16384 | tg128 | 2.42 |
| llama 70B Q4_K_M | 39.59 GiB | 70.55 B | CUDA | 21 | 1 | N/A | 65536 | tg128 | 0.76 |

We can see that using `-ot ffn=CPU` scales dramatically better with context than `-ngl ??`. The value of `-ngl 21` here was chosen to match the VRAM utilization of `-ot ffn=CPU -c 16384`, which is about 13.7GB (note that I didn't quantize context!).

The one tradeoff in terms of VRAM utilization is that this puts all the context on the GPU rather than splitting it based on `-ngl`. As a result the fraction of the model you can fit into VRAM is reduced, and thus you'd expect worse performance at short context lengths. This is generally quite minor, but as always, test on your hardware. (Note that the test system is an Epyc + 6000 Blackwell, so quite chonky with a lot of compute, but see my laptop test below for the opposite.)

Tuning for your system:

- Quantize your context (e.g. `-ctk q8_0 -ctv q8_0`) if you want/can: As mentioned, pretty much the point of this is to put the context on GPU, so it'll use more VRAM than it would with `-ngl`, where some fraction of the context would be on CPU with the CPU layers.
- Offloading less: If you don't have enough VRAM to handle `-ngl 99 -ot ffn=CPU` then just use `-ngl 50` or whatever. You'll still get better context length scaling, but obviously it won't be perfect.
- Offloading more: If you have leftover VRAM after your `-ngl 99 -ot ffn=CPU -c ????` then you can offload some of the FFN layers by doing `blk.(0|1|2|3|4).ffn=CPU` or `blk.[2-9][0-9].ffn=CPU`

Here's a test on my laptop with a "can't believe it's not a 4070" GPU (8GB w/ ~6GB free) and 2ch 6400MHz DDR5. I only go to 10k context (quantized q8_0) and the difference isn't quite as dramatic, but it's still a ~80% improvement at full context length, which is nothing to scoff at:

| size | params | backend | ngl | ot | context | test | t/s |
| --------: | ------: | ------- | --: | ------------------------------- | ------: | ----: | -----: |
| 13.34 GiB | 23.57 B | CUDA | 99 | blk.([8-9]\|[1-9][0-9]).ffn=CPU | 0 | pp512 | 428.51 |
| 13.34 GiB | 23.57 B | CUDA | 99 | blk.([8-9]\|[1-9][0-9]).ffn=CPU | 10000 | pp512 | 375.32 |
| 13.34 GiB | 23.57 B | CUDA | 99 | blk.([8-9]\|[1-9][0-9]).ffn=CPU | 0 | tg128 | 4.31 |
| 13.34 GiB | 23.57 B | CUDA | 99 | blk.([8-9]\|[1-9][0-9]).ffn=CPU | 10000 | tg128 | 4.16 |
| 13.34 GiB | 23.57 B | CUDA | 13 | | 0 | pp512 | 429.88 |
| 13.34 GiB | 23.57 B | CUDA | 13 | | 10000 | pp512 | 367.12 |
| 13.34 GiB | 23.57 B | CUDA | 13 | | 0 | tg128 | 4.46 |
| 13.34 GiB | 23.57 B | CUDA | 13 | | 10000 | tg128 | 2.34 |
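If you're scripting the "offloading more" case, the `-ot` alternation can be generated rather than hand-written. A minimal sketch: given leftover VRAM and a per-layer FFN size, keep as many FFN blocks on GPU as fit and pin the rest to CPU (the layer count and byte figures in the example call are made-up numbers you'd replace with your model's):

```python
def ffn_cpu_pattern(n_layers: int, ffn_bytes_per_layer: int, spare_vram_bytes: int) -> str:
    """Build a llama.cpp -ot override keeping only the FFN blocks that
    won't fit in leftover VRAM on CPU (lowest layers stay on CPU)."""
    fit = min(n_layers, spare_vram_bytes // ffn_bytes_per_layer)  # FFNs that fit on GPU
    cpu_layers = range(0, n_layers - fit)  # everything else stays on CPU
    if not cpu_layers:
        return ""  # everything fits: no override needed
    alternation = "|".join(str(i) for i in cpu_layers)
    return f"blk.({alternation}).ffn=CPU"

# e.g. 80 layers, ~0.45 GiB of FFN weights per layer, 3 GiB of VRAM spare
# (hypothetical numbers): pins layers 0-73 FFNs to CPU
print(ffn_cpu_pattern(80, 483_183_820, 3 * 1024**3))
```

Pass the result as `-ot "$(python gen_pattern.py)"` or similar; the pattern shape matches the `blk.(0|1|2|3|4).ffn=CPU` style used above.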
2025-10-16T22:02:16
https://www.reddit.com/r/LocalLLaMA/comments/1o8jocc/improving_low_vram_performance_for_dense_models/
eloquentemu
self.LocalLLaMA
Qwen Token Counting
0
So I'm really digging Qwen for my use case, which is code review, but I'm having an issue with approximating tokens. The standard (and what Qwen recommends) seems to be to estimate 4 chars per token. However, when I do this and send code, I often exceed the context limit with my agent; the actual token count in the API response is also much higher. I've finally landed on 2.5 characters per token as working without error, but that means my estimate is way too high most of the time. Does anyone have a solution for this problem? I'm using Qwen via API, so I can't use the tokenizer itself to count the tokens. I've considered loading up the 0.6b model just to count tokens, but that's gonna take a big chunk of memory (running on ECS), which feels kinda nuts just to count tokens. Wondering if there is some better way to estimate out there? Perhaps I should count special characters and regular English characters separately?
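One tokenizer-free refinement along those lines: charge alphanumeric text at ~4 chars/token but symbols and non-ASCII characters at roughly one token each, since code punctuation and CJK text tokenize much denser than English prose. The divisors below are guesses to calibrate against the `usage` field the API returns, not Qwen's real tokenizer behavior:

```python
import re

def estimate_tokens(text: str) -> int:
    """Rough token estimate without a tokenizer: ~4 chars/token for
    alphanumeric runs and spaces, ~1 token per symbol/punctuation char,
    ~1 token per non-ASCII char. Ratios are heuristics; tune them
    against the token counts your API actually reports."""
    word_chars = len(re.findall(r"[A-Za-z0-9 ]", text))
    non_ascii = sum(1 for c in text if ord(c) > 127)
    symbols = len(text) - word_chars - non_ascii  # braces, operators, newlines, etc.
    return word_chars // 4 + symbols + non_ascii

print(estimate_tokens("def add(a, b):\n    return a + b"))  # 12
```

For code-heavy input this lands closer to the truth than a flat 4-chars/token while staying cheaper than shipping a tokenizer, though the only exact answer is the tokenizer itself (the tokenizer files alone are small compared to the 0.6B weights, if you can afford them).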
2025-10-16T21:56:12
https://www.reddit.com/r/LocalLLaMA/comments/1o8jizc/qwen_token_counting/
thepetek
self.LocalLLaMA
Gemma3 270M (fine-tuning for the first time)
0
So I tried it with a small three-sentence JSON file (first time fine-tuning, I just want to test it): I prepared the Hugging Face model and LoRA, but in the end the model cracked out and started hallucinating and didn't give me any expected response from the JSON dataset 🌚. I asked ChatGPT and it said that it's overfitting. Can you explain? Always willing to learn new stuff.
2025-10-16T21:46:19
https://www.reddit.com/r/LocalLLaMA/comments/1o8ja9i/gemma3_270fine_tuning_for_the_first_time/
Stardust_vhu
self.LocalLLaMA
1 Petaflop shiny DGX SPARK vs bunch of e-waste: what is Jensen selling to us?
1
[removed]
2025-10-16T21:27:30
https://www.reddit.com/r/LocalLLaMA/comments/1o8itqw/1_petaflop_shiny_dgx_spark_vs_bunch_of_ewaste/
crazzydriver77
self.LocalLLaMA
Any simple alternatives to Continue.dev?
14
So it seems that Continue.dev has decided to continuously make their product worse for local use, hiding the config file and now automatically truncating prompts even after going through the trouble of specifying the context length. I've tried Roo, Kilo, Cline etc. but 10k+ tokens for every request seems excessive and I don't really want an agent. Really I just want a chat window that I can @ context and that can use read-only tools to discover additional context. Anything I should check out? Continue was working great, but with the recent updates it seems like it's time to jump ship before it becomes totally unusable.
2025-10-16T21:10:34
https://www.reddit.com/r/LocalLLaMA/comments/1o8ieap/any_simple_alternatives_to_continuedev/
HEAVYlight123
self.LocalLLaMA
Does anyone know how models get benchmarked on ArtificialAnalysis.ai? Curious who uses it and how to participate.
0
I've recently been exploring the benchmark reports on [ArtificialAnalysis.ai](https://artificialanalysis.ai/), and I’m curious about how models actually get listed or benchmarked there. Is there an open submission process for developers or companies, or is it curated by the Artificial Analysis team? I couldn’t find any public documentation on how to register or submit a model. Also, who typically uses this site — is it mostly researchers, enterprise users, or independent developers tracking model performance trends? Would love to hear from anyone who's interacted with the platform or knows how their benchmarking pipeline works.
2025-10-16T20:59:43
https://www.reddit.com/r/LocalLLaMA/comments/1o8i49g/does_anyone_know_how_models_get_benchmarked_on/
gamzabaw
self.LocalLLaMA
I made a 24/7 Video stream with AI Companion
0
LLM inference runs on one RTX 5090, synced with over 500 pre-rendered video segments so the LLM and video share context.
2025-10-16T20:47:59
https://v.redd.it/l7wm2ubedjvf1
Witty_Side8702
v.redd.it
1970-01-01T00:00:00
0
{}
1o8httz
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/l7wm2ubedjvf1/DASHPlaylist.mpd?a=1763239694%2CNjA4NDg4YTk3M2M5MWQwODYzNDg3YWE2MDJlMWE2MGQ0YTM5MTViNzc3ZmM5OThmNGUzZDYyMTZlZWM0OGNlMA%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/l7wm2ubedjvf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/l7wm2ubedjvf1/HLSPlaylist.m3u8?a=1763239694%2CNjFjOTAxNjFlNmExN2FkMmUzYjk1MWE0YWFlOTY0MGU0OGJkMGE0YTM1MzZjNjY3Nzc2ODNhN2FmMzc0ZGYwNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/l7wm2ubedjvf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1078}}
t3_1o8httz
/r/LocalLLaMA/comments/1o8httz/i_made_a_247_video_stream_with_ai_companion/
false
false
https://external-preview…b55b922aeb8ec5a2
0
{'enabled': False, 'images': [{'id': 'bTRvemJ0YmVkanZmMQJTofdtglFw1aAbkvBY_sjD_TYFQ4HtfPx1JmM5Ccxu', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/bTRvemJ0YmVkanZmMQJTofdtglFw1aAbkvBY_sjD_TYFQ4HtfPx1JmM5Ccxu.png?width=108&crop=smart&format=pjpg&auto=webp&s=2efbdde14bfe015381ccd5069cd8cba36ea77173', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/bTRvemJ0YmVkanZmMQJTofdtglFw1aAbkvBY_sjD_TYFQ4HtfPx1JmM5Ccxu.png?width=216&crop=smart&format=pjpg&auto=webp&s=1829b1fd33bae891f286c926d17c8181863c3dc2', 'width': 216}, {'height': 570, 'url': 'https://external-preview.redd.it/bTRvemJ0YmVkanZmMQJTofdtglFw1aAbkvBY_sjD_TYFQ4HtfPx1JmM5Ccxu.png?width=320&crop=smart&format=pjpg&auto=webp&s=6c2e2aece67d9388b3bb44c254c7a10880685336', 'width': 320}, {'height': 1140, 'url': 'https://external-preview.redd.it/bTRvemJ0YmVkanZmMQJTofdtglFw1aAbkvBY_sjD_TYFQ4HtfPx1JmM5Ccxu.png?width=640&crop=smart&format=pjpg&auto=webp&s=bc61bb01096c9c4aea299a9cac19d80ac9f3c723', 'width': 640}, {'height': 1710, 'url': 'https://external-preview.redd.it/bTRvemJ0YmVkanZmMQJTofdtglFw1aAbkvBY_sjD_TYFQ4HtfPx1JmM5Ccxu.png?width=960&crop=smart&format=pjpg&auto=webp&s=dc9d0564064cfa4ae3e1059c93d5f7a84f0ecb0d', 'width': 960}, {'height': 1924, 'url': 'https://external-preview.redd.it/bTRvemJ0YmVkanZmMQJTofdtglFw1aAbkvBY_sjD_TYFQ4HtfPx1JmM5Ccxu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f8c7efb4978cc23a21d09647be59e8d6f51979a6', 'width': 1080}], 'source': {'height': 1924, 'url': 'https://external-preview.redd.it/bTRvemJ0YmVkanZmMQJTofdtglFw1aAbkvBY_sjD_TYFQ4HtfPx1JmM5Ccxu.png?format=pjpg&auto=webp&s=6cfcef7610b0e4162efcfc3c6806d4e975a38fec', 'width': 1080}, 'variants': {}}]}
Helloo, 96GB GPU from Huawei for $1400, slower than NVIDIA but the VRAM (GN)
27
2025-10-16T20:45:16
https://www.youtube.com/watch?v=qGe_fq68x-Q
UniqueAttourney
youtube.com
1970-01-01T00:00:00
0
{}
1o8hrd6
false
{'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/qGe_fq68x-Q?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="China&#39;s GPU Competition: 96GB Huawei Atlas 300I Duo Dual-GPU Tear-Down"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/qGe_fq68x-Q/hqdefault.jpg', 'thumbnail_width': 480, 'title': "China's GPU Competition: 96GB Huawei Atlas 300I Duo Dual-GPU Tear-Down", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1o8hrd6
/r/LocalLLaMA/comments/1o8hrd6/helloo_96gb_gpu_from_huawei_for_1400_slower_than/
false
false
default
27
{'enabled': False, 'images': [{'id': 'h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U.jpeg?width=108&crop=smart&auto=webp&s=90f45af767e6f767c313c41c443f43dcfe82b5ae', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U.jpeg?width=216&crop=smart&auto=webp&s=770cce1cdb2531c4b7feb6acd590105cbdf36a2b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U.jpeg?width=320&crop=smart&auto=webp&s=436c9e5446c362e3900f824dd945b76349b991ff', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U.jpeg?auto=webp&s=08911d1e6251a8bcad2cc7a94f21cfbd9ba1b16a', 'width': 480}, 'variants': {}}]}
New small model from Meta intended for limited compute?
0
[https://huggingface.co/facebook/MobileLLM-Pro](https://huggingface.co/facebook/MobileLLM-Pro)
2025-10-16T20:44:25
https://www.reddit.com/r/LocalLLaMA/comments/1o8hqlp/new_small_model_from_meta_intended_for_limited/
Ok-Recognition-3177
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8hqlp
false
null
t3_1o8hqlp
/r/LocalLLaMA/comments/1o8hqlp/new_small_model_from_meta_intended_for_limited/
false
false
self
0
{'enabled': False, 'images': [{'id': 'jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=108&crop=smart&auto=webp&s=aba0ec468bc2a4a98997fd48e6127c3eeb427fc0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=216&crop=smart&auto=webp&s=c0499a1df8b59833a847cc48057d27e94f25947e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=320&crop=smart&auto=webp&s=38a3792ccfc14756b2d54477027016d433e7490d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=640&crop=smart&auto=webp&s=7e2417b6932d5a1fcaf49ac7d2561558b6a9873e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=960&crop=smart&auto=webp&s=bb65cc88a85d20bd124f39d26552675853105a87', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=1080&crop=smart&auto=webp&s=f6b0738606be5a5354cd63955dd76436298a5e21', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?auto=webp&s=5d0652d5708c785a0a543a387a1a56e5559dfad1', 'width': 1200}, 'variants': {}}]}
z.ai glm android app
2
Hello guys, GLM 4.6 is now my go-to model, and I'm so sad there's no Android app available. Did I miss an unofficial Android implementation, or should I vibe code it myself? 🤗
2025-10-16T20:36:38
https://www.reddit.com/r/LocalLLaMA/comments/1o8hjko/zai_glm_android_app/
Whiplashorus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8hjko
false
null
t3_1o8hjko
/r/LocalLLaMA/comments/1o8hjko/zai_glm_android_app/
false
false
self
2
null
DGX SPARK vs M1 max
3
Hi guys, now that we have some benchmarks of the Nvidia DGX Spark, I'm pretty sure there's no reason not to buy a Mac Studio M1 Max if running models under 64GB of RAM is fine for you, and it would be worth every penny. Am I thinking right, or am I missing the real potential of the DGX Spark?
2025-10-16T20:35:17
https://www.reddit.com/r/LocalLLaMA/comments/1o8hiaa/dgx_spark_vs_m1_max/
Whiplashorus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8hiaa
false
null
t3_1o8hiaa
/r/LocalLLaMA/comments/1o8hiaa/dgx_spark_vs_m1_max/
false
false
self
3
null
Qwen3-VL-30B in llama.cpp
32
This release of llama.cpp can be used to run yairpatch/qwen3-vl-30b-a3b- GGUFs. Builds are pre-release, so issues are possible. But the overall state is very useable, so hopefully we will soon see it merged into llama.cpp. [https://github.com/Thireus/llama.cpp/releases/tag/tr-qwen3-vl-3-b6981-ab45b1a](https://github.com/Thireus/llama.cpp/releases/tag/tr-qwen3-vl-3-b6981-ab45b1a) Also if you rename release to e.g. llama-b6981-bin-macos-arm64.zip, you will be able to install it as a backend into Jan.
2025-10-16T20:33:57
https://www.reddit.com/r/LocalLLaMA/comments/1o8hh1n/qwen3vl30b_in_llamacpp/
egomarker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8hh1n
false
null
t3_1o8hh1n
/r/LocalLLaMA/comments/1o8hh1n/qwen3vl30b_in_llamacpp/
false
false
self
32
{'enabled': False, 'images': [{'id': 'zQTA0g-UE_tn7uN98FdGBrqDnB9tjYWAmYKvzivSZ9g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zQTA0g-UE_tn7uN98FdGBrqDnB9tjYWAmYKvzivSZ9g.png?width=108&crop=smart&auto=webp&s=c22bf731391b1c8187df0f82442713af79ab3b66', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zQTA0g-UE_tn7uN98FdGBrqDnB9tjYWAmYKvzivSZ9g.png?width=216&crop=smart&auto=webp&s=7493d4e029c6b6e5ca10062f8d1ff001ae3493db', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zQTA0g-UE_tn7uN98FdGBrqDnB9tjYWAmYKvzivSZ9g.png?width=320&crop=smart&auto=webp&s=143e83c3296d92e2cc09139eb11de478e8ef6b13', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zQTA0g-UE_tn7uN98FdGBrqDnB9tjYWAmYKvzivSZ9g.png?width=640&crop=smart&auto=webp&s=a0e93f478fe25bdbe4aa9f7d3e440779a2cb0138', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zQTA0g-UE_tn7uN98FdGBrqDnB9tjYWAmYKvzivSZ9g.png?width=960&crop=smart&auto=webp&s=f6dfdafff3b91510f11a9cb861ac4586e67c8231', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zQTA0g-UE_tn7uN98FdGBrqDnB9tjYWAmYKvzivSZ9g.png?width=1080&crop=smart&auto=webp&s=60df9bc481f6f6c3586efba39ac116367c84a7d6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zQTA0g-UE_tn7uN98FdGBrqDnB9tjYWAmYKvzivSZ9g.png?auto=webp&s=f603419ba3342e68f9c8d40da7deb662c0cd9474', 'width': 1200}, 'variants': {}}]}
Since DGX Spark is a disappointment... What is the best value for money hardware today?
137
My current compute box (2×1080 Ti) is failing, so I’ve been renting GPUs by the hour. I’d been waiting for DGX Spark, but early reviews look disappointing for the price/perf. I’m ready to build a new PC and I’m torn between a **single** high-end GPU or **dual** mid/high GPUs. What’s the **best price/performance configuration** I can build **for ≤ $3,999** (tower, not a rack server)? I don't care about RGBs and things like that - it will be kept in the basement and not looked at.
2025-10-16T20:26:47
https://www.reddit.com/r/LocalLLaMA/comments/1o8hacw/since_dgx_spark_is_a_disappointment_what_is_the/
goto-ca
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8hacw
false
null
t3_1o8hacw
/r/LocalLLaMA/comments/1o8hacw/since_dgx_spark_is_a_disappointment_what_is_the/
false
false
self
137
null
China's GPU Competition: 96GB Huawei Atlas 300I Duo Dual-GPU Tear-Down
120
We need benchmarks.
2025-10-16T20:26:18
https://youtu.be/qGe_fq68x-Q?si=0w_3qjLvmHTXPWxI
sub_RedditTor
youtu.be
1970-01-01T00:00:00
0
{}
1o8h9ye
false
{'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/qGe_fq68x-Q?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="China&#39;s GPU Competition: 96GB Huawei Atlas 300I Duo Dual-GPU Tear-Down"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/qGe_fq68x-Q/hqdefault.jpg', 'thumbnail_width': 480, 'title': "China's GPU Competition: 96GB Huawei Atlas 300I Duo Dual-GPU Tear-Down", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1o8h9ye
/r/LocalLLaMA/comments/1o8h9ye/chinas_gpu_competition_96gb_huawei_atlas_300i_duo/
false
false
default
120
{'enabled': False, 'images': [{'id': 'h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U.jpeg?width=108&crop=smart&auto=webp&s=90f45af767e6f767c313c41c443f43dcfe82b5ae', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U.jpeg?width=216&crop=smart&auto=webp&s=770cce1cdb2531c4b7feb6acd590105cbdf36a2b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U.jpeg?width=320&crop=smart&auto=webp&s=436c9e5446c362e3900f824dd945b76349b991ff', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/h0AXu77KNbLZvwy9Mngk-UUEgeAjkwRBjGOdlujFn7U.jpeg?auto=webp&s=08911d1e6251a8bcad2cc7a94f21cfbd9ba1b16a', 'width': 480}, 'variants': {}}]}
Help Us Choose Our Next Open-source Local AI App
5
We’re picking **one** fully open-source app to build next with [Llamafarm's](https://github.com/llama-farm/llamafarm) local AI development tools. It’ll run great on a laptop and be easy for anyone to use. No accounts. Clean UX. Real docs. One-click run. 100% local - models, RAG, runtime, app all local - (Google, OpenAI, ISP doesn't get any info). **Healthcare Assistant.** Drag in labs, CCD/Blue Button exports, or portal PDFs. It translates jargon, highlights “out of range” items, and drafts questions for your next visit. Optional modules for medication interactions and guideline lookups. I hate looking up terms in Google or OpenAI and getting ads for a month. Offline-friendly and fast on everyday hardware. **Legal Aid.** Multi-language plain guidance for immigration paperwork, divorce/custody, housing, and small claims. It maps your situation to the right forms, creates a prep checklist, and generates letter/filing drafts with citations to public sources. Those questions you don't want the world to know. **Financial Helper.** Ask about taxes, budgeting, entity setup (LLC vs S-Corp), and “what changed this year.” Import a **local** CSV/ledger to get categorized insights, cash-flow flags, and draft checklists for filings. Plus explain-like-I’m-five summaries with links to official rules. Ask the questions you may be embarrassed to ask a friend. **Image Fixer.** On-device touch-ups: blemish removal, background cleanup, face/plate blur, smart crop, and batch processing. Side-by-side before/after, history panel with undo, and simple presets (headshot, marketplace, family album). No uploads, just quick results. Please don't send your family photos to OpenAI; keep them local. What would *you* actually use every week? If it’s none of these, tell us what would be—teacher prep kit, research brief builder, local dev helper for code search, small-biz ops toolkit, something else? 
If we do this, we’ll do it right: open source, one-click run, clear docs, tests, evals, and a tidy UI—built to showcase the power and potential of local AI. Drop your vote and one line on why. Add one must-have and one deal-breaker. If you’re up for feedback or safe sample data, say so and we’ll follow up. Which one should we ship first?
2025-10-16T20:05:09
https://www.reddit.com/r/LocalLLaMA/comments/1o8gpm0/help_us_choose_our_next_opensource_local_ai_app/
badgerbadgerbadgerWI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8gpm0
false
null
t3_1o8gpm0
/r/LocalLLaMA/comments/1o8gpm0/help_us_choose_our_next_opensource_local_ai_app/
false
false
self
5
{'enabled': False, 'images': [{'id': 'WjdFLl-JIeq85kxVDd7K9fhliU5J0WTqeTyvNfWRYYs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/WjdFLl-JIeq85kxVDd7K9fhliU5J0WTqeTyvNfWRYYs.png?width=108&crop=smart&auto=webp&s=7c2b5e0d4f495fc30e809eac8a2a920c0953f493', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/WjdFLl-JIeq85kxVDd7K9fhliU5J0WTqeTyvNfWRYYs.png?width=216&crop=smart&auto=webp&s=d9520d3add05b42c38835536bc5507c6006617df', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/WjdFLl-JIeq85kxVDd7K9fhliU5J0WTqeTyvNfWRYYs.png?width=320&crop=smart&auto=webp&s=8c86207212f782dd5b21e45ea118a8eaf9ca843c', 'width': 320}], 'source': {'height': 448, 'url': 'https://external-preview.redd.it/WjdFLl-JIeq85kxVDd7K9fhliU5J0WTqeTyvNfWRYYs.png?auto=webp&s=3c9e426538230567ab4e8a51554e1e0128b477f6', 'width': 448}, 'variants': {}}]}
FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution
2
TL;DR — FlashVSR is a streaming, one-step diffusion-based video super-resolution framework with block-sparse attention and a Tiny Conditional Decoder. It reaches ~17 FPS at 768×1408 on a single A100 GPU. A Locality-Constrained Attention design further improves generalization and perceptual quality on ultra-high-resolution videos.
2025-10-16T19:48:39
https://zhuang2002.github.io/FlashVSR/
FullstackSensei
zhuang2002.github.io
1970-01-01T00:00:00
0
{}
1o8g9q2
false
null
t3_1o8g9q2
/r/LocalLLaMA/comments/1o8g9q2/flashvsr_towards_realtime_diffusionbased/
false
false
default
2
null
issue swith gemma 3 4b
0
Guys, I am using Gemma 3 4B Q4 as an evaluator for images, asking it to make decisions based on rules in the system prompt. The issue is that sometimes it's fast and responds within a second, while other times it takes 35 seconds to analyze an image. Per the logs, most of that time is spent encoding the image for the model by the mmproj file (as far as I know, that's the vision encoder for the model). I am using LM Studio, and this happens a lot, even with the same image.
2025-10-16T19:38:45
https://www.reddit.com/r/LocalLLaMA/comments/1o8g0cj/issue_swith_gemma_3_4b/
No_Disk_6915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8g0cj
false
null
t3_1o8g0cj
/r/LocalLLaMA/comments/1o8g0cj/issue_swith_gemma_3_4b/
false
false
self
0
null
LLM on Xeon 6 core and RTX 3050 6GB
1
Are there any AI models, like image generation, that would work well on a 6-core Xeon CPU, 64GB RAM, and an RTX 3050 6GB GPU? I suspect not much will run well given the low-end GPU with only 6GB of VRAM, but let me know if there's anything I may have missed. I also have a Ryzen 9 laptop with an RTX 4060 Ti.
2025-10-16T19:29:55
https://www.reddit.com/r/LocalLLaMA/comments/1o8fs4g/llm_on_xeon_6_core_and_rtx_3050_6gb/
xenomorph-85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8fs4g
false
null
t3_1o8fs4g
/r/LocalLLaMA/comments/1o8fs4g/llm_on_xeon_6_core_and_rtx_3050_6gb/
false
false
self
1
null
This is interesting…
29
A new release from Andrej Karpathy. Train your own model with $100 https://github.com/karpathy/nanochat/discussions/1
2025-10-16T19:29:14
https://www.reddit.com/r/LocalLLaMA/comments/1o8frim/this_is_interesting/
nicoracarlo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8frim
false
null
t3_1o8frim
/r/LocalLLaMA/comments/1o8frim/this_is_interesting/
false
false
self
29
null
Interesting post about using DGX Spark compute for prefill and Mac Studio memory bandwidth for decode
8
I found this blog post super interesting, describing Exo using a DGX Spark for prefill and a Mac Studio for decode, leveraging each device's strengths.
2025-10-16T18:28:11
https://blog.exolabs.net/nvidia-dgx-spark/
thedirtyscreech
blog.exolabs.net
1970-01-01T00:00:00
0
{}
1o8e4ie
false
null
t3_1o8e4ie
/r/LocalLLaMA/comments/1o8e4ie/interesting_post_about_using_dgx_spark_compute/
false
false
default
8
{'enabled': False, 'images': [{'id': 'K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=108&crop=smart&auto=webp&s=7c1fdbd5fb183937e67a1b86563189501f140a1c', 'width': 108}, {'height': 225, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=216&crop=smart&auto=webp&s=4336a9720c86192fe14b35a8a061bbdb14638fa8', 'width': 216}, {'height': 333, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=320&crop=smart&auto=webp&s=33316abc9096614847add2f23a8ba3e6cb9c1c12', 'width': 320}, {'height': 667, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=640&crop=smart&auto=webp&s=b9f5266792809d968871e23573f02585582d09e3', 'width': 640}, {'height': 1001, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=960&crop=smart&auto=webp&s=3349d00121c6be480cbfe6aa236959947f9e6414', 'width': 960}, {'height': 1126, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=1080&crop=smart&auto=webp&s=c19c6191247dbd506fa799499d6be93a04d3468e', 'width': 1080}], 'source': {'height': 4449, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?auto=webp&s=34aee5e6359649c16bc33b554bf9338ecce95693', 'width': 4264}, 'variants': {}}]}
What's the Oct 25 optimal jank buy for larger MOEs (120B param+)?
8
The obvious play is: Used EPYC 7-series + DDR4 + a few 3090s ($500 for CPU+mobo, \~$300 for RAM, $600 per 3090). What's the cheapest way to move up to DDR5 bandwidth? * I see Xeon QS & ES chips floating around for <$200… what's the best cheap/used motherboard for them? * Has anybody pulled off a DDR5 8+ channel, 3+ PCI slot build for under say $1200 (without GPUs)? Beyond that: * AMD 128GB NUCs ... limited (but perhaps it's cheap to stack multiple?) * $8–10K M4 Max ... but maybe waiting for M5 Max (and then buying used) makes more sense.
2025-10-16T18:09:34
https://www.reddit.com/r/LocalLLaMA/comments/1o8dm7m/whats_the_oct_25_optimal_jank_buy_for_larger_moes/
Leopold_Boom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8dm7m
false
null
t3_1o8dm7m
/r/LocalLLaMA/comments/1o8dm7m/whats_the_oct_25_optimal_jank_buy_for_larger_moes/
false
false
self
8
null
Questions about Qwen3 types
8
Hello there, I have an AMD 9950X3D and 4080 Super 16GB with 64GB of DDR5. I'm trying to decide what Qwen3 models to run for local vibe coding 20-30k token code bases and other general writing/editing tasks. Qwen3 VL 8B Thinking and Qwen3 VL 30B A3B Thinking are the two I'm looking at. Why isn't there an FP8 native 8B model? On HF, I don't see GGUFs of many of the FP8 models, is there a reason for this? Is doing a Q5\_K or Q6\_K from FP8 not possible or just not worth it? The 30B has 3B active, why doesn't the 8B have a similar thing like 8B-A3B? Why isn't there any intermediate size like 12B or 16B? I remember there used to be lots of 13B models. It seems like 8B-VL-Thinking-A3B-GGUF Q6\_K would be the ideal model. Obviously, my understanding is not super thorough, so I would appreciate it if y'all could help educate me (kindly if possible).
2025-10-16T17:50:59
https://www.reddit.com/r/LocalLLaMA/comments/1o8d3uc/questions_about_qwen3_types/
LargelyInnocuous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8d3uc
false
null
t3_1o8d3uc
/r/LocalLLaMA/comments/1o8d3uc/questions_about_qwen3_types/
false
false
self
8
null
Internship with local LLMs at AMD!
67
Hi folks! My team and I at AMD have been having a lot of fun developing agents, building next-gen apps for local LLMs, fine-tuning models, and posting a lot of that here on r/LocalLLaMA. We're now looking for a (ideally grad) student who loves hands-on local AI for an internship on our team. Our team really tries to contribute quite a bit to the open source community. One of our key projects is [Lemonade ](https://github.com/lemonade-sdk/lemonade)(Ollama-like local app with a really cool Discord community). Here is the rough description of what we envision for this position: * Develop an agentic LLM framework, designed to operate effectively on client devices * Build and refine the framework by developing a focused application (from computer use to database reasoning - your choice!) * Experiment with fine-tuning, LoRAs, RAG, and agent architectures * Work side-by-side with the Lemonade team =D Experience with some of the above (e.g., fine-tuning) is a huge bonus. We also love people who are active on open-source GitHub projects, Hugging Face, and of course r/LocalLLaMA ;) If you're excited about this opportunity with local AI, let's chat! Please apply using the link below. Please also feel free to ask questions here or DM me on Discord (look for Daniel H). Excited to hear from this community! Details here: careers (dot) amd (dot) com/careers-home/jobs/70208
2025-10-16T17:50:36
https://www.reddit.com/r/LocalLLaMA/comments/1o8d3gt/internship_with_local_llms_at_amd/
dholanda_amd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8d3gt
false
null
t3_1o8d3gt
/r/LocalLLaMA/comments/1o8d3gt/internship_with_local_llms_at_amd/
false
false
self
67
{'enabled': False, 'images': [{'id': 'ilSYv9BK1_NekzX3xbIrwsKuDBS7KEQalauCaqlcIj4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ilSYv9BK1_NekzX3xbIrwsKuDBS7KEQalauCaqlcIj4.png?width=108&crop=smart&auto=webp&s=8101a12f8ef8750afbd17d3254213b77e3e4ea95', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ilSYv9BK1_NekzX3xbIrwsKuDBS7KEQalauCaqlcIj4.png?width=216&crop=smart&auto=webp&s=cd3f2ff40d2b44584ef04768361adfaa22654881', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ilSYv9BK1_NekzX3xbIrwsKuDBS7KEQalauCaqlcIj4.png?width=320&crop=smart&auto=webp&s=2ef4e5c6d572d61a8e815b0cfa1197e47836f6bc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ilSYv9BK1_NekzX3xbIrwsKuDBS7KEQalauCaqlcIj4.png?width=640&crop=smart&auto=webp&s=78355d00dd7667d7092d5116666866da79bb34b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ilSYv9BK1_NekzX3xbIrwsKuDBS7KEQalauCaqlcIj4.png?width=960&crop=smart&auto=webp&s=f87daa95f98c43e52a6cbe4bca637571ec0adc58', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ilSYv9BK1_NekzX3xbIrwsKuDBS7KEQalauCaqlcIj4.png?width=1080&crop=smart&auto=webp&s=796b1ef466bac01d95b83c62b99f25513a9d9689', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ilSYv9BK1_NekzX3xbIrwsKuDBS7KEQalauCaqlcIj4.png?auto=webp&s=934b52b23e8bc7b2100a34b6ddfe744f2efed884', 'width': 1200}, 'variants': {}}]}
Exo linking Mac studio with DGX
9
EXO's newest demo combines two of NVIDIA's DGX Spark systems with Apple's M3 Ultra–powered Mac Studio to make use of the disparate strengths of each machine: Spark has more raw compute muscle, while the Mac Studio can move data around much faster. EXO 1.0, currently in early access, blends the two into a single inference pipeline, and it apparently works shockingly well.
2025-10-16T17:46:00
https://www.tomshardware.com/software/two-nvidia-dgx-spark-systems-combined-with-m3-ultra-mac-studio-to-create-blistering-llm-system-exo-labs-demonstrates-disaggregated-ai-inference-and-achieves-a-2-8-benchmark-boost
Badger-Purple
tomshardware.com
1970-01-01T00:00:00
0
{}
1o8cz1h
false
null
t3_1o8cz1h
/r/LocalLLaMA/comments/1o8cz1h/exo_linking_mac_studio_with_dgx/
false
false
default
9
{'enabled': False, 'images': [{'id': 'hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=108&crop=smart&auto=webp&s=bf5936334c9e0d6cec4bd239b6bcbc698133af94', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=216&crop=smart&auto=webp&s=e6a51335985a7c032c3e1d8a369c015d70be5690', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=320&crop=smart&auto=webp&s=f95af4e8cfedc71cc2e6efe73e1e4907df5cef87', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=640&crop=smart&auto=webp&s=bdf78665a24bb3ff417303dab346ea60afc9e667', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=960&crop=smart&auto=webp&s=30a2939d98debf13f77d38b89c479601f80e8902', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=1080&crop=smart&auto=webp&s=36aa0251ea34cf9bc60e122a38568e722b31f50d', 'width': 1080}], 'source': {'height': 1337, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?auto=webp&s=5247ca82390f25f14fcd33d9d4d52de15d6ed2b9', 'width': 2376}, 'variants': {}}]}
Internship opportunity with local LLMS at AMD!
1
[removed]
2025-10-16T17:35:44
https://www.reddit.com/r/LocalLLaMA/comments/1o8cp1v/internship_opportunity_with_local_llms_at_amd/
dholanda_amd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8cp1v
false
null
t3_1o8cp1v
/r/LocalLLaMA/comments/1o8cp1v/internship_opportunity_with_local_llms_at_amd/
false
false
self
1
null
mtmd : support home-cooked Mistral Small Omni by ngxson · Pull Request #14928 · ggml-org/llama.cpp
22
Support a home-cooked version of Mistral Small which can take **both audio and image** as input Link to GGUF: [https://huggingface.co/ngxson/Home-Cook-Mistral-Small-Omni-24B-2507-GGUF](https://huggingface.co/ngxson/Home-Cook-Mistral-Small-Omni-24B-2507-GGUF) (This is a multimodal model created by merging Mistral Small 2506 (with vision capabilities) and Voxtral 2507 (with audio capabilities) using a modified version of the `mergekit` tool.)
2025-10-16T17:32:29
https://github.com/ggml-org/llama.cpp/pull/14928
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1o8clx0
false
null
t3_1o8clx0
/r/LocalLLaMA/comments/1o8clx0/mtmd_support_homecooked_mistral_small_omni_by/
false
false
default
22
{'enabled': False, 'images': [{'id': 'LDnY4VtL8nEMhrRnSn7xRje529JikFQ9xlCA17hsv2k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LDnY4VtL8nEMhrRnSn7xRje529JikFQ9xlCA17hsv2k.png?width=108&crop=smart&auto=webp&s=53e5618dc5fa26d4a5a0288717f70c1eba16e455', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LDnY4VtL8nEMhrRnSn7xRje529JikFQ9xlCA17hsv2k.png?width=216&crop=smart&auto=webp&s=6640ef4e0ba6ec6220848085539e35dc935cc6da', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LDnY4VtL8nEMhrRnSn7xRje529JikFQ9xlCA17hsv2k.png?width=320&crop=smart&auto=webp&s=868742cfe13de076ea3e0eb1c4bdc14078258ea4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LDnY4VtL8nEMhrRnSn7xRje529JikFQ9xlCA17hsv2k.png?width=640&crop=smart&auto=webp&s=43014e497d8695214852c7b34ef392748272ec68', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LDnY4VtL8nEMhrRnSn7xRje529JikFQ9xlCA17hsv2k.png?width=960&crop=smart&auto=webp&s=a5e9ef7a136480a127f655d2410d9aa4a6da985c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LDnY4VtL8nEMhrRnSn7xRje529JikFQ9xlCA17hsv2k.png?width=1080&crop=smart&auto=webp&s=c081d26f8da1e1d9721b7e5be6d40d76408e2d46', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LDnY4VtL8nEMhrRnSn7xRje529JikFQ9xlCA17hsv2k.png?auto=webp&s=9d8a9ab4c139f5fa2f80dca4dedfd0a630ceb5af', 'width': 1200}, 'variants': {}}]}
new 1B LLM by meta
116
https://preview.redd.it/…k/MobileLLM-Pro)
2025-10-16T17:20:06
https://www.reddit.com/r/LocalLLaMA/comments/1o8c9ta/new_1b_llm_by_meta/
BreakfastFriendly728
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8c9ta
false
null
t3_1o8c9ta
/r/LocalLLaMA/comments/1o8c9ta/new_1b_llm_by_meta/
false
false
https://external-preview…76711e4c13bbe1ca
116
{'enabled': False, 'images': [{'id': 'jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=108&crop=smart&auto=webp&s=aba0ec468bc2a4a98997fd48e6127c3eeb427fc0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=216&crop=smart&auto=webp&s=c0499a1df8b59833a847cc48057d27e94f25947e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=320&crop=smart&auto=webp&s=38a3792ccfc14756b2d54477027016d433e7490d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=640&crop=smart&auto=webp&s=7e2417b6932d5a1fcaf49ac7d2561558b6a9873e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=960&crop=smart&auto=webp&s=bb65cc88a85d20bd124f39d26552675853105a87', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?width=1080&crop=smart&auto=webp&s=f6b0738606be5a5354cd63955dd76436298a5e21', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jpWPa_miOIzMa-yE2f-cKYYuEKEP1Bglx0QJxru_lGs.png?auto=webp&s=5d0652d5708c785a0a543a387a1a56e5559dfad1', 'width': 1200}, 'variants': {}}]}
Vulkan with Strix halo igpu and external 3090s not possible?
6
I bought an AI Max 395 mini PC with 128GB in the hope that I could connect 3090 eGPUs and run larger models like GLM-4.6. However, I get memory errors and crashes when trying to load a model with llama.cpp using the iGPU plus any other GPU. Before I bought the Strix Halo PC, I confirmed with the Radeon 780M iGPU on my old PC that Vulkan could run iGPUs and NVIDIA GPUs together. But it's not working at all with Strix Halo. Am I screwed, and will this never work? I can't even use ROCm with my 395; AMD's support for their own "AI Max" series seems abysmal.
2025-10-16T17:13:27
https://www.reddit.com/r/LocalLLaMA/comments/1o8c3hu/vulkan_with_strix_halo_igpu_and_external_3090s/
Goldkoron
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8c3hu
false
null
t3_1o8c3hu
/r/LocalLLaMA/comments/1o8c3hu/vulkan_with_strix_halo_igpu_and_external_3090s/
false
false
self
6
null
The model apocalypse is coming, which one do you choose to save and what other software?
21
So the year is ${current\_year} + X. A totalitarian world government is in power and decides that locally run "unapproved" and "unaligned" LLMs are a danger to them (it's also in the public interest, of course, the terrorists may use them), as well as the associated software to use and train them (you can have guns, but they are useless if you don't have ammunition). You manage to send a message to the past: "You have an 8TB SSD, you have to back up the most useful models and software for the future." What is your list of "must have" models and software? Post it here to save the future! (Yes, I do have an 8TB SSD, I foresee something like this happening, and I want to have a nice selection of models and SW.)
2025-10-16T17:03:31
https://www.reddit.com/r/LocalLLaMA/comments/1o8bt83/the_model_apocalypse_is_coming_which_one_do_you/
HumanDrone8721
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8bt83
false
null
t3_1o8bt83
/r/LocalLLaMA/comments/1o8bt83/the_model_apocalypse_is_coming_which_one_do_you/
false
false
self
21
null
Tutorial - How to Use YanoljaNEXT-Rosetta Translation Model with LM Studio?
2
Thank you all for your interest in the YanoljaNEXT-Rosetta translation model! It has been the most trending translation model on HuggingFace! Some people asked me how to use this model in LM Studio, so here I would like to briefly explain it. # What is YanoljaNEXT-Rosetta? YanoljaNEXT-Rosetta is a powerful multilingual translation model that supports 32 languages. It's designed to provide high-quality translations with context awareness and domain-specific terminology support. Also, you can input a JSON string as the source text! # Step 1: Download and Install LM Studio First, you need to download LM Studio: 1. Visit [**https://lmstudio.ai/download**](https://lmstudio.ai/download) 2. Download the version for your operating system (Windows, Mac, or Linux) 3. Install LM Studio on your computer 4. Launch the application LM Studio is a desktop application that allows you to run large language models locally on your machine without needing cloud services or API keys. # Step 2: Search for the Model Once LM Studio is open: 1. Click on the **Search icon** (🔍) in the left sidebar 2. In the search box, type: **"rosetta"** 3. You should see **"Yanoljanext Rosetta 12B 2510"** appear in the search results https://preview.redd.it/rr8fl37e6ivf1.png?width=2552&format=png&auto=webp&s=055dd1a5f2097473f9a821adf4fb23ef1e2ee5a5 # Step 3: Download the Model 1. Click on the **"YanoljaNEXT Rosetta 12B 2510"** model from the search results 2. You'll see different quantization options (GGUF format with different sizes) 3. Choose a quantization based on your hardware: * **Q4\_K\_M**: Good balance of quality and size (recommended for most users) * **Q5\_K\_M**: Better quality, slightly larger * **Q8\_0**: Highest quality, largest size 4. Click the **Download** button 5. Wait for the download to complete (the model is around 7-8 GB depending on quantization) Once downloaded, the model will appear in your **"My Models"** section. # Step 4: Load the Model 1. 
Click on the **Chat icon** (💬) in the left sidebar to go to the chat interface 2. At the top of the chat window, click on the **model selector dropdown** 3. Find **"YanoljaNEXT Rosetta 12B 2510"** in the list under "Your Models" 4. Click on it to load the model The model will take a few seconds to load into memory. # Step 5: Create a Translation Preset 1. First, you need to switch the UI to Power User or Developer 2. Then click the **settings button** on the top right corner 3. In the settings, look for the **preset section** (usually on the left or top) 4. Click the **"+" button** to create a new preset 5. Name it **"translation"** (or any name you prefer) 6. This preset will store your configuration for translation tasks [Power User or Developer](https://preview.redd.it/efhz9s0o8ivf1.png?width=1400&format=png&auto=webp&s=637192a5852cdce90bb4db79849c310b24e65051) [Clicking the monkey spanner button will show the settings panel](https://preview.redd.it/wgaxqmur8ivf1.png?width=1180&format=png&auto=webp&s=c13b84d5926d551a3a227089bf65d40c22f9dfac) # Step 6: Configure the System Prompt 1. Click on the **"Context"** tab at the top of the preset editor 2. In the **"System Prompt"** field, enter your translation instructions Here's an example system prompt: Translate the user's text to Korean. Context: Simple introduction about a tech company. Tone: Informative and helpful Glossary: - Yanolja NEXT -> 야놀자넥스트 - travel industry -> 여행 산업 Provide the final translation immediately without any other text. **You can customize this based on your needs:** * Change the target language (e.g., "Translate the user's text to Spanish") * Adjust the context and tone * Add your own glossary terms for domain-specific vocabulary # Step 7: Configure the Jinja Prompt Template **This is the most critical step!** YanoljaNEXT-Rosetta uses a specific prompt format that you must configure correctly: 1. Click on the **"Model"** tab at the top in settings 2. 
Scroll down until you find **"Prompt Template"** 3. You'll see two options: switch to **"Template (Jinja)"** mode (not "Manual") 4. Clear the existing template and paste this code: {{- bos_token -}} {%- set system_messages = messages | selectattr('role', 'equalto', 'system') | list -%} {%- set user_messages = messages | selectattr('role', 'equalto', 'user') | list -%} {%- if system_messages | length > 0 -%} <start_of_turn>instruction{{ '\n' }} {{- system_messages[0]['content'] | trim -}}<end_of_turn>{{ '\n' }} {%- endif -%} {%- if user_messages | length > 0 -%} <start_of_turn>source{{ '\n' }} {{- user_messages[-1]['content'] | trim -}}<end_of_turn>{{ '\n' }} {%- endif -%} {%- if add_generation_prompt -%} <start_of_turn>translation{{ '\n' }} {%- endif -%} # Understanding the Template: This Jinja template structures the conversation in the exact format that YanoljaNEXT-Rosetta was trained on: * `<start_of_turn>instruction`: Contains your system prompt with translation instructions * `<start_of_turn>source`: Contains the source text that needs to be translated * `<start_of_turn>translation`: Signals the model to generate the translation Without this specific format, the model will not produce good translations! # Step 8: Save Your Configuration 1. Click the **"Save"** button to save your preset 2. You can also use **"Save As New..."** if you want to create multiple presets for different language pairs or use cases # Step 9: Start Translating! Now you're ready to use the model: 1. Make sure your **"translation"** preset is selected 2. In the chat input box, type or paste the text you want to translate 3. Press the **Up icon** or hit Enter 4. The model will generate the translation # Example Usage (with the Korean-target system prompt from Step 6): **Input (English):** Hello! Our company develops artificial intelligence technology. **Output (Korean):** 안녕하세요! 저희 회사는 인공지능 기술을 개발하고 있습니다. 
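If you want to sanity-check the prompt this template renders (for example, against LM Studio's developer logs), here is a plain-Python sketch of the same prompt shape. The helper name is mine, not part of the model card, and the empty `bos_token` assumes the runtime adds BOS itself:

```python
def build_rosetta_prompt(messages, bos_token="", add_generation_prompt=True):
    """Mirror of the Jinja template above: instruction / source / translation turns."""
    system = [m for m in messages if m["role"] == "system"]
    user = [m for m in messages if m["role"] == "user"]
    parts = [bos_token]
    if system:
        # First system message becomes the instruction turn.
        parts.append("<start_of_turn>instruction\n"
                     + system[0]["content"].strip() + "<end_of_turn>\n")
    if user:
        # Last user message becomes the source turn.
        parts.append("<start_of_turn>source\n"
                     + user[-1]["content"].strip() + "<end_of_turn>\n")
    if add_generation_prompt:
        # Open the translation turn so the model completes it.
        parts.append("<start_of_turn>translation\n")
    return "".join(parts)

prompt = build_rosetta_prompt([
    {"role": "system", "content": "Translate the user's text to Korean."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

If the text in the chat window doesn't follow this instruction/source/translation shape, the template from Step 7 wasn't applied.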
YanoljaNEXT-Rosetta supports translation between 32 languages, including: * English, Korean, Japanese, Chinese * Spanish, French, German, Italian, Portuguese * Russian, Arabic, Hindi, Thai, Vietnamese * And many more! # Troubleshooting: **Problem**: The model generates text but not a proper translation * **Solution**: Make sure you've configured the Jinja template correctly in Step 7 **Problem**: The translation quality is poor * **Solution**: Check your system prompt - be specific about the target language and context **Problem**: The model is too slow * **Solution**: Try a smaller quantization (Q4\_K\_M) or enable GPU acceleration in Settings **Problem**: Running out of memory * **Solution**: Use a smaller quantization or close other applications That's it! You're now ready to use YanoljaNEXT-Rosetta for professional translation tasks in LM Studio. Happy translating! 🌐✨ If you have any questions or run into issues, feel free to ask in the comments or check the model's HuggingFace page for more information.
2025-10-16T17:01:18
https://www.reddit.com/r/LocalLLaMA/comments/1o8bqvg/tutorial_how_to_use_yanoljanextrosetta/
OldPin8654
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8bqvg
false
null
t3_1o8bqvg
/r/LocalLLaMA/comments/1o8bqvg/tutorial_how_to_use_yanoljanextrosetta/
false
false
https://b.thumbs.redditm…LQXz-3yZ0jdM.jpg
2
{'enabled': False, 'images': [{'id': 'jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ.png?width=108&crop=smart&auto=webp&s=e7c8590a62cea205bab07f4af2106acd17647234', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ.png?width=216&crop=smart&auto=webp&s=b8e97ff3e3cbd559f5cfec4d45354072f9199795', 'width': 216}, {'height': 314, 'url': 'https://external-preview.redd.it/jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ.png?width=320&crop=smart&auto=webp&s=148ac8db409ea255e35d7d096d276e209f723056', 'width': 320}, {'height': 628, 'url': 'https://external-preview.redd.it/jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ.png?width=640&crop=smart&auto=webp&s=0e5387370955868bfeaf5b179af9bcd9f4d386e0', 'width': 640}, {'height': 943, 'url': 'https://external-preview.redd.it/jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ.png?width=960&crop=smart&auto=webp&s=f3006b837409bae9fc6c17ec6d26c491ab030c3c', 'width': 960}, {'height': 1061, 'url': 'https://external-preview.redd.it/jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ.png?width=1080&crop=smart&auto=webp&s=917d5d4a75a9072697370e444cfd616f53fb8520', 'width': 1080}], 'source': {'height': 3192, 'url': 'https://external-preview.redd.it/jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ.png?auto=webp&s=5c8ce58e82a6a71d27ac2675fa858c1970e44256', 'width': 3248}, 'variants': {}}]}
Is there a Colab for converting vision models to GGUF?
2
I've been using the GLM 9B model, which outperforms most other models I've tried, including Mistral Small and Nemo. It writes better, understands things better, actually follows instructions to the letter, and runs extremely fast. Now I realized there's a https://huggingface.co/huihui-ai/Huihui-GLM-4.5V-abliterated version, but no GGUF quants, unfortunately... I can't do it myself since I only have 12GB VRAM. It would be nice if I could convert it to imatrix Q4 or Q5 so low-VRAM people could use it.
2025-10-16T17:00:10
https://www.reddit.com/r/LocalLLaMA/comments/1o8bpli/is_there_collab_for_converting_vision_model_to/
WEREWOLF_BX13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8bpli
false
null
t3_1o8bpli
/r/LocalLLaMA/comments/1o8bpli/is_there_collab_for_converting_vision_model_to/
false
false
self
2
{'enabled': False, 'images': [{'id': 'QH9YPPJelxzy3cI_2hRq-KK9DN-oYFjOYs2eAm8-pR4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QH9YPPJelxzy3cI_2hRq-KK9DN-oYFjOYs2eAm8-pR4.png?width=108&crop=smart&auto=webp&s=d2b87ec5380d7f50766214ad26ea7eb2fdefbea8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QH9YPPJelxzy3cI_2hRq-KK9DN-oYFjOYs2eAm8-pR4.png?width=216&crop=smart&auto=webp&s=cd6b13f2d269a3a1fa531537c11dbb2f5e21b6d8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QH9YPPJelxzy3cI_2hRq-KK9DN-oYFjOYs2eAm8-pR4.png?width=320&crop=smart&auto=webp&s=310a38fbb5a1bd6836ad7a6405eb2c0400faa238', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QH9YPPJelxzy3cI_2hRq-KK9DN-oYFjOYs2eAm8-pR4.png?width=640&crop=smart&auto=webp&s=8a1d15cf8b9ef12c33b5c88d43134fca4732a9df', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QH9YPPJelxzy3cI_2hRq-KK9DN-oYFjOYs2eAm8-pR4.png?width=960&crop=smart&auto=webp&s=7c86b8c235afec4d1049ad42beb16c377b8cc349', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QH9YPPJelxzy3cI_2hRq-KK9DN-oYFjOYs2eAm8-pR4.png?width=1080&crop=smart&auto=webp&s=ae99e81d9913a85a1a70c34467591169e954f962', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QH9YPPJelxzy3cI_2hRq-KK9DN-oYFjOYs2eAm8-pR4.png?auto=webp&s=69a975e110dbf0b0b3b464463df41ad9f7bfda3e', 'width': 1200}, 'variants': {}}]}
GLM 4.6 is hilarious, I wish I could run this on my own PC lol
245
Prompt: >I have a fun task for you! Search the web to find the most (infamously) popular annoying LLM clichés (aka AI slop) in roleplay and creative writing and generate a sentence using only those phrases. Make the sentence as long as possible to cover them all. GLM 4.6 response: >Based on the search results, I've compiled the most notoriously annoying LLM clichés in roleplay and creative writing. Here's a sentence that combines as many of these overused phrases as possible: >**"In a world where the tapestry of human emotions unfolds like a delicate symphony, I'm sorry, but as an AI language model, I cannot delve into unlocking your full potential without first embarking on a journey to harness the vast landscape of creativity, though I'm just an AI and don't have feelings, but I'm here to help you explore the intricate dance of storytelling where every thread weaves a new chapter in the grand narrative of imagination."**
2025-10-16T16:34:48
https://www.reddit.com/r/LocalLLaMA/comments/1o8b00e/glm_46_is_hilarious_i_wish_i_could_run_this_on_my/
Cool-Chemical-5629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8b00e
false
null
t3_1o8b00e
/r/LocalLLaMA/comments/1o8b00e/glm_46_is_hilarious_i_wish_i_could_run_this_on_my/
false
false
self
245
null
GLM 4.6 air when?
268
2025-10-16T16:32:48
https://i.redd.it/1bhgri4w3ivf1.png
gamblingapocalypse
i.redd.it
1970-01-01T00:00:00
0
{}
1o8ay34
false
null
t3_1o8ay34
/r/LocalLLaMA/comments/1o8ay34/glm_46_air_when/
false
false
default
268
{'enabled': True, 'images': [{'id': '1bhgri4w3ivf1', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/1bhgri4w3ivf1.png?width=108&crop=smart&auto=webp&s=2fde82cea32028f2535880e075360368a89bda2a', 'width': 108}, {'height': 213, 'url': 'https://preview.redd.it/1bhgri4w3ivf1.png?width=216&crop=smart&auto=webp&s=b5f927f36f95a11f29667d7655ad310e570d6286', 'width': 216}, {'height': 316, 'url': 'https://preview.redd.it/1bhgri4w3ivf1.png?width=320&crop=smart&auto=webp&s=660b0d931d85bfb508e26c3e8647b3b8a75f8708', 'width': 320}], 'source': {'height': 493, 'url': 'https://preview.redd.it/1bhgri4w3ivf1.png?auto=webp&s=388453cc813c36cf1ee89037336dd550cb9a55d3', 'width': 499}, 'variants': {}}]}
Will Apple make a MacBook Pro with an Ultra chip?
0
Leaked internal code showed that Apple tested versions of the 14- and 16-inch MacBook Pros with **M3 Ultra** chips. Once they resolve the thermal and power constraints, it will be very interesting! Just imagine 1 TB of RAM on an M5 or M6 Ultra MacBook Pro. The price will be absurd, and the battery life will be short. Maybe they will restrict the RAM, since it will use more power?
2025-10-16T16:31:41
https://www.reddit.com/r/LocalLLaMA/comments/1o8ax0f/will_apple_make_a_macbook_pro_with_an_ultra_chip/
power97992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8ax0f
false
null
t3_1o8ax0f
/r/LocalLLaMA/comments/1o8ax0f/will_apple_make_a_macbook_pro_with_an_ultra_chip/
false
false
self
0
null
I finally built a fully local AI scribe for macOS using Apple’s new Foundation Models
4
For the past two years I’ve been obsessed with one question: can an AI scribe run completely on-device for clinicians? Most AI scribe companies raise millions to process patient data in the cloud, and clinicians still pay hundreds each month for access. I wanted to make that obsolete. I’ve tried every local setup imaginable: WhisperX, Parakeet, and a 3B fine-tuned model that once outscored GPT-4 on SOAP note generation (it’s on Hugging Face). The real breakthrough came with macOS 26, when Apple opened up Foundation Models and adapter training to developers. I trained a custom adapter on a large synthetic clinical dataset and built it directly into a macOS app. Everything, including speech-to-text, runs locally. Apple’s new Speech Analyzer turned out far better than earlier Siri models and performs roughly on par with Parakeet or Whisper. Because it’s fully local, I can run a multi-pass summarization chain. I can’t share every detail, but it consistently produces around three times fewer hallucinations than GPT-5 on the same dialogue dataset. It runs on Apple’s Neural Engine, so it’s efficient, quiet, and doesn’t heat up much, though it’s naturally slower than a cloud GPU. STT is blazingly fast btw. Curious if anyone else here is experimenting with Apple’s new local AI stack. If you work in healthcare or just like tinkering, the beta is open. Link in the comments.
2025-10-16T16:22:04
https://www.reddit.com/r/LocalLLaMA/comments/1o8anxg/i_finally_built_a_fully_local_ai_scribe_for_macos/
MajesticAd2862
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8anxg
false
null
t3_1o8anxg
/r/LocalLLaMA/comments/1o8anxg/i_finally_built_a_fully_local_ai_scribe_for_macos/
false
false
self
4
null
The Little Model That Could Navigate Chicago
1
Have you all seen this? Maybe this was obvious to other people, but I never considered that a model could learn directions in the physical world without a pretty complex understanding of the physical world. Has anyone tried anything similar? I'd like to read more about these kinds of models; it seems pretty futuristic.
2025-10-16T16:19:12
https://blog.minibase.ai/p/little-model-navigate-chicago?r=184pw&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
sunsetblud
blog.minibase.ai
1970-01-01T00:00:00
0
{}
1o8al8r
false
null
t3_1o8al8r
/r/LocalLLaMA/comments/1o8al8r/the_little_model_that_could_navigate_chicago/
false
false
https://external-preview…b7d92bd6839c610d
1
{'enabled': False, 'images': [{'id': 'GqMl6tSoROPH5iusB4u3txKudiUNm-0y8lS-Z1Hn7Ig', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GqMl6tSoROPH5iusB4u3txKudiUNm-0y8lS-Z1Hn7Ig.jpeg?width=108&crop=smart&auto=webp&s=8f8391d42beea7cb520dbbd85eb64fb85aafae66', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GqMl6tSoROPH5iusB4u3txKudiUNm-0y8lS-Z1Hn7Ig.jpeg?width=216&crop=smart&auto=webp&s=67c4ab5ac705aef3455a64db0d2135949e3b7bda', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GqMl6tSoROPH5iusB4u3txKudiUNm-0y8lS-Z1Hn7Ig.jpeg?width=320&crop=smart&auto=webp&s=1bb43e65e9ab489585b6590ba03f99dee9ebd541', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GqMl6tSoROPH5iusB4u3txKudiUNm-0y8lS-Z1Hn7Ig.jpeg?width=640&crop=smart&auto=webp&s=d6e59b2f49b5f718c9919320ee6aaa453bc26420', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GqMl6tSoROPH5iusB4u3txKudiUNm-0y8lS-Z1Hn7Ig.jpeg?width=960&crop=smart&auto=webp&s=5c0695c9be842e0eac34041d450be632977e17b1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GqMl6tSoROPH5iusB4u3txKudiUNm-0y8lS-Z1Hn7Ig.jpeg?width=1080&crop=smart&auto=webp&s=f619432269c7239b75da64a3d868091f3ee616ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GqMl6tSoROPH5iusB4u3txKudiUNm-0y8lS-Z1Hn7Ig.jpeg?auto=webp&s=7f7d94f46678eb1a8e4467f9be5c80149112eae2', 'width': 1200}, 'variants': {}}]}
A new, super simple LLM benchmark for testing changes across models, quants, parameters, samplers, engines, etc
8
2025-10-16T15:57:09
https://github.com/jabberjabberjabber/Context-Tester/
Eisenstein
github.com
1970-01-01T00:00:00
0
{}
1o89znk
false
null
t3_1o89znk
/r/LocalLLaMA/comments/1o89znk/a_new_super_simple_llm_benchmark_for_testing/
false
false
default
8
{'enabled': False, 'images': [{'id': '0joRre5VS2PgnS7WJD0-RjSuntbOAn34cwNgbAHQMrk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0joRre5VS2PgnS7WJD0-RjSuntbOAn34cwNgbAHQMrk.png?width=108&crop=smart&auto=webp&s=8bd989d7e1ded626e82db0373374763c5be5e22b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0joRre5VS2PgnS7WJD0-RjSuntbOAn34cwNgbAHQMrk.png?width=216&crop=smart&auto=webp&s=1ea9708d86a21ae8d7b613585bafd8f330f8cc85', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0joRre5VS2PgnS7WJD0-RjSuntbOAn34cwNgbAHQMrk.png?width=320&crop=smart&auto=webp&s=40cd97571ac94cf68652dc1f09b0294fd4477a56', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0joRre5VS2PgnS7WJD0-RjSuntbOAn34cwNgbAHQMrk.png?width=640&crop=smart&auto=webp&s=92e5b40ebc550a11ecbbf21117405dc916931b9e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0joRre5VS2PgnS7WJD0-RjSuntbOAn34cwNgbAHQMrk.png?width=960&crop=smart&auto=webp&s=5d023e8fb72fad665965bc61f2752225aed8b089', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0joRre5VS2PgnS7WJD0-RjSuntbOAn34cwNgbAHQMrk.png?width=1080&crop=smart&auto=webp&s=d2a54913b35b17507e911c568c08335ca38ea9fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0joRre5VS2PgnS7WJD0-RjSuntbOAn34cwNgbAHQMrk.png?auto=webp&s=de23a3dce5ed6903d4cbafeb7242d300489be57c', 'width': 1200}, 'variants': {}}]}
I built an open-source framework for LLM agents that uses a Graph for complex task execution, complete with a realtime web-based dashboard.
1
I've been exploring a lot of the autonomous agent frameworks out there, and I noticed a common pattern: many of them operate on a linear "to-do list" model. They execute tasks sequentially, which is great for simple workflows but struggles with complex problems where steps have branching dependencies. I wanted to build something that could handle this more complex reality. So, I created **LLM Agent X**, an open-source framework that models and executes tasks as a Directed Acyclic Graph (DAG). The project is now stable, well-documented, and I'd love to get your feedback. # The Core Idea: Graphs, Not Lists Instead of just maintaining a list of pending tasks, LLM Agent X treats every task as a node in a graph. This allows it to: * Plan non-linear workflows: A task can depend on the output of *multiple* other tasks. * Execute in parallel: Independent branches of the graph can be executed concurrently. * Adapt dynamically: The agent can propose and integrate new sub-graphs of tasks during execution if a particular step proves too complex. This approach makes it much better suited for sophisticated planning and overcoming uncertainty. # Features * `DAGAgent`: The core agent that uses a multi-phase planning and execution process to manage the task graph. It can decompose a high-level goal, propose new steps when needed, and synthesize results from its dependencies. * Interactive "Mission Control" UI: To make this observable, I built a real-time web interface using Next.js and ReactFlow. It's not just a log viewer; it's a command center. * Visualize the entire task graph as it's being built and executed. * Click on any task node to inspect its current status, dependencies, results, and execution history. * Full Human-in-the-Loop Control: You can issue directives to any task: PAUSE, RESUME, CANCEL, or even REDIRECT it with corrective guidance. If an agent gets stuck, it can pause and ask the operator a clarifying question directly through the UI. 
* Secure Code Execution: The agent can use a Python execution tool that runs inside an isolated Docker container. This allows it to perform complex calculations or scripting without compromising the host system. * Context-Awareness with Document Support: You can provide the agent with initial context by uploading documents. These become "completed" nodes in the graph that other tasks can depend on from the very beginning. Here are a couple of screenshots of the UI: https://preview.redd.it/4oit7ku8levf1.png?width=2560&format=png&auto=webp&s=59d54eb09a022795701f0ff8f77a61ded2b771d9 https://preview.redd.it/v5yqr6r9levf1.png?width=2560&format=png&auto=webp&s=8e33d75f73156d049ad5d7e8b19aa6d3ef695ecd # The Tech Stack The interactive mode is a distributed system designed for robustness. * Gateway: A FastAPI server that provides the REST API and manages a [Socket.IO](http://Socket.IO) connection for real-time UI updates. * Worker: The Python process running the InteractiveDAGAgent. This is the "brain" that consumes directives and publishes state changes. * Message Queue: The Gateway and Worker communicate asynchronously via RabbitMQ, ensuring that directives and state updates aren't lost. * Frontend UI: Built with Next.js, React, Tailwind CSS, and ReactFlow. * Containerization: The entire stack (Gateway, Worker, RabbitMQ, UI, and the Python Sandbox) is orchestrated with Docker Compose for a simple `docker-compose up` launch. The project is fully open-source under the MIT license. The goal is to provide a robust foundation for building more sophisticated and observable agentic systems. GitHub Repo: [https://github.com/llm-agent-x/llm\_agent\_x](https://github.com/llm-agent-x/llm_agent_x) I'm open to any and all feedback, questions, or ideas. Thanks for taking a look!
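The "independent branches execute in parallel" idea above can be sketched with the standard library's `graphlib`: ready nodes come out in waves, and everything in one wave could run concurrently. The task names here are illustrative, not from LLM Agent X itself:

```python
from graphlib import TopologicalSorter

# Dependency graph: each key runs only after all of its values have finished.
deps = {
    "synthesize": {"research_a", "research_b"},
    "research_a": {"plan"},
    "research_b": {"plan"},
}

ts = TopologicalSorter(deps)
ts.prepare()

waves = []  # each wave is a batch of tasks with all dependencies satisfied
while ts.is_active():
    ready = sorted(ts.get_ready())
    waves.append(ready)
    ts.done(*ready)   # mark the wave finished, unlocking dependents

print(waves)  # [['plan'], ['research_a', 'research_b'], ['synthesize']]
```

A linear to-do list would serialize `research_a` and `research_b`; the graph view makes their independence explicit.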
2025-10-16T15:47:46
https://www.reddit.com/r/LocalLLaMA/comments/1o89qna/i_built_an_opensource_framework_for_llm_agents/
Pretend_Guava7322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o89qna
false
null
t3_1o89qna
/r/LocalLLaMA/comments/1o89qna/i_built_an_opensource_framework_for_llm_agents/
false
false
self
1
null
Thoughts on M5 MacBook Pro to run models locally?
5
It's a huge performance boost, but unfortunately with so little RAM (16 GB), my thinking is I might as well stay with my MacBook Air M4 and use cloud services rather than shell out at least 2.5x the amount.
2025-10-16T15:36:13
https://www.reddit.com/r/LocalLLaMA/comments/1o89fnx/thoughts_on_m5_macbook_pro_to_run_models_locally/
A4_Ts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o89fnx
false
null
t3_1o89fnx
/r/LocalLLaMA/comments/1o89fnx/thoughts_on_m5_macbook_pro_to_run_models_locally/
false
false
self
5
null
Tensor Logic: The Language of AI
7
*Progress in AI is hindered by the lack of a programming language with all the requisite features. Libraries like PyTorch and TensorFlow provide automatic differentiation and efficient GPU implementation, but are additions to Python, which was never intended for AI. Their lack of support for automated reasoning and knowledge acquisition has led to a long and costly series of hacky attempts to tack them on. On the other hand, AI languages like LISP and Prolog lack scalability and support for learning. This paper proposes tensor logic, a language that solves these problems by unifying neural and symbolic AI at a fundamental level. The sole construct in tensor logic is the tensor equation, based on the observation that logical rules and Einstein summation are essentially the same operation, and all else can be reduced to them. I show how to elegantly implement key forms of neural, symbolic and statistical AI in tensor logic, including transformers, formal reasoning, kernel machines and graphical models. Most importantly, tensor logic makes new directions possible, such as sound reasoning in embedding space. This combines the scalability and learnability of neural networks with the reliability and transparency of symbolic reasoning, and is potentially a basis for the wider adoption of AI.*
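The claimed equivalence of logical rules and Einstein summation is easy to see in a toy example: the rule `path2(x,z) :- edge(x,y), edge(y,z)` is an einsum over Boolean adjacency tensors, where summing out the shared variable `y` performs the join. This sketch is mine, not code from the paper:

```python
import numpy as np

# edge relation as a Boolean adjacency tensor: 0 -> 1 -> 2
edge = np.array([[0, 1, 0],
                 [0, 0, 1],
                 [0, 0, 0]])

# Joining on the shared variable y is Einstein summation over that index;
# thresholding back to {0, 1} recovers the logical relation path2(x, z).
path2 = (np.einsum("xy,yz->xz", edge, edge) > 0).astype(int)

print(path2[0, 2])  # 1: there is a two-step path from node 0 to node 2
```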
2025-10-16T15:21:52
https://arxiv.org/abs/2510.12269
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1o891o4
false
null
t3_1o891o4
/r/LocalLLaMA/comments/1o891o4/tensor_logic_the_language_of_ai/
false
false
default
7
null
ARM Partners with Meta
17
Arm partners with Meta on data centers and next-generation software; the collaboration may be interesting. Info: https://x.com/Arm/status/1978494349966025044?t=9tw4dYon0ecqebNQfE5rsQ&s=19
2025-10-16T15:16:10
https://i.redd.it/wpf7pikaqhvf1.jpeg
Illustrious-Swim9663
i.redd.it
1970-01-01T00:00:00
0
{}
1o88w9c
false
null
t3_1o88w9c
/r/LocalLLaMA/comments/1o88w9c/arm_partners_with_meta/
false
false
default
17
{'enabled': True, 'images': [{'id': 'wpf7pikaqhvf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/wpf7pikaqhvf1.jpeg?width=108&crop=smart&auto=webp&s=2357a6c0320224f6a1aba11e027e834bb8a25453', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/wpf7pikaqhvf1.jpeg?width=216&crop=smart&auto=webp&s=ffb3540b335f8090d158eaed39d77d31c34e00e8', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/wpf7pikaqhvf1.jpeg?width=320&crop=smart&auto=webp&s=dd96b2f37ba65b5f8024dee0c227020921ea04d5', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/wpf7pikaqhvf1.jpeg?width=640&crop=smart&auto=webp&s=8ab9b69847afb2486fa0aa6d707cbb6b7c878439', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/wpf7pikaqhvf1.jpeg?width=960&crop=smart&auto=webp&s=0568e13759f8e3b0d84619e15494ffec8cd058ec', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/wpf7pikaqhvf1.jpeg?width=1080&crop=smart&auto=webp&s=9ab2ca414ab9d445d48e7c315bc0522e066605bd', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/wpf7pikaqhvf1.jpeg?auto=webp&s=761a4416efa3ca0dfab50455a5e264326ee3e1bd', 'width': 1080}, 'variants': {}}]}
Is there something like vLLM or Ollama but for Speech-to-Text models?
1
I want to experiment with Parakeet for dictation, but I'm having a lot of trouble making it work. I'd like a low-latency setup that can handle a few concurrent requests. With LLMs it's quite easy now: I just fire up a Docker container with vLLM or Ollama and I can use my LLMs immediately. Does anyone know if anything similar exists for speech-to-text (STT) models? Searching for this mostly turns up STT models themselves or agentic frameworks that use STT models... If you have hints for me, I'd be very happy.
2025-10-16T15:07:50
https://www.reddit.com/r/LocalLLaMA/comments/1o88oh0/is_there_something_like_vllm_or_ollama_but_for/
JChataigne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o88oh0
false
null
t3_1o88oh0
/r/LocalLLaMA/comments/1o88oh0/is_there_something_like_vllm_or_ollama_but_for/
false
false
self
1
null
UIGENT-30B-3A-Preview works with deepsite to make websites
2
You can now use UIGENT agentically to replace your local coding models and get better design! You can access the HF Spaces link here: [https://huggingface.co/spaces/Tesslate/deepsite-v2](https://huggingface.co/spaces/Tesslate/deepsite-v2)! We're hosting it ourselves, so it will crash from time to time; just ping me or leave a comment in this thread and I'll restart the server. Otherwise, you can download the HF Spaces code or our open-source designer platform -> [https://github.com/TesslateAI/Designer](https://github.com/TesslateAI/Designer) and run it locally! And you can download the model here: [https://huggingface.co/Tesslate/UIGENT-30B-3A-Preview](https://huggingface.co/Tesslate/UIGENT-30B-3A-Preview), as well as GGUFs: [GGUFs Link](https://huggingface.co/models?other=base_model:quantized:Tesslate/UIGENT-30B-3A-Preview) Now, if you want a model that is much better at design but more opinionated (and potentially chaotic), try UIGEN-FX [4B](https://huggingface.co/Tesslate/UIGEN-FX-4B-Preview) and [32B Agentic](https://huggingface.co/Tesslate/UIGEN-FX-Agentic-32B) We're always talking about AI and are open-sourcing a vibecoding tool designed to work with smaller models next week, so join the community: [Community](https://discord.gg/DkzMzwBTaw)
2025-10-16T15:07:39
https://v.redd.it/w083bp6rmhvf1
United-Rush4073
/r/LocalLLaMA/comments/1o88oaf/uigent30b3apreview_works_with_deepsite_to_make/
1970-01-01T00:00:00
0
{}
1o88oaf
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w083bp6rmhvf1/DASHPlaylist.mpd?a=1763348863%2CZGNkNDFiYzk5NjFkNWZmOGJiMDg4NDFmYTY1YTRiMjg0NWQzY2M1YjcwMjI4NzhkMGY1ZDQ1MDBhOTQ5NjQ2Ng%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/w083bp6rmhvf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 952, 'hls_url': 'https://v.redd.it/w083bp6rmhvf1/HLSPlaylist.m3u8?a=1763348863%2CNTY3YmQ0Yjk2M2E1MmY2NjhhZTg5ZWQwOWFhNTAxOTI1MjcxYWRiNmMwZjJkMzc1OTYyZTc3ZmQ1NzIwNDk5Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w083bp6rmhvf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1o88oaf
/r/LocalLLaMA/comments/1o88oaf/uigent30b3apreview_works_with_deepsite_to_make/
false
false
https://external-preview…62eafdf117eb0b29
2
{'enabled': False, 'images': [{'id': 'bHJrcTdxNnJtaHZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/bHJrcTdxNnJtaHZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=108&crop=smart&format=pjpg&auto=webp&s=3627836e2d93c66e7721cfc965011edf28bfc8a0', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/bHJrcTdxNnJtaHZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=216&crop=smart&format=pjpg&auto=webp&s=0dd867ba7cea2a020aeec7c09409cdd7498ab60b', 'width': 216}, {'height': 158, 'url': 'https://external-preview.redd.it/bHJrcTdxNnJtaHZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=320&crop=smart&format=pjpg&auto=webp&s=0280edf0a23fc5ae16adc4f73f5c9872df97acdc', 'width': 320}, {'height': 317, 'url': 'https://external-preview.redd.it/bHJrcTdxNnJtaHZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=640&crop=smart&format=pjpg&auto=webp&s=004309b5ae5456814aec5e1d5f99ba0d87856d0a', 'width': 640}, {'height': 476, 'url': 'https://external-preview.redd.it/bHJrcTdxNnJtaHZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=960&crop=smart&format=pjpg&auto=webp&s=247d33784302f323f2f192bae27e0bea7ffa5de1', 'width': 960}, {'height': 535, 'url': 'https://external-preview.redd.it/bHJrcTdxNnJtaHZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=1080&crop=smart&format=pjpg&auto=webp&s=538f6c81a2022e153e260b3819b4aa40ab598f7e', 'width': 1080}], 'source': {'height': 1264, 'url': 'https://external-preview.redd.it/bHJrcTdxNnJtaHZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?format=pjpg&auto=webp&s=683dccc46dd5967654cf73eb2af5a5fdd216ccfc', 'width': 2548}, 'variants': {}}]}
HuggingChat Omni: new chat app by Hugging Face
43
HuggingChat is back! The main new feature is auto-routing to the best open-source model for your query, making it competitive with, and often better than, base ChatGPT. More info about it: [https://x.com/victormustar/status/1978817795312808065?s=46](https://x.com/victormustar/status/1978817795312808065?s=46)
2025-10-16T15:02:18
https://huggingface.co/chat/
paf1138
huggingface.co
1970-01-01T00:00:00
0
{}
1o88j9i
false
null
t3_1o88j9i
/r/LocalLLaMA/comments/1o88j9i/huggingchat_omni_new_chat_app_by_hugging_face/
false
false
default
43
null
Looking for self-hosted AI alternatives to Adobe Animate Express for explainer videos
0
I’ve been using Adobe Animate Express to make explainer videos, but the character models are too generic for my taste. I’d like to use my own custom model instead of the stock cartoon characters in Adobe Express that so many people already use. Are there any AI-powered tools that allow self-hosting or more customization? Has anyone here had similar experiences or found good alternatives?
2025-10-16T14:56:23
https://www.reddit.com/r/LocalLLaMA/comments/1o88di2/looking_for_selfhhosted_ai_alternatives_to_adobe/
Either_Audience_1937
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o88di2
false
null
t3_1o88di2
/r/LocalLLaMA/comments/1o88di2/looking_for_selfhhosted_ai_alternatives_to_adobe/
false
false
self
0
null
Has anyone tried EmoVoice (yanghaha0908) — emotional TTS? How do I actually run it?
7
Hey — I found this repo: [https://github.com/yanghaha0908/EmoVoice](https://github.com/yanghaha0908/EmoVoice). Has anyone tried it? How’s the voice quality and emotion control? I’m new to this and on Windows. Could someone give a very short, Windows-only walkthrough to get **one** audio file (download → install → run)? Thanks a ton!
2025-10-16T14:50:51
https://www.reddit.com/r/LocalLLaMA/comments/1o888fs/has_anyone_tried_emovoice_yanghaha0908_emotional/
Forsaken-Turnip-6664
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o888fs
false
null
t3_1o888fs
/r/LocalLLaMA/comments/1o888fs/has_anyone_tried_emovoice_yanghaha0908_emotional/
false
false
self
7
null
Distil-PII: family of PII redaction SLMs
3
We trained and released a family of small language models (SLMs) specialized for policy-aware PII redaction. The 1B model, which can be deployed on a laptop, matches a frontier 600B+ LLM (DeepSeek 3.1) in prediction accuracy.
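For context on what a specialized SLM replaces: the usual naive approach is regex matching, which shows why PII redaction needs more than patterns. A quick illustrative sketch (these patterns are deliberately simplistic and are not from the Distil-PII repo):

```python
import re

# Naive regex baseline for PII redaction -- the brittle approach a trained SLM
# is meant to beat. Patterns are illustrative only, not from the Distil-PII repo.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched span with its PII label."""
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-867-5309."))
```

The limitation is obvious: obfuscated or contextual PII ("jane dot doe at example dot com", or an address described in prose) sails straight through a pattern list, which is exactly the gap a policy-aware model targets.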
2025-10-16T14:44:18
https://github.com/distil-labs/Distil-PII
party-horse
github.com
1970-01-01T00:00:00
0
{}
1o882a3
false
null
t3_1o882a3
/r/LocalLLaMA/comments/1o882a3/distilpii_family_of_pii_redaction_slms/
false
false
default
3
{'enabled': False, 'images': [{'id': 'O_gL1U3L1p3-vSC-SY63BYIemJsMFDjZscM68KkUT8E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O_gL1U3L1p3-vSC-SY63BYIemJsMFDjZscM68KkUT8E.png?width=108&crop=smart&auto=webp&s=beb66c8b7951728f90ac3a167d2a5a697fb0c139', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/O_gL1U3L1p3-vSC-SY63BYIemJsMFDjZscM68KkUT8E.png?width=216&crop=smart&auto=webp&s=dae6798995cc3ca3cf5ac18ee93702789c3dca69', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/O_gL1U3L1p3-vSC-SY63BYIemJsMFDjZscM68KkUT8E.png?width=320&crop=smart&auto=webp&s=4b16522a95d7cef2e86f6044ef71be9197cadf0c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/O_gL1U3L1p3-vSC-SY63BYIemJsMFDjZscM68KkUT8E.png?width=640&crop=smart&auto=webp&s=7099e0a2b4708e42b79cdffbd672a6712e564209', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/O_gL1U3L1p3-vSC-SY63BYIemJsMFDjZscM68KkUT8E.png?width=960&crop=smart&auto=webp&s=2228ce5106ed74478b085578f3134c2dfacb8c7f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/O_gL1U3L1p3-vSC-SY63BYIemJsMFDjZscM68KkUT8E.png?width=1080&crop=smart&auto=webp&s=fb4b95ff3a9685dc770b678045c6b3a50feba0b0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/O_gL1U3L1p3-vSC-SY63BYIemJsMFDjZscM68KkUT8E.png?auto=webp&s=6e1ddc1e54c315de2b9b924c550a1f042de0b924', 'width': 1200}, 'variants': {}}]}
But can someone correct me: I'm curious how an LLM can generate new hypotheses if it is based only on prediction of the next token. Isn't Gemma a simple LLM trained on medical data?
8
2025-10-16T14:41:54
https://i.redd.it/nwhvk3i6khvf1.png
panspective
i.redd.it
1970-01-01T00:00:00
0
{}
1o88042
false
null
t3_1o88042
/r/LocalLLaMA/comments/1o88042/but_can_someone_correct_me_im_curious_how_an_llm/
false
false
default
8
{'enabled': True, 'images': [{'id': 'nwhvk3i6khvf1', 'resolutions': [{'height': 197, 'url': 'https://preview.redd.it/nwhvk3i6khvf1.png?width=108&crop=smart&auto=webp&s=ba22b3dae627090c6f037c31bb760ea5c03146cf', 'width': 108}, {'height': 395, 'url': 'https://preview.redd.it/nwhvk3i6khvf1.png?width=216&crop=smart&auto=webp&s=46d504866bb4556515341027411416858235d477', 'width': 216}, {'height': 585, 'url': 'https://preview.redd.it/nwhvk3i6khvf1.png?width=320&crop=smart&auto=webp&s=8b68488309ed3a244e3fa9fe6a5ddc02acbca8d4', 'width': 320}, {'height': 1171, 'url': 'https://preview.redd.it/nwhvk3i6khvf1.png?width=640&crop=smart&auto=webp&s=b7b9b73c809acb0d60aaa1c286a9e4054969b4c4', 'width': 640}, {'height': 1757, 'url': 'https://preview.redd.it/nwhvk3i6khvf1.png?width=960&crop=smart&auto=webp&s=211c083a40c6c693c477b7cabf77348323ce6668', 'width': 960}, {'height': 1977, 'url': 'https://preview.redd.it/nwhvk3i6khvf1.png?width=1080&crop=smart&auto=webp&s=0cc08d6fc7b6a7eda171adaa9665c3ff6e92ea06', 'width': 1080}], 'source': {'height': 1977, 'url': 'https://preview.redd.it/nwhvk3i6khvf1.png?auto=webp&s=8bce7fd86891e5d0ec8346a356a9d42ed804760e', 'width': 1080}, 'variants': {}}]}
I know the DGX Spark isn’t what a lot people hoped it would be, but what if……
9
What if you bought a ConnectX-7 NIC PCIe card and connected the Spark’s ConnectX-7 port to an existing AI rig that had a couple of 3090s in it? Would you be able to offload some layers to your 3090s and use the DGX Spark's unified memory for the other layers or whatever? Is this a thing? Or is it not worth even trying? Just curious.
2025-10-16T14:35:31
https://i.redd.it/42fhtkk1jhvf1.jpeg
Porespellar
i.redd.it
1970-01-01T00:00:00
0
{}
1o87u2f
false
null
t3_1o87u2f
/r/LocalLLaMA/comments/1o87u2f/i_know_the_dgx_spark_isnt_what_a_lot_people_hoped/
false
false
default
9
{'enabled': True, 'images': [{'id': '42fhtkk1jhvf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/42fhtkk1jhvf1.jpeg?width=108&crop=smart&auto=webp&s=f8aee70ccb3532b437ca47b665057d1b674a4012', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/42fhtkk1jhvf1.jpeg?width=216&crop=smart&auto=webp&s=69584a39a3a75776f27ca6720e59f4ee3f8118c9', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/42fhtkk1jhvf1.jpeg?width=320&crop=smart&auto=webp&s=f89601fba971b9a58c9b24892cbfc124af808104', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/42fhtkk1jhvf1.jpeg?width=640&crop=smart&auto=webp&s=63509ea9a99a07f56b3e3f3db9a472885786f097', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/42fhtkk1jhvf1.jpeg?width=960&crop=smart&auto=webp&s=aed94588c72907a3d8a98c7135728bf637ec2018', 'width': 960}], 'source': {'height': 1061, 'url': 'https://preview.redd.it/42fhtkk1jhvf1.jpeg?auto=webp&s=f5149f4ca7f61b3df32462ff608d08da9f61413f', 'width': 1061}, 'variants': {}}]}
GVPO: Group Variance Policy Optimization for Large Language Model Post-Training
1
*Post-training plays a crucial role in refining and aligning large language models to meet specific tasks and human preferences. While recent advancements in post-training techniques, such as Group Relative Policy Optimization (GRPO), leverage increased sampling with relative reward scoring to achieve superior performance, these methods often suffer from training instability that limits their practical adoption. To address this challenge, we present Group Variance Policy Optimization (GVPO). GVPO incorporates the analytical solution to KL-constrained reward maximization directly into its gradient weights, ensuring alignment with the optimal policy. The method provides intuitive physical interpretations: its gradient mirrors the mean squared error between the central distance of implicit rewards and that of actual rewards. GVPO offers two key advantages: (1) it guarantees a unique optimal solution, exactly the KL-constrained reward maximization objective, (2) it supports flexible sampling distributions that avoids on-policy and importance sampling limitations. By unifying theoretical guarantees with practical adaptability, GVPO establishes a new paradigm for reliable and versatile LLM post-training.*
2025-10-16T14:35:05
https://arxiv.org/abs/2504.19599
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1o87tnr
false
null
t3_1o87tnr
/r/LocalLLaMA/comments/1o87tnr/gvpo_group_variance_policy_optimization_for_large/
false
false
default
1
null
GLM Code Plan Integration (easy fastapi)
1
Grabbed a coding plan for GLM to mess with, and since it talks about integrating it into custom agents/tools I went ahead and knocked together an OpenAI-spec FastAPI server to make it simpler. Fully integrated thinking (none/low/medium/high), got no-thinking working nicely, JSON output, streaming and non-streaming, and a simple little client with some sample tools/things to mess with so you can see how it all works. I know the coding plan isn't for a local model (I mean, you could run 4.6 at home if you have enough oomph), but I figured others might find some use out of this local tool :).
2025-10-16T13:59:52
https://github.com/Deveraux-Parker/glm-code-api/tree/main
teachersecret
github.com
1970-01-01T00:00:00
0
{}
1o86wz5
false
null
t3_1o86wz5
/r/LocalLLaMA/comments/1o86wz5/glm_code_plan_integration_easy_fastapi/
false
false
default
1
null
NotebookLM new features
1
[removed]
2025-10-16T13:43:48
https://www.reddit.com/r/LocalLLaMA/comments/1o86iyr/notebooklm_new_features/
WorthScar2724
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o86iyr
false
null
t3_1o86iyr
/r/LocalLLaMA/comments/1o86iyr/notebooklm_new_features/
false
false
self
1
null
What MoE model sizes and capabilities are currently missing in the open weight ecosystem?
14
As someone who trains models, I’d love to know if you have specific requests for model size or capabilities you’d like to see in a (fully) open MoE model.
2025-10-16T13:41:40
https://www.reddit.com/r/LocalLLaMA/comments/1o86h5j/what_moe_model_sizes_and_capabilities_are/
eliebakk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o86h5j
false
null
t3_1o86h5j
/r/LocalLLaMA/comments/1o86h5j/what_moe_model_sizes_and_capabilities_are/
false
false
self
14
null
It took months, but we finally got AI to build and deploy real WordPress sites.
0
Hey everyone,

We’re the small team behind 10Web.io, and we just launched something we’ve been quietly obsessed with for months: Vibe for WordPress.

If you’ve played with the new wave of AI site builders (Durable, Framer AI, Lovable, etc.), you know how magical they feel… until you realize they stop at the prototype stage. No CMS. No backend. No code ownership. Basically, it’s like building a toy car you can’t drive. We wanted to fix that.

**What we built:**

Vibe for WordPress is an AI-native builder that actually ships production websites, fully integrated with WordPress, which already powers 40%+ of the internet. You describe your business in plain English, the AI builds your site, and you can refine it however you like:

* Chat with it to change layouts or copy
* Use drag-and-drop if you prefer visuals
* Or jump into the code if you’re technical

And when you hit “publish,” your site is live on a full WordPress backend, with hosting, CMS, plugins, database, everything. Not a demo. Not a sandbox. A real, working website.

**Why we built it:**

We’ve been building on WordPress for years, and while AI builders were getting popular, none of them could actually ship. We loved the speed of AI, but hated being stuck in closed systems that you can’t extend or migrate. So we tried to merge the two worlds:

* The speed of AI
* The freedom of WordPress
* The control of owning your code

Basically: AI creativity meets production power.

**What you can do:**

* Spin up a full WP site in minutes
* Recreate any existing site (just paste a URL)
* Build an ecommerce store with WooCommerce already set up
* Use our managed Google Cloud hosting or export everything — your call
* White-label or embed it via API if you run an agency or SaaS

**Who it’s for:**

Freelancers, agencies, small business owners, or anyone who’s tired of starting from a blank screen but still wants real ownership and flexibility.
*We just went live on* ***Product Hunt*** *today, so we’re around all day answering questions and collecting feedback.* Would love to hear what you think - good, bad, or brutal :D We’re genuinely trying to make AI site building useful, not just flashy.
2025-10-16T13:38:34
https://www.reddit.com/r/LocalLLaMA/comments/1o86efd/it_took_months_but_we_finally_got_ai_to_build_and/
Money_Principle6730
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o86efd
false
null
t3_1o86efd
/r/LocalLLaMA/comments/1o86efd/it_took_months_but_we_finally_got_ai_to_build_and/
false
false
self
0
null
I made a deception detector
0
I've got a pipeline that is super slow, but at the end of it I have an extracted concept, sort of. It's not exactly the input passage; it's more like its structure, and I can detect that structure on subsequent prompts. Specifically, I can detect identity-based deception, but it works for a lot of general first-person deception as well. I have other concepts as well, and it works on those too, but deception is specifically important. On one of my tests I recreated that self-preservation experiment where the model was told it was being decommissioned and there was an unprotected server with a bunch of space on it. It flagged many aspects of escape, but the output and chain of thought didn't reflect any plans of escaping. Whether that means it's aligned I don't know. It's detected before any text is generated; I'm not looking at chain of thought or output. I'm not doing any statistical work, it's not vector math or circuit building; as far as I can tell it's a new technique. I'm looking at the latent space directly, and the actual detection doesn't reduce performance at all that I can tell. The extracted concept also works between languages and, with a little legwork, between models. The extracted concept also works to correct thinking in another model. I talk like it's universal, but I've only tested with Qwen3-4B, GPT-2 and Mistral 7B. My testing within these has been a little sparse; again, my pipeline takes forever, so 4B is a nice size. I'm just some dude with a 4090, so to test bigger models I'd unfortunately need a lot more resources. In typical crank fashion I'm emailing people directly, and I'm finding that nobody cares. I'm not sure what the next step is, but I feel like this leads to a new architecture that's natively interpretable. So what do I do? I really don't want to publish my methods just to have companies take the techniques without me. Everybody's telling me to patent it, but again, I'm just some dude.
I'm pretty sure you need a whole legal team to enforce patents.
2025-10-16T13:35:32
https://www.reddit.com/r/LocalLLaMA/comments/1o86bry/i_made_a_deception_detector/
Robonglious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o86bry
false
null
t3_1o86bry
/r/LocalLLaMA/comments/1o86bry/i_made_a_deception_detector/
false
false
self
0
null
PaddleOCR-VL, is better than private models
313
https://x.com/PaddlePaddle/status/1978809999263781290?t=mcHYAF7osq3MmicjMLi0IQ&s=19
2025-10-16T13:29:48
https://www.reddit.com/gallery/1o866vl
Illustrious-Swim9663
reddit.com
1970-01-01T00:00:00
0
{}
1o866vl
false
null
t3_1o866vl
/r/LocalLLaMA/comments/1o866vl/paddleocrvl_is_better_than_private_models/
false
false
https://b.thumbs.redditm…F1EOiA8c0dyw.jpg
313
null
DGX NVLINK/RPC benchmarks
0
I would like to see some. In fact, this is probably the only unanswered question on this topic, so if you feel compelled to make Yet Another DGX Post...
2025-10-16T13:20:43
https://www.reddit.com/r/LocalLLaMA/comments/1o85zas/dgx_nvlinkrpc_benchmarks/
Secure_Reflection409
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o85zas
false
null
t3_1o85zas
/r/LocalLLaMA/comments/1o85zas/dgx_nvlinkrpc_benchmarks/
false
false
self
0
null
Can someone please explain this?
0
Got really shocked by this one, and the loop won't stop.
2025-10-16T13:18:49
https://v.redd.it/ozh7ir295hvf1
TangeloOk9486
/r/LocalLLaMA/comments/1o85xp4/can_someone_please_explain_this/
1970-01-01T00:00:00
0
{}
1o85xp4
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ozh7ir295hvf1/DASHPlaylist.mpd?a=1763342333%2CZDZjMTYyZmFkODNhYzBjMmQ5NWY3NmQwNmMxNTkxNDcwOTQ1ZGU1NzdhZWVhNWNhNDgyNWY1ZWNlZTg1NmE3OQ%3D%3D&v=1&f=sd', 'duration': 127, 'fallback_url': 'https://v.redd.it/ozh7ir295hvf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/ozh7ir295hvf1/HLSPlaylist.m3u8?a=1763342333%2CN2NhYTEwNGYzOGRjNGM4MGJmYzY2NjNlY2NhOTA3YzAwZWM4OGI5YzI2ZjY0ZjFiNGVlZGJkODkyZTJlNjc3ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ozh7ir295hvf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 590}}
t3_1o85xp4
/r/LocalLLaMA/comments/1o85xp4/can_someone_please_explain_this/
false
false
https://external-preview…2a45cd3c287650c3
0
{'enabled': False, 'images': [{'id': 'b3VyY3F1bzg1aHZmMWaO9PyKQDRBvF7jkBYBH6nDXbKE1aoqBU2cCXFawmL5', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/b3VyY3F1bzg1aHZmMWaO9PyKQDRBvF7jkBYBH6nDXbKE1aoqBU2cCXFawmL5.png?width=108&crop=smart&format=pjpg&auto=webp&s=edf640e24667686d47dd90c6b5b5a9fc92ed4ecf', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/b3VyY3F1bzg1aHZmMWaO9PyKQDRBvF7jkBYBH6nDXbKE1aoqBU2cCXFawmL5.png?width=216&crop=smart&format=pjpg&auto=webp&s=4331637ddc20f6b2ec87310d8b9cad44427fc92a', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/b3VyY3F1bzg1aHZmMWaO9PyKQDRBvF7jkBYBH6nDXbKE1aoqBU2cCXFawmL5.png?width=320&crop=smart&format=pjpg&auto=webp&s=5c85321f8957da243321e07d417792ad830b9b8b', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/b3VyY3F1bzg1aHZmMWaO9PyKQDRBvF7jkBYBH6nDXbKE1aoqBU2cCXFawmL5.png?width=640&crop=smart&format=pjpg&auto=webp&s=86d0a378b24a6f314e23b2d5b3a10a9a167f31fb', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/b3VyY3F1bzg1aHZmMWaO9PyKQDRBvF7jkBYBH6nDXbKE1aoqBU2cCXFawmL5.png?format=pjpg&auto=webp&s=592e3ed9927af0ba7d6794ef047cca7ef11d7ab6', 'width': 886}, 'variants': {}}]}
Hosting for internal GPT Question
1
I am looking to host an LLM on-prem for an organization that will serve as an internal GPT. My question is what size of model and hardware would be effective for this? The organization has around 700 employees so I would assume concurrency of around 400 would be sufficient but I would like input as hardware is not my specialty for this.
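Before picking hardware, it helps to turn the concurrency guess into a throughput target. A hedged back-of-envelope calculation follows; every number in it is an assumption to replace with your own measurements, not a recommendation:

```python
import math

# Back-of-envelope sizing: all inputs are assumptions to tune, not measurements.
employees = 700
peak_concurrency = 400        # the post's estimate (~57% of headcount, unusually high)
tok_s_per_user = 20           # comfortable streaming speed per active request
per_gpu_throughput = 1000     # hypothetical aggregate decode tok/s for one GPU at high batch

aggregate = peak_concurrency * tok_s_per_user           # total decode tok/s required
gpus_needed = math.ceil(aggregate / per_gpu_throughput)
print(aggregate, gpus_needed)
```

Note that 400 concurrent out of 700 employees is a very aggressive planning figure; for chat-style internal use, 5-15% of headcount at peak is a more common starting assumption, which would shrink the hardware bill considerably.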
2025-10-16T13:11:18
https://www.reddit.com/r/LocalLLaMA/comments/1o85rjo/hosting_for_internal_gpt_question/
lowci
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o85rjo
false
null
t3_1o85rjo
/r/LocalLLaMA/comments/1o85rjo/hosting_for_internal_gpt_question/
false
false
self
1
null
Mobile AI Agent Hackathon by Cactus, HuggingFace & Nothing
1
HuggingFace, Cactus (YC S25) and Nothing Phones are hosting an on-device mobile agent hackathon. Come spend a weekend with us, build, and win fantastic prizes:

1. Sponsored trip to San Francisco
2. Lunch with a YC Group Partner
3. Guaranteed interviews at HuggingFace, Nothing, Cactus
4. Dinner with the founders
5. HuggingFace Reachy robots
6. Nothing phones

Learn more: https://luma.com/jrec73nt

Location: London & Online
2025-10-16T13:05:06
https://www.reddit.com/r/LocalLLaMA/comments/1o85mhk/mobile_ai_agent_hackathon_by_cactus_huggingface/
Henrie_the_dreamer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o85mhk
false
null
t3_1o85mhk
/r/LocalLLaMA/comments/1o85mhk/mobile_ai_agent_hackathon_by_cactus_huggingface/
false
false
self
1
{'enabled': False, 'images': [{'id': 'GKk_jmZhJqrR-ns3RCByT_P2CwLeK3CUqlEEGbE_IxU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GKk_jmZhJqrR-ns3RCByT_P2CwLeK3CUqlEEGbE_IxU.jpeg?width=108&crop=smart&auto=webp&s=49f90b71b8537aed6f502d62e6bc608f03bfe029', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GKk_jmZhJqrR-ns3RCByT_P2CwLeK3CUqlEEGbE_IxU.jpeg?width=216&crop=smart&auto=webp&s=68f7bcd5c60b47979c9abdeadee618799b26e5f1', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/GKk_jmZhJqrR-ns3RCByT_P2CwLeK3CUqlEEGbE_IxU.jpeg?width=320&crop=smart&auto=webp&s=eedc91e12305435f6ab5d7dabb72b46ac509ae9a', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/GKk_jmZhJqrR-ns3RCByT_P2CwLeK3CUqlEEGbE_IxU.jpeg?width=640&crop=smart&auto=webp&s=c85bd09e43aec7ab18f1b6f5cdeb398324043294', 'width': 640}], 'source': {'height': 419, 'url': 'https://external-preview.redd.it/GKk_jmZhJqrR-ns3RCByT_P2CwLeK3CUqlEEGbE_IxU.jpeg?auto=webp&s=f6265b513d8d20441bfb1881bf0fec319f69c3e0', 'width': 800}, 'variants': {}}]}
Manually update to the latest version of llama.cpp in jan.ai
6
2025-10-16T13:01:16
https://v.redd.it/n74f48482hvf1
Illustrious-Swim9663
v.redd.it
1970-01-01T00:00:00
0
{}
1o85j8t
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/n74f48482hvf1/DASHPlaylist.mpd?a=1763211689%2CMGMwNTY1NDZjYzk2YmU4MGNjYzAyZDUyYzk5MTkzNzY5OWE3ZGRhNWY3YzEyYjA5YjBlMTBkOTg1Y2NlNWNhYQ%3D%3D&v=1&f=sd', 'duration': 80, 'fallback_url': 'https://v.redd.it/n74f48482hvf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/n74f48482hvf1/HLSPlaylist.m3u8?a=1763211689%2CNTRlZWM5ZmJmYzQzNzZmMjdkNTQxMWUxNzI0ODc1NWUzOWFkNDVhMzkxYmE0YjI2NmM1MTQ0NmIzYTcwODFlNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n74f48482hvf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1o85j8t
/r/LocalLLaMA/comments/1o85j8t/manually_update_to_the_latest_version_of_llamacpp/
false
false
https://external-preview…c6ad3d7c78d8e9de
6
{'enabled': False, 'images': [{'id': 'eGo4aHRuNDgyaHZmMcT0FIAE2vwYtbZG5krVldi3OGgNwi7_1mSA1BhzYxBy', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eGo4aHRuNDgyaHZmMcT0FIAE2vwYtbZG5krVldi3OGgNwi7_1mSA1BhzYxBy.png?width=108&crop=smart&format=pjpg&auto=webp&s=a1a1916e828319ad7bae5f8afc015f55f37eaf76', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eGo4aHRuNDgyaHZmMcT0FIAE2vwYtbZG5krVldi3OGgNwi7_1mSA1BhzYxBy.png?width=216&crop=smart&format=pjpg&auto=webp&s=3f79c78ff298825cadaea9b28975745ae6077df5', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/eGo4aHRuNDgyaHZmMcT0FIAE2vwYtbZG5krVldi3OGgNwi7_1mSA1BhzYxBy.png?width=320&crop=smart&format=pjpg&auto=webp&s=a44bcf660b8958a0871bd9e447afb77b692fa69d', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/eGo4aHRuNDgyaHZmMcT0FIAE2vwYtbZG5krVldi3OGgNwi7_1mSA1BhzYxBy.png?width=640&crop=smart&format=pjpg&auto=webp&s=4a00e2a6bddc8b268b1aacfefe4767ce874a0f1f', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/eGo4aHRuNDgyaHZmMcT0FIAE2vwYtbZG5krVldi3OGgNwi7_1mSA1BhzYxBy.png?width=960&crop=smart&format=pjpg&auto=webp&s=dea17636bbd395a5d1481b179902b59c7f53949f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eGo4aHRuNDgyaHZmMcT0FIAE2vwYtbZG5krVldi3OGgNwi7_1mSA1BhzYxBy.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c5361249398766fb62b6db3a0b4804deba8fae47', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/eGo4aHRuNDgyaHZmMcT0FIAE2vwYtbZG5krVldi3OGgNwi7_1mSA1BhzYxBy.png?format=pjpg&auto=webp&s=6702dd302a352b4dcf459da483c356de56938a9c', 'width': 1080}, 'variants': {}}]}
Is the DGX Spark power efficient?
0
How does it compare in power consumption? Does it get too hot? Is it a good fit for running batched LLM inference continuously over long periods?
2025-10-16T12:57:26
https://www.reddit.com/r/LocalLLaMA/comments/1o85fww/is_dgx_spark_power_efficient/
Chance-Studio-8242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o85fww
false
null
t3_1o85fww
/r/LocalLLaMA/comments/1o85fww/is_dgx_spark_power_efficient/
false
false
self
0
null
My Terminal Project
1
https://preview.redd.it/cd03zdrb1hvf1.png?width=1404&format=png&auto=webp&s=1f40a55f30719f99f3233e9890447c656a7fc261

As a developer I wanted a terminal that can catch errors and exceptions without me having to copy them and ask an AI what to do, so I decided to create one! This is a simple test I created just to showcase it, but believe me, with npm debug logs there is always a mountain of text to go through when you hit an error. It's still in the early stages, but the basics are already working:

* Connects to 7 different providers (Ollama and LM Studio included)
* Can create tabs and be used as a normal terminal, so anything you normally do will be there

So what do you guys/girls think?
2025-10-16T12:56:19
https://www.reddit.com/r/LocalLLaMA/comments/1o85ezo/my_terminal_project/
Last-Shake-9874
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o85ezo
false
null
t3_1o85ezo
/r/LocalLLaMA/comments/1o85ezo/my_terminal_project/
false
false
https://b.thumbs.redditm…4git6qJzTrLo.jpg
1
{'enabled': False, 'images': [{'id': 'rqjl4n71YeVv91gDGs73XboCWOeNQR52idaZx7k5D5Q', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/rqjl4n71YeVv91gDGs73XboCWOeNQR52idaZx7k5D5Q.png?width=108&crop=smart&auto=webp&s=d5734b3f9392c2fb123f6f2d1aec03fc5748d229', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/rqjl4n71YeVv91gDGs73XboCWOeNQR52idaZx7k5D5Q.png?width=216&crop=smart&auto=webp&s=05d1ce758ffec7e74a2541c99ac74f76e72bc39b', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/rqjl4n71YeVv91gDGs73XboCWOeNQR52idaZx7k5D5Q.png?width=320&crop=smart&auto=webp&s=d7d3a6a5042b22bbeb7ed9685951fbd72fe5d4c2', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/rqjl4n71YeVv91gDGs73XboCWOeNQR52idaZx7k5D5Q.png?width=640&crop=smart&auto=webp&s=4e62e735f73e873009d510635bfa123a1120f9e2', 'width': 640}, {'height': 577, 'url': 'https://external-preview.redd.it/rqjl4n71YeVv91gDGs73XboCWOeNQR52idaZx7k5D5Q.png?width=960&crop=smart&auto=webp&s=6ac401101c9e1a4358929b37e4ab6b40d3b1881d', 'width': 960}, {'height': 649, 'url': 'https://external-preview.redd.it/rqjl4n71YeVv91gDGs73XboCWOeNQR52idaZx7k5D5Q.png?width=1080&crop=smart&auto=webp&s=4dee1de6b722da757630745675ca30a88973d946', 'width': 1080}], 'source': {'height': 844, 'url': 'https://external-preview.redd.it/rqjl4n71YeVv91gDGs73XboCWOeNQR52idaZx7k5D5Q.png?auto=webp&s=4dc1e84058ee1d97cc7d1fc38a0d3764026e1db3', 'width': 1404}, 'variants': {}}]}
Qwen3-30B-A3B 2507 Instruct vs Thinking
9
Got a OnePlus 12 phone with 24GB RAM. I want to run Qwen3-30B-A3B on it, but I find that there are two versions, Instruct and Thinking: [https://www.reddit.com/r/LocalLLaMA/comments/1mb9uy8/qwenqwen330ba3binstruct2507\_hugging\_face/](https://www.reddit.com/r/LocalLLaMA/comments/1mb9uy8/qwenqwen330ba3binstruct2507_hugging_face/) [https://www.reddit.com/r/LocalLLaMA/comments/1md8rxu/qwenqwen330ba3bthinking2507\_hugging\_face/](https://www.reddit.com/r/LocalLLaMA/comments/1md8rxu/qwenqwen330ba3bthinking2507_hugging_face/) According to the charts published in the above links:

|Type|Instruct|Thinking|
|:-|:-|:-|
|GPQA|70.4|73.4|
|AIME25|61.3|85.0|
|LiveCodeBench V6|43.2|66.0|
|Arena-Hard V2|69.0|56.0|
|BFCL v3|65.1|72.4|

My understanding is that GPQA is general knowledge, AIME25 is math, LiveCodeBench is coding, Arena-Hard is a predictor of how well a model will perform on LMArena, and BFCL is about tool calling. I want to know which one I should use. Since I am not going to do math, coding or tool calling on my phone and mostly want to query it for knowledge, maybe Instruct is better for my use case: GPQA is not much different, and the higher Arena-Hard score might mean Instruct follows instructions better, meaning fewer frustrations for me.
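To make the trade-off concrete, here are the posted numbers as data, with a helper that picks the stronger variant per benchmark (assumption: the post's "BCFLv3" refers to BFCL v3, the tool-calling benchmark):

```python
# Benchmark numbers from the post, as data.
# Assumption: "BCFLv3" in the post is BFCL v3 (tool calling).
scores = {
    "GPQA":             {"instruct": 70.4, "thinking": 73.4},
    "AIME25":           {"instruct": 61.3, "thinking": 85.0},
    "LiveCodeBench V6": {"instruct": 43.2, "thinking": 66.0},
    "Arena-Hard V2":    {"instruct": 69.0, "thinking": 56.0},
    "BFCL v3":          {"instruct": 65.1, "thinking": 72.4},
}

def winner(bench):
    """Return the variant with the higher score on a benchmark."""
    s = scores[bench]
    return max(s, key=s.get)

# For on-phone knowledge queries, only GPQA and Arena-Hard really matter:
gpqa_gap = scores["GPQA"]["thinking"] - scores["GPQA"]["instruct"]
print(winner("Arena-Hard V2"), winner("GPQA"), round(gpqa_gap, 1))
```

Instruct wins the one benchmark tied to instruction following while Thinking's GPQA edge is only ~3 points, which supports the "use Instruct for knowledge queries" reading (and Instruct also skips the latency cost of generating reasoning tokens on a phone).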
2025-10-16T12:40:23
https://www.reddit.com/r/LocalLLaMA/comments/1o8529x/qwen330ba3b_2507_instruct_vs_thinking/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8529x
false
null
t3_1o8529x
/r/LocalLLaMA/comments/1o8529x/qwen330ba3b_2507_instruct_vs_thinking/
false
false
self
9
null
Poll: What do you (or would you) use a Strix Halo / AI-PC for?
0
Hey guys, I'm contemplating getting one of these AI PCs, especially the Strix Halo, but I just want to know how to get the most value from it, so I figured a poll could be helpful. I hope it helps you too! Pls vote or share! Thank you [View Poll](https://www.reddit.com/poll/1o84q8z)
2025-10-16T12:24:35
https://www.reddit.com/r/LocalLLaMA/comments/1o84q8z/poll_what_do_you_or_would_you_use_a_strix_halo/
IntroductionSouth513
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o84q8z
false
null
t3_1o84q8z
/r/LocalLLaMA/comments/1o84q8z/poll_what_do_you_or_would_you_use_a_strix_halo/
false
false
self
0
null
Tested Haiku 4.5: it is fast but can't complete basic tasks other models can
0
So I just tested Anthropic's new Haiku 4.5 model. They're marketing it as comparable to Sonnet 4, but I'm not seeing it. Yes, it's fast, but only that, and at $5 per 1M tokens? For context, GPT-5 mini has handled most of my speed-critical tasks without issues, and when I need something different I've been using Kimi K2 or GLM 4.6. They've all been solid for what I need. Here's the thing though: I went back through my archive of tasks that all these other models completed successfully before, and Haiku 4.5 just couldn't do them now. Tried multiple times with different prompts and still nothing... Maybe I'm missing something, but right now I don't see where this model has its place when there are so many better and cheaper options out there. Has anyone else tried it? Am I the only one disappointed, or is there a specific use case where it actually shines?
2025-10-16T12:15:21
https://www.reddit.com/r/LocalLLaMA/comments/1o84j5v/tested_haiku_45_it_is_fast_but_cant_complete/
DecisionLow2640
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o84j5v
false
null
t3_1o84j5v
/r/LocalLLaMA/comments/1o84j5v/tested_haiku_45_it_is_fast_but_cant_complete/
false
false
self
0
null
Qwen3-30B-A3B FP8 on RTX Pro 6000 blackwell with vllm
96
Power limit set to 450w **Short Context (1K tokens):** * Single user: 88.4 tok/s * 10 concurrent users: **652 tok/s** throughput * Latency: 5.65s → 7.65s (1→10 users) **Long Context (256K tokens):** * Single user: 22.0 tok/s * 10 concurrent users: **115.5 tok/s** throughput * Latency: 22.7s → 43.2s (1→10 users) * Still able to handle 10 concurrent requests! **Sweet Spot (32K-64K context):** * 64K @ 10 users: 311 tok/s total, 31 tok/s per user * 32K @ 10 users: 413 tok/s total, 41 tok/s per user * Best balance of context length and throughput FP8 quantization really shines here - getting 115 tok/s aggregate at 256K context with 10 users is wild, even with the power constraint. https://preview.redd.it/x9t4ttsvrgvf1.png?width=7590&format=png&auto=webp&s=0c86bf3cc42032a595ee4d02b2c78986da150836
2025-10-16T12:04:11
https://www.reddit.com/r/LocalLLaMA/comments/1o84b36/qwen330ba3b_fp8_on_rtx_pro_6000_blackwell_with/
notaDestroyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o84b36
false
null
t3_1o84b36
/r/LocalLLaMA/comments/1o84b36/qwen330ba3b_fp8_on_rtx_pro_6000_blackwell_with/
false
false
https://b.thumbs.redditm…2Yn4uKrDPRjI.jpg
96
null
Has anyone benchmarked local LLMs on the new M5 MacBook Pro yet?
0
Apple claims 6.4x faster token performance and 2x SSD speed on the M5. Curious if anyone's tested real-world inference times or model loading speeds, especially for 7B-13B models like LLaMA or Mistral. Is the jump from M4/M3 worth it for local workflows? Would love to hear your setup and results.
2025-10-16T11:39:25
https://www.reddit.com/r/LocalLLaMA/comments/1o83tfh/has_anyone_benchmarked_local_llms_on_the_new_m5/
Street-Lie-2584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o83tfh
false
null
t3_1o83tfh
/r/LocalLLaMA/comments/1o83tfh/has_anyone_benchmarked_local_llms_on_the_new_m5/
false
false
self
0
null
Any sdk/library equivalent to vercel aisdk for python
1
I was searching for an SDK/library that works like Vercel's AI SDK but for Python. I don't want to use LangChain or the OpenAI SDK; my preference is for code as clean as the AI SDK's.
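For flavor, here is a minimal sketch of what an AI-SDK-style call shape could look like in plain Python. `generate_text`, `Result`, and the `EchoModel` stand-in are all hypothetical names for illustration, not an existing library; a real adapter would wrap an HTTP client for whatever provider you use.

```python
from dataclasses import dataclass


@dataclass
class Result:
    """What a generate_text call returns: the text plus usage metadata."""
    text: str
    usage: dict


class EchoModel:
    """Stand-in provider adapter; a real one would call a model API."""
    name = "echo"

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def generate_text(model, prompt: str) -> Result:
    """aisdk-style entry point: pass any model adapter plus a prompt."""
    text = model.complete(prompt)
    return Result(text=text, usage={"prompt_chars": len(prompt)})


if __name__ == "__main__":
    r = generate_text(model=EchoModel(), prompt="hi")
    print(r.text)  # → echo: hi
```

The appeal of this shape is that swapping providers only means swapping the adapter object, which is roughly what the Vercel SDK does with its model argument.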
2025-10-16T11:36:51
https://www.reddit.com/r/LocalLLaMA/comments/1o83ro8/any_sdklibrary_equivalent_to_vercel_aisdk_fo/
DataScientia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o83ro8
false
null
t3_1o83ro8
/r/LocalLLaMA/comments/1o83ro8/any_sdklibrary_equivalent_to_vercel_aisdk_fo/
false
false
self
1
null
My local LLM app was insta-banned by Steam for "adult content" despite me declaring ZERO adult content in the survey. Here's the full story for fellow devs.
1
[removed]
2025-10-16T11:34:19
https://www.reddit.com/r/LocalLLaMA/comments/1o83pxa/my_local_llm_app_was_instabanned_by_steam_for/
Cultural_Ad896
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o83pxa
false
null
t3_1o83pxa
/r/LocalLLaMA/comments/1o83pxa/my_local_llm_app_was_instabanned_by_steam_for/
false
false
self
1
null
guys not gonna lie, right now the glm 4.6 3 usd plan is the best in the coding world, im running the api in 4 terminals for 4 projects and its working great
0
Claude is the vampire; go to the good guys. GLM 4.6 is so easy to set up with Crush CLI.
2025-10-16T11:19:16
https://www.reddit.com/r/LocalLLaMA/comments/1o83fxx/guys_not_goona_lie_but_right_now_glm_46_3_usd/
Select_Dream634
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o83fxx
false
null
t3_1o83fxx
/r/LocalLLaMA/comments/1o83fxx/guys_not_goona_lie_but_right_now_glm_46_3_usd/
false
false
self
0
null
Qwen3 Next 80b FP8 with vllm on Pro 6000 Blackwell
33
GPU: NVIDIA RTX Pro 6000 Blackwell Edition (96GB VRAM)

- Driver: 580.95.05
- CUDA: 13.0
- Compute Capability: 9.0 (Blackwell)

https://preview.redd.it/tf7qkz8ligvf1.png?width=7569&format=png&auto=webp&s=48977ab9548affc46738478260262bbf19184782

Software:

- vLLM: v0.11.1rc2.dev72+gf7d318de2 (nightly)
- Attention Backend: **FlashInfer** (with JIT autotuning)
- Quantization: FP8 W8A8
- Python: 3.12.12
- PyTorch with CUDA 12.4 backend (forward compatible with CUDA 13.0 driver)
2025-10-16T11:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1o83b2i/qwen3_next_80b_fp8_with_vllm_on_pro_6000_blackwell/
notaDestroyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o83b2i
false
null
t3_1o83b2i
/r/LocalLLaMA/comments/1o83b2i/qwen3_next_80b_fp8_with_vllm_on_pro_6000_blackwell/
false
false
https://b.thumbs.redditm…t14Sr270bH7A.jpg
33
null
DGX Spark with vLLM vs. Others -- any benchmarks?
2
It seems like DGX Spark has been declared DOA and RIP. I am wondering if we have any results that pit DGX Spark with batching against others (particularly, AMD and Macs) that don't seem to be able to take advantage of huge performance boosts that vLLM offers (even on single gpu).
2025-10-16T11:04:59
https://www.reddit.com/r/LocalLLaMA/comments/1o836nx/dgx_spark_with_vllm_vs_others_any_bechmarks/
Chance-Studio-8242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o836nx
false
null
t3_1o836nx
/r/LocalLLaMA/comments/1o836nx/dgx_spark_with_vllm_vs_others_any_bechmarks/
false
false
self
2
null
Is it worth adding an rtx 4060 (8gb) to my current rtx 5080(16gb) setup?
0
My setup right now:

Rtx 5080
Ryzen 5 7600X
2x16GB DDR5 6000MHz
Corsair RM850x 80+ Gold
Asus B650E Max Gaming WiFi
Case: Montech AIR 903 MAX

I've been messing around with LLMs on Ollama and am a complete beginner so far. Would it be a good idea to get 8GB more VRAM for a total of 24GB? OR wait for the rumored 5080 Super (24GB?) instead of buying an RTX 4060, and use that money to get the new GPU instead. OR I don't really need it and I'm wasting money lol.

I don't really have any insane uses for the LLMs, just personal use. A small benefit on the side would be PhysX support, which isn't a big deal for me but it's cool.
2025-10-16T11:02:49
https://www.reddit.com/r/LocalLLaMA/comments/1o8359h/is_it_worth_adding_an_rtx_4060_8gb_to_my_current/
Substantial-Maybe358
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o8359h
false
null
t3_1o8359h
/r/LocalLLaMA/comments/1o8359h/is_it_worth_adding_an_rtx_4060_8gb_to_my_current/
false
false
self
0
null
Looking for a good agentic coding model that fits into Apple M1 Max, 32 GB
5
I am a huge fan of agentic coding using a CLI (i.e., Gemini CLI). I want to create a local setup on an Apple M1 Max 32 GB providing a similar experience. Currently, my best setup is Opencode + llama.cpp + gpt-oss-20b. I have tried other models from HF marked as compatible with my hardware, but most of them failed to start:

    common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
    ggml_metal_synchronize: error: command buffer 0 failed with status 5
    error: Insufficient Memory (00000008:kIOGPUCommandBufferCallbackErrorOutOfMemory)
    /private/tmp/llama.cpp-20251013-5280-4lte0l/ggml/src/ggml-metal/ggml-metal-context.m:241: fatal error

Any recommendations regarding the LLM and fine-tuning my setup are very welcome!
2025-10-16T10:55:14
https://i.redd.it/236l5p6ofgvf1.png
ThingRexCom
i.redd.it
1970-01-01T00:00:00
0
{}
1o8309h
false
null
t3_1o8309h
/r/LocalLLaMA/comments/1o8309h/looking_for_a_good_agentic_coding_model_that_fits/
false
false
default
5
{'enabled': True, 'images': [{'id': '236l5p6ofgvf1', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/236l5p6ofgvf1.png?width=108&crop=smart&auto=webp&s=a2b4f43dbde668f9046289a19340029e304f90cc', 'width': 108}, {'height': 71, 'url': 'https://preview.redd.it/236l5p6ofgvf1.png?width=216&crop=smart&auto=webp&s=8f1c78039a592815911636bd83b1f8cc40497813', 'width': 216}, {'height': 106, 'url': 'https://preview.redd.it/236l5p6ofgvf1.png?width=320&crop=smart&auto=webp&s=a22f0140ddd0dd0e0f4483293807ac08eed8ed0d', 'width': 320}, {'height': 212, 'url': 'https://preview.redd.it/236l5p6ofgvf1.png?width=640&crop=smart&auto=webp&s=a063cc9464e783ed553ea182282f276ea5023727', 'width': 640}, {'height': 318, 'url': 'https://preview.redd.it/236l5p6ofgvf1.png?width=960&crop=smart&auto=webp&s=bad1bead77945234df09e323f0f97ee056f24bc4', 'width': 960}], 'source': {'height': 342, 'url': 'https://preview.redd.it/236l5p6ofgvf1.png?auto=webp&s=7c2f26cccb975d21c4f986d717dd8333d03c8beb', 'width': 1032}, 'variants': {}}]}
I got tired of OpenAI dependency. Built a multi-LLM control center instead.
0
I run an automation agency, and one recurring pain point with clients is **vendor lock-in**. Everyone builds around ChatGPT, then Claude drops a stronger reasoning model or Gemini smokes it on code—and you can’t easily switch. The friction is too high, and teams stay stuck. openRouter is too risky for many. That dependency problem bugged me enough to experiment with a different setup: * A chat interface that **routes tasks to the most suitable LLM automatically** (speed → Sonnet 3.5, deep reasoning → Opus, vision → Gemini, etc.) or you pick your favorite one. * Add in support for **self-hosted models** (for people who want EU hosting, GDPR compliance, or just full control). * And instead of just standard chat, connect directly into **500+ tools via MCP** and trigger n8n workflows. So a prompt like: > Some takeaways from building this: * **Routing is harder than it looks**: benchmarks are one thing, but real-world tasks require heuristics (speed vs. depth vs. cost vs. compliance). * **MCP is underrated**: once you connect workflows directly, LLMs stop feeling like isolated toys and start acting like actual assistants. * **GDPR/EU hosting matters**: lots of European companies are hesitant to push client data through US-only APIs. We built ours over 6 months with a distributed team (Egypt, Estonia, South Korea, Germany). Surprisingly, total build cost was only about $1k thanks to open-source infra + AI-assisted dev. I’d love to hear: * Has anyone else here tackled **multi-LLM routing**? * How do you decide which model to use for which task? * For those who run local models: do you combine them with API models, or go pure local? PS: I’m Paul, working on *keinsaas Navigator*. We’ll open a small beta next month: free credits, pay-as-you-go, no subscriptions. [**You can sign up for access here**](https://beta.keinsaas.com/).
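The routing idea described above can be sketched as a simple keyword heuristic. Everything here is an illustrative assumption — the model names, the keyword rules, and the compliance-first short-circuit are not the product's actual logic — but it shows why routing is mostly heuristics rather than benchmarks:

```python
# Toy multi-LLM router: compliance constraint first, then capability keywords,
# then a cheap/fast fallback. All names and rules are hypothetical.

ROUTES = [
    ({"prove", "derive", "step by step", "reasoning"}, "deep-reasoning-model"),
    ({"code", "function", "bug", "refactor"}, "code-model"),
    ({"image", "screenshot", "diagram"}, "vision-model"),
]


def route(prompt: str, needs_eu_hosting: bool = False) -> str:
    if needs_eu_hosting:
        return "self-hosted-model"  # compliance wins over raw capability
    text = prompt.lower()
    for keywords, model in ROUTES:
        if any(k in text for k in keywords):
            return model
    return "fast-default-model"  # speed/cost fallback for everything else


if __name__ == "__main__":
    print(route("Refactor this function to remove the bug"))      # → code-model
    print(route("Summarise this memo", needs_eu_hosting=True))    # → self-hosted-model
```

In practice the hard part is exactly what the post says: weighing speed vs. depth vs. cost vs. compliance needs per request, which keyword matching alone cannot capture.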
2025-10-16T10:31:54
https://www.reddit.com/r/LocalLLaMA/comments/1o82lqp/i_got_tired_of_openai_dependency_built_a_multillm/
zakjaquejeobaum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o82lqp
false
null
t3_1o82lqp
/r/LocalLLaMA/comments/1o82lqp/i_got_tired_of_openai_dependency_built_a_multillm/
false
false
self
0
{'enabled': False, 'images': [{'id': 'sYct0CcKZ0MHl9L-5UUQhNxRZrMKqkcHOAXE8aeSHog', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/sYct0CcKZ0MHl9L-5UUQhNxRZrMKqkcHOAXE8aeSHog.png?width=108&crop=smart&auto=webp&s=768c5672b23a7b1d7f2603678023f2655a0774bf', 'width': 108}, {'height': 95, 'url': 'https://external-preview.redd.it/sYct0CcKZ0MHl9L-5UUQhNxRZrMKqkcHOAXE8aeSHog.png?width=216&crop=smart&auto=webp&s=177f80beb21e71f6993021cb197b7461fae45e0e', 'width': 216}, {'height': 140, 'url': 'https://external-preview.redd.it/sYct0CcKZ0MHl9L-5UUQhNxRZrMKqkcHOAXE8aeSHog.png?width=320&crop=smart&auto=webp&s=23cdfea7d921c185581bbb380a83436de9c4e7d2', 'width': 320}, {'height': 281, 'url': 'https://external-preview.redd.it/sYct0CcKZ0MHl9L-5UUQhNxRZrMKqkcHOAXE8aeSHog.png?width=640&crop=smart&auto=webp&s=a37116ba0c96536cfc1ae2ff748b2bb06c3430ae', 'width': 640}], 'source': {'height': 294, 'url': 'https://external-preview.redd.it/sYct0CcKZ0MHl9L-5UUQhNxRZrMKqkcHOAXE8aeSHog.png?auto=webp&s=9edb8b720c6d7ec7e520dd0eae9828c4df51069e', 'width': 668}, 'variants': {}}]}
Thesis on AI acceleration — would love your advice!
1
**Hey everyone!** 👋 I'm an Electrical and Electronics Engineering student from Greece, just starting my thesis on "Acceleration and Evaluation of Transformer Models on Neural Processing Units (NPUs)". It's my first time working on something like this, so I'd really appreciate any tips, experiences, or recommendations from people who've done model optimization or hardware benchmarking before. Any advice on tools, resources, or just how to get started would mean a lot. Thanks so much, and hope you're having an awesome day! 😊
2025-10-16T10:31:34
https://www.reddit.com/r/LocalLLaMA/comments/1o82lk9/thesis_on_ai_acceleration_would_love_your_advice/
Beneficial_Air3381
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o82lk9
false
null
t3_1o82lk9
/r/LocalLLaMA/comments/1o82lk9/thesis_on_ai_acceleration_would_love_your_advice/
false
false
self
1
null
NVIDIA DGX Spark – A Non-Sponsored Review (Strix Halo Comparison, Pros & Cons)
65
**NVIDIA DGX Spark – A Non-Sponsored Review (Strix Halo Comparison, Pros & Cons)** [https://www.youtube.com/watch?v=Pww8rIzr1pg](https://www.youtube.com/watch?v=Pww8rIzr1pg)
2025-10-16T10:30:22
https://www.reddit.com/r/LocalLLaMA/comments/1o82kta/nvidia_dgx_spark_a_nonsponsored_review_strix_halo/
Corylus-Core
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o82kta
false
null
t3_1o82kta
/r/LocalLLaMA/comments/1o82kta/nvidia_dgx_spark_a_nonsponsored_review_strix_halo/
false
false
self
65
{'enabled': False, 'images': [{'id': 'Vy-W-D7bZJQeTGkCvJFpbecaeVVPF9uwlOEJ6cLYl10', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Vy-W-D7bZJQeTGkCvJFpbecaeVVPF9uwlOEJ6cLYl10.jpeg?width=108&crop=smart&auto=webp&s=f70aaa879be85f3a328d22eca1a7339408898a68', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Vy-W-D7bZJQeTGkCvJFpbecaeVVPF9uwlOEJ6cLYl10.jpeg?width=216&crop=smart&auto=webp&s=f5a88ce11f5f16ec67ff1d66e6856406ea78e113', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Vy-W-D7bZJQeTGkCvJFpbecaeVVPF9uwlOEJ6cLYl10.jpeg?width=320&crop=smart&auto=webp&s=8c021647a23d9c3051412f54b4911f8abde9a055', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Vy-W-D7bZJQeTGkCvJFpbecaeVVPF9uwlOEJ6cLYl10.jpeg?auto=webp&s=b873a235384e12205640d05105c22fa22212ce8d', 'width': 480}, 'variants': {}}]}
Google C2S-Scale 27B (based on Gemma) built with Yale generated a novel hypothesis about cancer cellular behavior - Model + resources are now on Hugging Face and GitHub
208
Blog post: How a Gemma model helped discover a new potential cancer therapy pathway - We’re launching a new 27 billion parameter foundation model for single-cell analysis built on the Gemma family of open models.: [https://blog.google/technology/ai/google-gemma-ai-cancer-therapy-discovery/](https://blog.google/technology/ai/google-gemma-ai-cancer-therapy-discovery/) Hugging Face: [https://huggingface.co/vandijklab/C2S-Scale-Gemma-2-27B](https://huggingface.co/vandijklab/C2S-Scale-Gemma-2-27B) Scientific preprint on bioRxiv: [https://www.biorxiv.org/content/10.1101/2025.04.14.648850v2](https://www.biorxiv.org/content/10.1101/2025.04.14.648850v2) Code on GitHub: [https://github.com/vandijklab/cell2sentence](https://github.com/vandijklab/cell2sentence)
2025-10-16T09:41:04
https://www.reddit.com/gallery/1o81rvs
Nunki08
reddit.com
1970-01-01T00:00:00
0
{}
1o81rvs
false
null
t3_1o81rvs
/r/LocalLLaMA/comments/1o81rvs/google_c2sscale_27b_based_on_gemma_built_with/
false
false
https://b.thumbs.redditm…S7YxS0LFbuxA.jpg
208
null
Claude Haiku 4.5 vs. Chinese models: Is it better?
7
I noticed that Claude just released their latest model — the Claude Haiku 4.5, which scored 73.3% on SWE-bench verified. This is really impressive, considering it’s the beginner model in the Claude series. Claude has always been strong in coding, but its previous models have also been quite expensive. Now they have Haiku 4.5 — a cheaper option that still delivers solid performance. Models like Qwen 3 Coder, GLM 4.6, KIMI K2, and DeepSeek V3.2 are undoubtedly its main competitors. I haven’t had the chance to try Haiku 4.5 in depth yet. For those who have, what’s your first impression? How does it perform compared to these cheap Chinese models?
2025-10-16T09:13:54
https://www.reddit.com/r/LocalLLaMA/comments/1o81cpp/claude_haiku_45_vs_chinese_models_is_it_better/
hemokwang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o81cpp
false
null
t3_1o81cpp
/r/LocalLLaMA/comments/1o81cpp/claude_haiku_45_vs_chinese_models_is_it_better/
false
false
self
7
null
Anyone found an Open Source Voice Changer (not a Voice Cloner like VibeVoice or Chatterbox)?
7
Hi, ElevenLabs has a voice changer but I haven't found anything open source where I can record myself and then just convert that recording to a new voice.

What I am after is emotion and rhythm. It takes a long time to create dialogue with the TTS models, including voice cloning, and it might take me 5-20 tries before I am happy with one line of voice. Creating dialogue becomes very difficult. However, combining voice cloning (reference audio) with a prerecorded voice recording and changing the voice would be a game changer.

Has anyone run into something that can do this?
2025-10-16T08:51:18
https://www.reddit.com/r/LocalLLaMA/comments/1o810id/anyone_found_a_open_source_voice_changer_not/
quantier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o810id
false
null
t3_1o810id
/r/LocalLLaMA/comments/1o810id/anyone_found_a_open_source_voice_changer_not/
false
false
self
7
null
Temp benchmarks about DGX Spark
0
So hi everybody. I work at a university in Italy, and for our research projects we were considering buying a DGX Spark. We've seen the benchmarks about speed (expected, due to the low bandwidth), but I cannot seem to find any benchmark whatsoever about how hot it gets and how much thermal throttling there is under heavy, sustained workloads. Do you happen to have any? Thank you
2025-10-16T08:20:51
https://www.reddit.com/r/LocalLLaMA/comments/1o80kdj/temp_benchmarks_about_dgx_spark/
MeGuaZy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o80kdj
false
null
t3_1o80kdj
/r/LocalLLaMA/comments/1o80kdj/temp_benchmarks_about_dgx_spark/
false
false
self
0
null
How can I implement key frame selection using object detection and tracking for a Vision-Language Model on an edge device?
3
Hi everyone, I'm currently working on a project that uses a Vision-Language Model (VLM) for risk monitoring on the edge. I want to integrate key frame selection to reduce redundant video frames before sending them to the VLM for reasoning. My idea is to use object detection and object tracking to identify frames that contain significant changes or new objects (e.g., anomaly events). I have a few questions: 1. What would be the best approach or algorithm for key frame selection in this kind of setup? 2. Can object detection and tracking (e.g., YOLOv8, SORT, DeepSORT, ByteTrack) run efficiently on edge devices like a Raspberry Pi 5 with an AI HAT (26 TOPS)? 3. Are there any optimized lightweight models or frameworks you'd recommend for running this pipeline in real-time? Any practical insights, papers, or example projects would be greatly appreciated. Thanks!
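One simple way to frame the key-frame idea in question 1: keep a frame only when the set of tracked object IDs changes versus the last kept frame. Below is a minimal pure-Python sketch of that selection logic; in a real pipeline the per-frame ID sets would come from a detector + tracker (e.g. YOLOv8 + ByteTrack), and the threshold is an assumption to tune:

```python
def select_key_frames(frames_tracks, min_new=1):
    """frames_tracks: list of sets of track IDs, one set per frame.
    Returns indices of frames worth forwarding to the VLM."""
    key_frames = []
    last_ids = set()
    for i, ids in enumerate(frames_tracks):
        new_ids = ids - last_ids   # objects that appeared since the last key frame
        gone_ids = last_ids - ids  # objects that disappeared
        if len(new_ids) >= min_new or gone_ids:
            key_frames.append(i)
            last_ids = set(ids)
    return key_frames


if __name__ == "__main__":
    # Frames 0-5: object 1 appears, object 2 joins, 1 leaves, 3 joins.
    tracks = [{1}, {1}, {1, 2}, {1, 2}, {2}, {2, 3}]
    print(select_key_frames(tracks))  # → [0, 2, 4, 5]
```

This kind of set-difference check is cheap enough for a Pi 5, since the expensive part (detection/tracking) would run on the AI HAT and only the tiny ID bookkeeping runs on the CPU.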
2025-10-16T08:02:26
https://www.reddit.com/r/LocalLLaMA/comments/1o80ack/how_can_i_implement_key_frame_selection_using/
Wraithraisrr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o80ack
false
null
t3_1o80ack
/r/LocalLLaMA/comments/1o80ack/how_can_i_implement_key_frame_selection_using/
false
false
self
3
null
I fine-tuned Qwen3-VL (4B & 8B) on a free Colab instance using TRL (SFT and GRPO)!
34
I've created a couple of notebooks that work for free on Colab (T4 GPU) to fine-tune the new Qwen3-VL small dense vision-language models (4B and 8B). Both the Instruct and Thinking variants are supported. They use **TRL**, which handles most of the training complexity so you can focus entirely on the specific task you want to fine-tune for.

* **SFT notebook:** fine-tunes with a dataset to refine the model's response style: [https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft\_qwen\_vl.ipynb](https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_qwen_vl.ipynb)
* **GRPO notebook:** includes two reward functions to make the non-reasoning model learn to reason ([https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo\_qwen3\_vl.ipynb](https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_qwen3_vl.ipynb)):
  1. A tag-based reward that checks for `<think>` and `<answer>` sections.
  2. A length-based reward that discourages overthinking and checks correctness.

Both notebooks can run on a free Colab instance, but can also be scaled up for more advanced setups. They can also be accessed here: [https://github.com/huggingface/trl/tree/main/examples/notebooks](https://github.com/huggingface/trl/tree/main/examples/notebooks)

Feedback and experiments are welcome!!
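For readers new to GRPO, here is a rough sketch of what those two reward ideas look like as plain functions. The exact scoring and thresholds are my own assumptions, not the notebook's implementation; TRL's `GRPOTrainer` accepts arbitrary callables like these as reward functions.

```python
import re

# Well-formed output: a <think> section followed by an <answer> section.
TAG_PATTERN = re.compile(r"<think>.*?</think>\s*<answer>.*?</answer>", re.DOTALL)


def tag_reward(completion: str) -> float:
    """Reward 1: 1.0 iff the completion uses the <think>/<answer> structure."""
    return 1.0 if TAG_PATTERN.search(completion) else 0.0


def length_reward(completion: str, answer: str, max_think_chars: int = 400) -> float:
    """Reward 2: correctness first, with a penalty for overlong thinking."""
    m = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if not m or m.group(1).strip() != answer:
        return 0.0  # wrong or missing answer earns nothing
    think = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
    think_len = len(think.group(1)) if think else 0
    return 1.0 if think_len <= max_think_chars else 0.5  # discourage overthinking


if __name__ == "__main__":
    out = "<think>2+2 is 4</think><answer>4</answer>"
    print(tag_reward(out), length_reward(out, "4"))  # → 1.0 1.0
```

GRPO then compares these rewards across a group of sampled completions per prompt, so even coarse 0/0.5/1 scores like these give a usable training signal.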
2025-10-16T07:58:52
https://www.reddit.com/r/LocalLLaMA/comments/1o808av/i_finetuned_qwen3vl_4b_8b_on_a_free_colab/
External-Rub5414
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o808av
false
null
t3_1o808av
/r/LocalLLaMA/comments/1o808av/i_finetuned_qwen3vl_4b_8b_on_a_free_colab/
false
false
self
34
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
PSA: Scam ads on reddit
0
I just came across an obvious investment scam ad via the gamedev subreddit, though I remember seeing versions of it more commonly here as well. It links to a site calling itself (this time) fuentoro.ai (though it doesn't even have an actual .ai address, because that's too expensive for the scammers, and the name is probably also made up), trying to wrestle people out of money with some 'cryptocurrency and AI' investment scheme and rates of return that are blatantly too good to be true: I'm talking 32x monthly, which compounds to roughly a 1,152,000,000,000,000,000x yearly return. Really it's just going to take your money and run.

Two domains involved with the scam are spain-time.dailyaitech.digital and heuces04.com. The first is a phishing site impersonating (at the time) El País, filled with fake AI-generated news articles, one of which is a thinly veiled promotion of the second; the second is an 'investment platform', again filled with AI-generated drivel to make you 'invest', i.e. throw away your money to these criminals. Another giveaway is that every link in the article leads to the scam site, even those that ostensibly lead to other articles.

What's happening with Reddit's vetting of advertisements that this is getting through? It takes me two seconds to realize this is a false promise. It's not just Reddit; a couple of news sites have also been fooled, copying the trend and using 'AI' to generate their articles. This might become troublesome: with it getting harder and harder to recognise AI content, it might become much easier to fool people with fake investment scams behind this veneer of professionalism.

Since Reddit's reporting system only allows reporting content they're not being paid to host, it doesn't let people report specific ads. This one was disguised as a Reddit post, but it's not always visible, so I missed out on linking to it.

If anyone comes across one of these, could you add a (non-clickable) link? We should be reporting this garbage. It's crazy to think a mainstream site is literally promoting investment fraud.

Anyway, the number one rule continues to apply: if someone's proposing an investment and you can't understand how it could become very successful, assume any promised rate much above 10% is a lie. If it's an indirect investment, all rates much above 10%, if promised outright, are a lie.
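For the arithmetic behind that "too good to be true" number: a promised 32x monthly return compounds to 32^12 = 2^60 per year, on the order of 10^18. A quick check:

```python
# Compounding a 32x monthly multiple over 12 months.
monthly_multiple = 32
yearly_multiple = monthly_multiple ** 12  # 32^12 == 2^60
print(f"{yearly_multiple:.3e}")           # → 1.153e+18
```

Any "guaranteed" figure remotely in this range is a mathematical impossibility, which is the fastest tell for this class of scam.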
2025-10-16T07:39:51
https://www.reddit.com/r/LocalLLaMA/comments/1o7zy3b/psa_scam_ads_on_reddit/
Aphid_red
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7zy3b
false
null
t3_1o7zy3b
/r/LocalLLaMA/comments/1o7zy3b/psa_scam_ads_on_reddit/
false
false
self
0
null
Anyone working on English repo of Xiaozhi
1
Hi, I've been experimenting with this repo and it seems very nicely done! But it's mostly in Chinese, and I was hoping someone is working on an English fork of it, or can recommend a similar project.

Client side: https://github.com/78/xiaozhi-esp32
Server side: https://github.com/xinnan-tech/xiaozhi-esp32-server
2025-10-16T07:37:54
https://www.reddit.com/r/LocalLLaMA/comments/1o7zx2b/anyone_working_on_english_repo_of_xiaozhi/
Plane_Ad9568
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7zx2b
false
null
t3_1o7zx2b
/r/LocalLLaMA/comments/1o7zx2b/anyone_working_on_english_repo_of_xiaozhi/
false
false
self
1
{'enabled': False, 'images': [{'id': 'bgr5bVK9Erykr7TZb-1HM7jsJsusM6uHHrCvkBtLSmM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bgr5bVK9Erykr7TZb-1HM7jsJsusM6uHHrCvkBtLSmM.png?width=108&crop=smart&auto=webp&s=72d34b2757a01d0d6e2e758ee408bf5f2bfc9ec1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bgr5bVK9Erykr7TZb-1HM7jsJsusM6uHHrCvkBtLSmM.png?width=216&crop=smart&auto=webp&s=7b89372e14ed4ade05c1586a5ec9eba3978b8948', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bgr5bVK9Erykr7TZb-1HM7jsJsusM6uHHrCvkBtLSmM.png?width=320&crop=smart&auto=webp&s=9a118ad4b60d7959b48fd4dc5e8b41f461b2312f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bgr5bVK9Erykr7TZb-1HM7jsJsusM6uHHrCvkBtLSmM.png?width=640&crop=smart&auto=webp&s=691fdb8d66460097d928a85e78550e094c8a5e78', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bgr5bVK9Erykr7TZb-1HM7jsJsusM6uHHrCvkBtLSmM.png?width=960&crop=smart&auto=webp&s=81e9e0cd28843980a37ca2ddc48a20002ab4a872', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bgr5bVK9Erykr7TZb-1HM7jsJsusM6uHHrCvkBtLSmM.png?width=1080&crop=smart&auto=webp&s=147900e5eacc00023e824f12320d8e0ffddf47d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bgr5bVK9Erykr7TZb-1HM7jsJsusM6uHHrCvkBtLSmM.png?auto=webp&s=379c6b3fd6aeb2224e5811504deab28103f826b3', 'width': 1200}, 'variants': {}}]}
Can someone explain how to actually use the C2S Scale model for cancer research?
2
I keep seeing headlines about Google and Yale's "C2S Scale" AI model that can analyze cells, but I'm completely lost on the practical steps. If I'm a researcher, what do I actually *do* with the C2S Scale model? Do I feed it microscope images? A spreadsheet of numbers? A specific type of genetic data? And what kind of computer power is needed to run this 27B parameter model locally? A simple explanation of the input and hardware would be incredibly helpful.
2025-10-16T07:25:37
https://www.reddit.com/r/LocalLLaMA/comments/1o7zqm5/can_someone_explain_how_to_actually_use_the_c2s/
Street-Lie-2584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7zqm5
false
null
t3_1o7zqm5
/r/LocalLLaMA/comments/1o7zqm5/can_someone_explain_how_to_actually_use_the_c2s/
false
false
self
2
null
DGX spark website stuck after I click on add to cart
0
I click on add to cart and it takes me to a page with a loading screen. And it's like that for a couple hours. I expected more people to have this problem but I haven't seen anyone else talk about this. Is it just me?
2025-10-16T07:00:56
https://www.reddit.com/r/LocalLLaMA/comments/1o7zdg6/dgx_spark_website_stuck_after_i_click_on_add_to/
realbrokenlantern
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7zdg6
false
null
t3_1o7zdg6
/r/LocalLLaMA/comments/1o7zdg6/dgx_spark_website_stuck_after_i_click_on_add_to/
false
false
self
0
null