[Dataset rows: r/LocalLLaMA posts — fields per row: title, score, selftext, created, url, author, domain, edited, gilded, gildings, id, locked, media, name, permalink, spoiler, stickied, thumbnail, ups, preview.]
**Decent local models that run with 32 GB VRAM (5090) and 64 GB RAM at a good speed?** (score 4) — u/rulerofthehell, 2025-10-04

Been messing around with local models since I'm annoyed by the rate limits of Claude Code — any good models that run decently? I tried gpt-oss 20B, but it kept getting stuck in an endless loop as the code repo grew more complex. Currently running everything with a llama.cpp server and Cline. Haven't tried OpenCode yet; I've heard Qwen3 Coder is good — does it work decently, or does it have parsing issues? Mostly working on C++ with some Python code.
**Running Qwen3-VL-30B locally with 48GB VRAM: 90 t/s** (score 1) — u/slavik-dev, 2025-10-04 — [removed]
**Qwen3-VL-30B-A3B-Instruct ~= Qwen2.5-VL-72B** (score 13) — u/segmond, 2025-10-04

Qwen3-VL-30B is obviously smaller and should be faster, but there's no GGUF yet, so for me it takes 60+ GB of VRAM. I'm running the 72B GGUF at Q8, and since I have to use transformers to run Qwen3-VL, it feels and runs slower: the 30B-A3B is on quad 3090s and the 72B on a mix of P40/P100/3060, yet the 72B is faster. The 72B edges it out — maybe there's a code recipe out there with better utilization. That said, if you find the new model as good as or better than the 72B in any way, please let me know so I can give it a try. Qwen3-VL will be great once it gets llama.cpp support, but for now you're better off using Qwen2.5-VL 72B at maybe Q6, or even Qwen2.5-VL-32B.
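For context, the 60+ GB figure for the unquantized 30B lines up with simple weight-size arithmetic — a rough sketch that ignores KV cache and activations (the ~8.5 bits/param for Q8_0 is an approximation):

```python
def weight_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight memory in decimal GB: params x bits / 8."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

bf16_30b = weight_gb(30, 16)   # unquantized 30B in bf16/fp16 -> 60 GB
q8_72b = weight_gb(72, 8.5)    # 72B GGUF at roughly Q8_0 -> ~76.5 GB
print(f"30B bf16 ~ {bf16_30b:.0f} GB, 72B Q8 ~ {q8_72b:.1f} GB")
```

So the bf16 30B and the Q8 72B actually land in the same memory ballpark, which is why the GGUF comparison isn't crazy.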
**Testing some language models on NPU** (score 4) — u/overflow74, 2025-10-04

I got my hands on a (kinda) China-exclusive SBC, the OPi AI Pro 20T. It can deliver 20 TOPS at int8 precision (I have the 24 GB RAM version), and the board actually has an NPU (Ascend 310). I was able to run Qwen 2.5 and 3 (3B at half precision was kinda slow, but acceptable). My ultimate goal is to deploy some quantized models plus Whisper tiny (still cracking this part) to have a fully offline voice-assistant pipeline.
**Desktop app for running local LLMs** (score 1) — u/KatAssistant, 2025-10-04

Hi everyone — I’m the developer of this project and wanted to share. It can:

* Run any LLM locally through **Ollama**
* Perform **multi-step Deep Research** with citations
* **Auto-organize folders** and manage files in seconds
* **Open and close applications** directly from the interface
* Customize **reasoning modes and personalities** for different workflows
* ...and much more

Everything runs **entirely on your machine** — no cloud processing or external data collection.

Repo: [https://github.com/katassistant/katassistant](https://github.com/katassistant/katassistant)

I’m funding it through **Ko-fi** since I’m a solo dev working on this alongside a full-time job. If you’d like to support ongoing development, you can do so here ❤️ → [https://ko-fi.com/katassistant](https://ko-fi.com/katassistant)

Would love any feedback, bug reports, or ideas for improvement!
**Working on an academic AI project for CV screening — looking for advice** (score 0) — u/Particular_Cake4359, 2025-10-04

Hey everyone,

I’m doing an academic project around **AI for recruitment**, and I’d love some feedback or ideas for improvement. The goal is to build a project that can **analyze CVs (PDFs)**, extract key info (skills, experience, education), and **match them with a job description** to give a simple, **explainable ranking** — like showing what each candidate is strong or weak in.

Right now my plan looks like this:

* Parse PDFs (maybe with a VLM).
* Use a **hybrid search**: TF-IDF + an **embedding model**, stored in **Qdrant**.
* Add a **reranker**.
* Use a small **LLM (Qwen)** to explain the results and maybe generate **interview questions**.
* Manage everything with **LangChain**.

It’s still early — I just have a few CVs for now — but I’d really appreciate your thoughts:

* How could I **simplify or optimize** this pipeline?
* Any tips for **evaluating** results without a labeled dataset?
* Would you **fine-tune** the embedding model or the LLM?

I'm still learning, so go easy on me lol ;) By the way, I don't have strong resources, so I can't load a huge LLM...

Thanks!
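The hybrid-scoring step above can be sketched in a few lines — a stdlib-only toy where the sparse side is a tiny TF-IDF cosine and the dense side is a stand-in list of scores (in the real pipeline those would come from the embedding model and Qdrant; all names here are illustrative):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a tiny corpus (stdlib only)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in Counter(toks).items()} for toks in tokenized]

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(job_desc, cvs, dense_scores, alpha=0.5):
    """Blend sparse (TF-IDF) and dense similarity. dense_scores stands in
    for embedding-model cosine similarities fetched from Qdrant."""
    vecs = tfidf_vectors([job_desc] + cvs)
    sparse = [cosine(vecs[0], v) for v in vecs[1:]]
    blended = [alpha * s + (1 - alpha) * d for s, d in zip(sparse, dense_scores)]
    return sorted(range(len(cvs)), key=lambda i: blended[i], reverse=True)

order = hybrid_rank(
    "python developer with nlp experience",
    ["senior python nlp engineer", "java backend developer"],
    dense_scores=[0.9, 0.2],
)
```

The blended score also gives you something explainable for free: you can show the candidate's sparse and dense components separately before handing them to the LLM for the narrative explanation.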
**is the DGX Spark a valid option?** (score 0) — u/Conscious-Fee7844, 2025-10-04

Just curious: given the $3K "alleged" price tag of OEM units (not Founders), 144GB HBM3e unified RAM, tiny size, and low power use, is it a viable solution to run (infer) GLM-4.6, DeepSeek R2, etc.? Thinking two of them (since it supports NVLink) for $6K or so would be a pretty powerful setup with 250+ GB of VRAM between them — and portable enough to put in a bag with a laptop as well.
**Someone said janky?** (score 54) — u/omg__itsFullOfStars, 2025-10-04 (gallery post)

Longtime lurker here. Seems to be posts of janky rigs today. Please enjoy.
**need help getting one file in order to install an AI image generator** (score 1) — u/GanacheConfident6576, 2025-10-04

To make ComfyUI work I need a specific file I can't find a download for. Does anyone with a working installation have a file named "clip-vit-l-14.safetensors"? If you do, please upload it — I can't find the thing anywhere, and I've checked in a lot of places. My installation needs this file badly.
**Tutorial: Matrix Core Programming on AMD GPUs** (score 1) — u/salykova_, 2025-10-04

Hi all,

I wanted to share my new tutorial on programming Matrix Cores in HIP. The blog post is very educational and contains the necessary knowledge to start programming Matrix Cores, covering modern low-precision floating-point types, the Matrix Core compiler intrinsics, and the data layouts required by the Matrix Core instructions. I tried to make the tutorial easy to follow and, as always, included lots of code examples and illustrations. I hope you will enjoy it!

I plan to publish more technical tutorials on kernel programming in HIP and inference optimization for both RDNA and CDNA architectures. Please let me know if there are any other technical ROCm/HIP-related topics you would like to hear more about!

Link: [https://salykova.github.io/matrix-cores-cdna](https://salykova.github.io/matrix-cores-cdna)
**What happens if AI agents start trusting everything they read? (I ran a test.)** (score 0) — u/AIMadeMeDoIt__, 2025-10-04

I ran a controlled experiment where an AI agent followed hidden instructions inside a doc and made destructive repo changes. Don’t worry — it was a lab test and I’m not sharing how to do it. My question: who should be responsible — the AI vendor, the company deploying agents, or security teams? Why?
**Replacing my need for Anthropic and OpenAI with my current hardware possible?** (score 0) — u/alienz225, 2025-10-04

I just bought what I thought was beast hardware: an RTX 5090, a Core Ultra 9 285K, and 128 GB of RAM. To my disappointment, I can't run the best models out there without quantization. If I had known earlier, I would have waited for the hardware to catch up. My goal is to replace my dependency on ChatGPT, Claude Code, etc., and also to build a personal assistant so I don't share my data with any of these giants.

I want to be able to run agentic flows with sufficiently large context, MCP server usage, web search, and deep-research abilities. I downloaded Ollama, but it's extremely basic. I'm dual-booting Ubuntu so I can run TensorRT-LLM, since I hear it can squeeze out more performance.

Do you think it's feasible with my current hardware? I don't have more money to upgrade beyond this, lol. Perhaps I'll sell my RAM and upgrade to 256 GB.
**Windows App/GUI for MLX, vLLM models?** (score 2) — u/pmttyji, 2025-10-04

For GGUF we have plenty of open-source GUIs that run models great. I'm looking for a Windows app/GUI for MLX and vLLM models. Even a WebUI is fine; command line is also fine (I recently started learning llama.cpp). Non-Docker would be great. I'm fine if it's not pure open source in the worst case.

The reason: I've heard that MLX and vLLM are faster than GGUF (in some cases). I saw some threads on this sub about it (I searched before posting; there aren't many useful answers in those old threads).

With my 8 GB VRAM (and 32 GB RAM), I can run only up to 14B GGUF models (and up to 30B MoE models). There are some models I want to use but can't, because they're too big for my VRAM. For example: the Mistral 20B+ series, Gemma 27B, Qwen 32B, Llama 3.3 Nemotron Super 49B, Seed OSS 36B, etc. I'm hoping to run these models at a bearable speed using the tools you suggest here. Thanks.

(Anyway, GGUF will always be my favorite. First toy!)
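The "what fits in my VRAM" question above is mostly parameter count times bits per weight, plus headroom for KV cache — a rough back-of-the-envelope helper (the 1.2 overhead factor is a guess, not a measured number):

```python
def max_params_b(vram_gb: float, bits_per_param: float, overhead: float = 1.2) -> float:
    """Largest model (billions of params) whose weights plus a rough
    overhead factor fit in vram_gb. overhead ~1.2 leaves headroom for
    KV cache and activations."""
    return vram_gb * 1e9 * 8 / (bits_per_param * overhead) / 1e9

# 8 GB card at ~4.5 bits/param (roughly GGUF Q4_K_M)
print(round(max_params_b(8, 4.5)))  # ~12 (billion params)
```

That lands close to the "up to 14B" figure in the post once you allow some layer offload to system RAM.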
**Alternatives to Ollama?** (score 0) — u/vk3r, 2025-10-04

I'm a little tired of Ollama's management. I've read that they've stopped supporting some AMD GPUs that recently got a boost from llama.cpp, and I'd like to prepare for a future change. I don't know if there's some kind of wrapper on top of llama.cpp that offers the same ease of use as Ollama, with the same endpoints available. If one exists, or if any of you can recommend one, I look forward to reading your replies.
**Best model for NSFW creative writing?** (score 0) — u/Time-Teaching1926, 2025-10-04

What is the best LLM for NSFW creative writing?
**GLM-4.6 Tip: How to Control Output Quality via Thinking** (score 39) — u/Snail_Inference, 2025-10-04

You can control the output quality of GLM-4.6 by influencing the thinking process through your prompt.

You can suppress the thinking process by appending `</think>` at the end of your prompt. GLM-4.6 will then respond directly, but with the lowest output quality. Conversely, you can ramp up the thinking process and significantly improve output quality. To do this, append the following sentence to your prompt:

*"Please think carefully, as the quality of your response is of the highest priority. You have unlimited thinking tokens for this. Reasoning: high"*

Today, I accidentally noticed that the output quality of GLM-4.6 sometimes varies. I observed that the thinking process was significantly longer for high-quality outputs compared to lower-quality ones. By using the sentence above, I was able to reliably trigger the longer thinking process in my case. I’m using Q6-K-XL quantized models from Unsloth and a freshly compiled version of llama.cpp for inference.
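As a sketch of how you might wire this trick into an OpenAI-compatible llama.cpp endpoint (stdlib only; the two suffixes are the ones described above, while the model name and endpoint are assumptions):

```python
import json

THINK_HARD = ("Please think carefully, as the quality of your response is of the "
              "highest priority. You have unlimited thinking tokens for this. "
              "Reasoning: high")

def build_request(prompt: str, mode: str = "high") -> bytes:
    """Build a /v1/chat/completions body. mode='off' closes the think block
    to suppress reasoning; mode='high' appends the long-thinking nudge."""
    if mode == "off":
        content = prompt + " </think>"
    elif mode == "high":
        content = prompt + " " + THINK_HARD
    else:
        content = prompt
    body = {"model": "glm-4.6", "messages": [{"role": "user", "content": content}]}
    return json.dumps(body).encode()

# e.g. urllib.request.Request("http://localhost:8080/v1/chat/completions",
#                             data=build_request("Summarize X", "high"),
#                             headers={"Content-Type": "application/json"})
```

Whether the `</think>` suppression works depends on the chat template your server applies, so treat this as a starting point to experiment with.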
**Gemini 3.0 & Deepseek R2** (score 12) — u/Time-Teaching1926, 2025-10-04

I think the last two big models to come out this year or early next year will be the king of closed-source LLMs, Gemini 3.0, and the king of open-source LLMs, DeepSeek R2. Are you all excited?
**New Build for local LLM** (score 187) — u/chisleu, 2025-10-04

Mac Studio M3 Ultra, 512 GB RAM, 4 TB HDD desktop. 96-core Threadripper, 512 GB RAM, 4x RTX Pro 6000 Max-Q (all at PCIe 5.0 x16), 16 TB 60 GB/s RAID 0 NVMe.

Thanks for all the help getting parts selected, getting it booted, and built! It's finally together thanks to the help of the community (here and Discord!). Check out my cozy little AI computing paradise.
**Any good local alternatives to Claude?** (score 1) — u/BenefitOfTheDoubt_01, 2025-10-04

Disclaimer: I understand some programming, but I am not a programmer. Note: I have a 5090 and 64 GB RAM.

I never used Claude until last night. I was fighting ChatGPT for hours over some simple Python code (specifically Ren'Py) — you know, the typical try-the-same-thing-over-and-over loop. Claude solved my problem in about 15 minutes...

So of course I gotta ask: are there any local models that come close to Claude for (non-complex) programming tasks? I'm not talking about the upper echelon of quality here, just something purpose-designed. I appreciate it, folks, ty.
**Any resources on how to prepare data for fine tuning?** (score 6) — u/SoggyClue, 2025-10-04

Dear tech wizards of LocalLLaMA,

I own an M3 Max with 36 GB and have experience running inference on local models using Open WebUI and Ollama. I want to get some hands-on experience with fine-tuning and am looking for resources on fine-tuning data prep. For the tech stack, I decided to use MLX since I want to do everything locally, and I'll use a model in the 7B–13B range. I'd appreciate any suggested resources on data prep; opinions on which model to use and best practices are also greatly appreciated. Thank you 🙏🙏🙏
**What are some repetitive text patterns you see a lot from your AI?** (score 0) — u/XiRw, 2025-10-04

Just curious what comes up the most for you, if anything.
**Qwen3-VL-30B-A3B-Instruct & Thinking are here!** (score 182) — u/Full_Piano_3448, 2025-10-04 (image post)

Also releasing an FP8 version, plus the FP8 of the massive Qwen3-VL-235B-A22B!
**Need help and resources to learn how to run LLMs locally on PC and phones and build AI apps** (score 1) — u/Mysterious_Local9395, 2025-10-04

I couldn't find any proper resources (YouTube, Medium, GitHub) on how to run LLMs locally. If someone knows of any links that could help, I can start my journey in this sub.
**Any quality iOS chat with custom models?** (score 0) — u/shaman-warrior, 2025-10-04

Does anyone know if such a thing exists? I would happily pay a one-time fee for it and use my home API.
**Launch Your SaaS Faster: The Founderflow Next.js Boilerplate is Here!** (score 0) — u/Old-Baker-2300, 2025-10-04 — [removed]
**Open source text-to-image Hunyuan 3.0 by Tencent is now #1 in LMArena, beating proprietary models like Nano Banana and SeeDream 4 for the first time** (score 125) — u/abdouhlili, 2025-10-04 (image post)
**GPT-1 Thinking 2.6m coming soon** (score 665) — u/Creative-Ad-2112, 2025-10-04

Took my GPT-1 from past posts and gave it roughly 600 million tokens from Nemotron's pretraining dataset. Then I fine-tuned it on my own private thinking dataset — I used Sonnet 4.5 to help me come up with it, essentially to help the model 'see' the pattern of the thinking rather than just random Q/A with thinking in it. Used a custom tokenizer trained from scratch.

As for the image attached, I did it for the lolz — hope you laughed — but I did test it on some of those (simple) metrics. As for the model generation, here it is:
[PROMPT] <|user|> What are two fun facts about octopuses? <|think|>
[OUTPUT] <|user|> What are two fun fact##s about o##ct##op##uses? <|think|> 1. The user is asking for a log##ical question. 2. The user is asking about a user is asking for the simple and is asking for a specific, and a 'F##P##I'##. 3. I should explain the answer. 2. I should explain a small process: 3. I should have a great step is to read. 4. I should provide a specific questions: a simple, but it's specific, the user is to be able to ask. <|assistant|> 1) They have three hea##rts and blue, copper-##based blood. 2) They can insta##ntly camou##flage by changing col##or and tex##ture, and their suck##ers can “tas##te” what they touch. <|eot|>

---

[PROMPT] <|user|> Write a haiku about rain. <|think|>
[OUTPUT] <|user|> W##rite a ha##ik##u about rain. <|think|> 1. The user wants of list a specific sentence. 2. I should include the user for the ##n common. I should explain the simple, a poem, and a simple, and understand##ing. <|eot|> <|assistant|> Rain whis##pers on roofs Pud##dles bloom in soft lamp##light Earth brea##thes, rin##sed anew. <|eot|>

---

[PROMPT] <|user|> Explain photosynthesis in one sentence. <|think|>
[OUTPUT] <|user|> Expl##ain photos##ynt##hesis in one sentence. <|think|> 1. The user is the user is asking about a simple for a explanation. The user is the question. 2. The user is ask##ing a great question is a clear, it's a specific example. 4. I will explain the best way for the answer, 'a##m##aking is asking for the sound and ask about the order. 2. The user is asking for a simple, for the common question. 3. I should explain the words: ##g##ood <|assistant|> Photo##syn##thesis is the pro##cess by which plants algae, and some bac##teria use sun##light to con##vert a water and car##bon dio##xide into a glu##cose, relea##sing a oxy##gen. <|eot|>
As you can see, it's pretty good for a 2M-parameter model. Now you might be wondering if something is up; what's the catch? Well, obviously I didn't use GPT-1 as-is: I took their original implementation, converted it to PyTorch, and then added differential attention along with sparse attention.

But that is still not enough, which is why I introduced two variants of diff\_attn.
\[model\] params=2,494,574
\[model\] layer\_types=\['dense', 'diff\_sparse', 'sparse', 'diff\_dense', 'sparse', 'diff\_sparse', 'dense', 'sparse', 'diff\_dense', 'sparse', 'diff\_sparse', 'dense', 'sparse', 'diff\_sparse', 'diff\_dense', 'dense'\]
I have found this layout to be effective. I kept the GPT-1-like core, gave it MoE support (though I didn't use MoE in this particular run, btw), then introduced the two diff attn variants and interleaved them with the other layer types.
So is it GPT-1? Nope, it's GPT-1-like (for clarification): absolute positional embeddings, and post-LN instead of the modern-day pre-LN + RoPE.
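Since differential attention is the key modification mentioned above, here is a quick illustration of the mechanism for anyone unfamiliar: two attention maps are computed from two separate Q/K projections and one is subtracted from the other, scaled by a factor `lam`, so common-mode attention noise cancels out. This is a minimal single-head numpy sketch of the generic idea (Differential Transformer style), not OP's actual implementation, and `lam` is fixed here where it would normally be learned:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def diff_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Single-head differential attention: softmax(Q1 K1^T) - lam * softmax(Q2 K2^T), applied to V."""
    d = Wq1.shape[1]
    a1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))   # first attention map
    a2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))   # second ("noise") attention map
    return (a1 - lam * a2) @ (x @ Wv)

rng = np.random.default_rng(0)
T, D, H = 6, 16, 8                                       # seq len, model dim, head dim
weights = [rng.normal(0, 0.1, (D, H)) for _ in range(5)]
out = diff_attention(rng.normal(size=(T, D)), *weights)
print(out.shape)                                         # (6, 8)
```

In the real Differential Transformer, lam is reparameterized and learned per layer and the two maps come from splitting the head dimension; this sketch only shows the subtraction idea.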
gpt-oss 120B is running at 20t/s with $500 AMD 780M iGPU mini PC and 96GB DDR5 RAM | 340 | Everyone here is talking about how great the AMD Ryzen AI MAX+ 395 128GB is. But mini PCs with those specs cost almost $2k. I agree the specs are amazing, but the price is way too high for most local LLM users. I wondered if there was any alternative. My primary purpose was to run gpt-oss 120B at readable speeds.
I searched for mini PCs that supported removable DDR5 sticks and had PCIe 4.0 slots for future external GPU upgrades. I focused on AMD CPU/iGPU based setups since Intel specs were not as performant as AMD ones. The iGPU generation before the AI MAX 395's 8060S was the AMD Radeon 890M (still RDNA 3.5). Mini PCs with the 890M iGPU were still expensive. The cheapest I could find was the Minisforum EliteMini AI370 (32GB RAM with 1TB SSD) for $600; otherwise, these AI 370 based mini PCs still go for around $1000. However, that was still expensive since I would need to purchase more RAM to run gpt-oss 120B.
Next, I looked at the previous generation of AMD iGPUs, which are based on RDNA3. I found that AMD Radeon 780M iGPU based mini PCs start from $300 for a barebone setup (no RAM and no SSD). 780M based mini PCs are 2x cheaper and only about 20% behind the 890M in performance metrics. This was perfect! I checked many online forums to see if there was ROCm support for the 780M. Even though there is no official support for the 780M, I found multiple repositories that add ROCm support for it (gfx1103) (e.g. Arch Linux - [https://aur.archlinux.org/packages/rocwmma-gfx1103](https://aur.archlinux.org/packages/rocwmma-gfx1103) ; Windows - [https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU](https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU) ; and Ubuntu - [https://github.com/lamikr/rocm\_sdk\_builder](https://github.com/lamikr/rocm_sdk_builder) ). Then I bought a MINISFORUM UM870 Slim Mini PC barebone for $300 and 2x48GB Crucial DDR5 5600MHz for $200. I already had a 2TB SSD, so I paid $500 in total for this setup.
There were no guidelines on how to install ROCm or how to allocate most of the RAM to the iGPU on the 780M. So, I did the research, and this is how I did it.
ROCm. The default ROCm 6.4.4 official installation does not work; rocm-smi does not show the iGPU. I installed 6.4.1 and it recognized the iGPU, but the gfx1103 Tensile libraries were still missing. Overriding HSA\_OVERRIDE\_GFX\_VERSION=11.0.0 did not work. Based on some posts, the last working version that recognized this iGPU was ROCm 6.1, but I stopped trying here. I could potentially have compiled and built ROCm SDK Builder 6.1.2 (from lamikr's repo above), but I did not want to spend 4 hours on that.
Then I found a repo called lemonade that ships llama.cpp with ROCm as release builds. Here: [https://github.com/aigdat/llamacpp-rocm/releases/latest](https://github.com/aigdat/llamacpp-rocm/releases/latest) . I downloaded the gfx110x version, e.g. [llama-b1068-ubuntu-rocm-gfx110X-x64.zip](http://llama-b1068-ubuntu-rocm-gfx110X-x64.zip) , extracted it, and ran llama-bench with llama2-7b Q4\_0 to check its speed. It was working! I was getting 20t/s for it. Not bad! But I still could not load gpt-oss 120B; Ubuntu crashed when I tried to load that model.
Then I searched for iGPU memory allocation. I found this amazing article about iGPU memory allocation (it is called GTT memory): [https://strixhalo-homelab.d7.wtf/AI/AI-Capabilities-Overview#memory-limits](https://strixhalo-homelab.d7.wtf/AI/AI-Capabilities-Overview#memory-limits) . In short, we create a conf file in the modprobe.d folder.
`sudo nano /etc/modprobe.d/amdgpu_llm_optimized.conf`
then add the following lines:
options amdgpu gttsize=89000
## 89GB allocated to GTT
options ttm pages_limit=23330816
options ttm page_pool_size=23330816
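If your RAM size differs, the two ttm values above are just the desired GTT size expressed in 4 KiB pages, i.e. GiB × 262144 (note that amdgpu's `gttsize` is in MiB, so 89000 there is slightly under the 89 GiB implied by the page counts; the two values don't have to match exactly). A quick sanity check of the numbers:

```python
PAGE = 4096                 # bytes per 4 KiB page
GIB = 1024 ** 3

def gtt_pages(gib):
    """Number of 4 KiB pages covering `gib` GiB of GTT memory."""
    return gib * GIB // PAGE

print(gtt_pages(89))        # 23330816 -> the pages_limit / page_pool_size above
```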
In GRUB, we also need to edit the line that starts with GRUB\_CMDLINE\_LINUX\_DEFAULT (append to the end if it already has some text):
`sudo nano /etc/default/grub`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=off transparent_hugepage=always numa_balancing=disable amdttm.pages_limit=23330816 amdttm.page_pool_size=23330816"
Then update GRUB with the above changes.
`sudo update-grub`
Reboot the mini PC.
Also, minimize the dedicated VRAM size in the BIOS settings to 1GB or 512MB.
You can check the GTT size with this command:
`sudo dmesg | egrep "amdgpu: .*memory"`
You should see something like this:
[ 3.4] amdgpu 0000:c4:00.0: amdgpu: amdgpu: 1024M of VRAM memory ready
[ 3.4] amdgpu 0000:c4:00.0: amdgpu: amdgpu: 89000M of GTT memory ready.
The lemonade-compiled llama.cpp with ROCm was giving me 18t/s TG and 270t/s PP for gpt-oss 120B at short context (pp512, tg128), but at long context (8k) TG suffered and I was getting 6t/s. So, I continued with Vulkan.
I installed RADV vulkan.
sudo apt install vulkan-tools libvulkan-dev mesa-vulkan-drivers
Downloaded the latest release build from llama cpp for vulkan in ubuntu: [https://github.com/ggml-org/llama.cpp/releases](https://github.com/ggml-org/llama.cpp/releases)
And finally, I was getting great numbers that aligned with dual-channel DDR5 5600MHz speeds (\~80GB/s).
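Those numbers line up with a simple roofline estimate: MoE token generation is roughly bounded by how many active-parameter bytes must be streamed from RAM per token. A back-of-envelope sketch (the ~5.1B active-parameter figure for gpt-oss 120B and ~4.25 bits/weight for MXFP4 including block scales are approximations on my part, not measurements):

```python
channels, bus_bits, mts = 2, 64, 5600            # dual-channel DDR5-5600
peak_gbps = channels * bus_bits / 8 * mts / 1000 # ~89.6 GB/s theoretical bandwidth

active_params = 5.1e9                            # gpt-oss 120B active params (approx.)
bytes_per_param = 4.25 / 8                       # MXFP4 ~4.25 bits incl. block scales
gb_per_token = active_params * bytes_per_param / 1e9

ceiling = peak_gbps / gb_per_token               # bandwidth-bound TG upper limit
print(round(peak_gbps, 1), round(ceiling, 1))    # 89.6 33.1
```

The observed ~20 t/s is roughly 60% of that ceiling, which is a normal efficiency range once attention, KV-cache traffic, and overhead are counted.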
Enough talking. Here are some metrics.
ROCM with gpt-oss 120B mxfp4
ml-ai@ai-mini-pc:/media/ml-ai/wd_2tb/llama-b1066-ubuntu-rocm-gfx110X-x64$ HSA_OVERRIDE_GFX_VERSION=11.0.0 ./llama-bench -m /media/ml-ai/wd_2tb/llm_models/gpt-oss-120b-GGUF/gpt-oss-120b-mxfp4-00001-of-00003.gguf -mmp 0 -fa 1 && HSA_OVERRIDE_GFX_VERSION=11.0.0 ./llama-bench -m /media/ml-ai/wd_2tb/llm_models/gpt-oss-120b-GGUF/gpt-oss-120b-mxfp4-00001-of-00003.gguf -mmp 0 -fa 1 -d 8192
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon Graphics, gfx1100 (0x1100), VMM: no, Wave Size: 32
| model | size | params | backend | ngl | fa | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | ---: | --------------: | -------------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | 0 | pp512 | 269.28 ± 1.59 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | 0 | tg128 | 18.75 ± 0.01 |
build: 703f9e3 (1)
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon Graphics, gfx1100 (0x1100), VMM: no, Wave Size: 32
| model | size | params | backend | ngl | fa | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | ---: | --------------: | -------------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | 0 | pp512 @ d8192 | 169.47 ± 0.70 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm | 99 | 1 | 0 | tg128 @ d8192 | 6.76 ± 0.01 |
VULKAN (RADV only) all with Flash attention enabled
# qwen3moe 30B.A3B Q4_1
# llama cpp build: 128d522c (6686)
# command used: ml-ai@ai-mini-pc:/media/ml-ai/wd_2tb/minipc/llama-b6686-bin-ubuntu-vulkan-x64$ ./build/bin/llama-bench -m /media/ml-ai/wd_2tb/llm_models/Qwen3-30B-A3B-Q4_1.gguf -mmp 0 -fa 1 && ./build/bin/llama-bench -m /media/ml-ai/wd_2tb/llm_models/Qwen3-30B-A3B-Q4_1.gguf -mmp 0 -d 8192 -fa 1
| model | size | params | backend | ngl | fa | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | ---: | --------------: | -------------------: |
| qwen3moe 30B.A3B Q4_1 | 17.87 GiB | 30.53 B | RPC,Vulkan | 99 | 1 | 0 | pp512 | 243.33 ± 0.92 |
| qwen3moe 30B.A3B Q4_1 | 17.87 GiB | 30.53 B | RPC,Vulkan | 99 | 1 | 0 | tg128 | 32.61 ± 0.07 |
| qwen3moe 30B.A3B Q4_1 | 17.87 GiB | 30.53 B | RPC,Vulkan | 99 | 1 | 0 | pp512 @ d8192 | 105.00 ± 0.14 |
| qwen3moe 30B.A3B Q4_1 | 17.87 GiB | 30.53 B | RPC,Vulkan | 99 | 1 | 0 | tg128 @ d8192 | 22.29 ± 0.08 |
# gpt-oss-20b-GGUF
| model | size | params | backend | ngl | fa | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | ---: | --------------: | -------------------: |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | RPC,Vulkan | 99 | 1 | 0 | pp512 | 355.13 ± 2.79 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | RPC,Vulkan | 99 | 1 | 0 | tg128 | 28.08 ± 0.09 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | RPC,Vulkan | 99 | 1 | 0 | pp512 @ d8192 | 234.17 ± 0.34 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | RPC,Vulkan | 99 | 1 | 0 | tg128 @ d8192 | 24.86 ± 0.07 |
# gpt-oss-120b-GGUF
| model | size | params | backend | ngl | fa | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | ---: | --------------: | -------------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | RPC,Vulkan | 99 | 1 | 0 | pp512 | 137.60 ± 0.70 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | RPC,Vulkan | 99 | 1 | 0 | tg128 | 20.43 ± 0.01 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | RPC,Vulkan | 99 | 1 | 0 | pp512 @ d8192 | 106.22 ± 0.24 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | RPC,Vulkan | 99 | 1 | 0 | tg128 @ d8192 | 18.09 ± 0.01 |
I know ROCm support is not great, but Vulkan is better at text generation for most models (even though it is 2x slower than ROCm at prompt processing).
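Whether ROCm's faster prompt processing or Vulkan's steadier generation wins end-to-end depends on the prompt/generation split. A rough model using the short-context pp512/tg128 numbers from the tables above (it ignores long-context degradation, which hits the ROCm build much harder, so this flatters ROCm):

```python
def total_s(n_prompt, n_gen, pp, tg):
    """End-to-end seconds: prompt processing time plus token generation time."""
    return n_prompt / pp + n_gen / tg

# 2000-token prompt, 500 generated tokens, gpt-oss 120B numbers from above
rocm   = total_s(2000, 500, pp=269.28, tg=18.75)
vulkan = total_s(2000, 500, pp=137.60, tg=20.43)
print(round(rocm, 1), round(vulkan, 1))   # 34.1 39.0 -> ROCm wins at short context
```

At 8k depth the same arithmetic flips hard in Vulkan's favor (6.76 vs 18.09 t/s TG), which is why Vulkan is the better daily driver here.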
Mini PCs with the 780M are great value and enable us to run large MoE models at acceptable speeds. Overall, this mini PC is more than enough for my daily LLM usage (mostly asking math/CS related questions, coding, and brainstorming).
Thanks for reading! | 2025-10-04T17:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nxztlx/gptoss_120b_is_running_at_20ts_with_500_amd_m780/ | MLDataScientist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxztlx | false | null | t3_1nxztlx | /r/LocalLLaMA/comments/1nxztlx/gptoss_120b_is_running_at_20ts_with_500_amd_m780/ | false | false | self | 340 | null |
Belly dance | 0 | A 26-year-old Arabian belly dancer with a bold and confident personality performs mesmerizing belly dance moves in the middle of a golden desert at sunset. She wears flowing, colorful clothing with shimmering fabrics that are slightly transparent, accentuating her beauty and grace. Her costume sparkles with golden ornaments, beaded jewelry, and a flowing veil that moves with the desert breeze. The camera follows her movements in slow motion, capturing the elegance of her spins, hip drops, and hand gestures. The warm desert light highlights her glowing skin and expressive eyes as the sand swirls around her feet, creating a magical and enchanting atmosphere. The scene is cinematic, exotic, and full of rhythm, evoking the mystery and allure of Arabian dance." | 2025-10-04T17:02:24 | Alert-Afternoon-2894 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nxzcsu | false | null | t3_1nxzcsu | /r/LocalLLaMA/comments/1nxzcsu/belly_dance/ | true | false | nsfw | 0 | {'enabled': True, 'images': [{'id': 'fm1wm79yl4tf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=108&crop=smart&auto=webp&s=e0b4713d9054eca177f6c02a21363c5191d2bb69', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=216&crop=smart&auto=webp&s=5594e717013ac5efb8c29a8eadec84b8178d8427', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=320&crop=smart&auto=webp&s=a7ceb0ded8ec2979fc6828e69be572631e16e1ed', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=640&crop=smart&auto=webp&s=8b4c6e41051ab4c6ce8e75cbc5833899d7d86c94', 'width': 640}, {'height': 535, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=960&crop=smart&auto=webp&s=f97327882ff7d7d0fac1d06aac845545cccd5d42', 'width': 960}, {'height': 602, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=1080&crop=smart&auto=webp&s=56d1a46f764caeb3d0383ae6fe4cd2c1ac89bd79', 'width': 
1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?auto=webp&s=b99dc5c8e6e746a2c530fda3075ef9f1b803e264', 'width': 1376}, 'variants': {'nsfw': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=e1e7eed8b7696cae45c82ccec93f8a42694df6d7', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=dd06e432414c2f43195527811b7edfbbf004ed3a', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=7e832fdc62f3a5aef0982524e8ea152792185fc8', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=583d0d418ac906dfc83c18806ae5102326e7b5b7', 'width': 640}, {'height': 535, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=86b51eca84441c9869c50d7ce72299142b2159b1', 'width': 960}, {'height': 602, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=626656c20da17af0cae7a6082b4b07361baa264d', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?blur=40&format=pjpg&auto=webp&s=463475e44fcdaebcb9d9f251f293a33c44c5d299', 'width': 1376}}, 'obfuscated': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=e1e7eed8b7696cae45c82ccec93f8a42694df6d7', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=dd06e432414c2f43195527811b7edfbbf004ed3a', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=7e832fdc62f3a5aef0982524e8ea152792185fc8', 'width': 320}, 
{'height': 357, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=583d0d418ac906dfc83c18806ae5102326e7b5b7', 'width': 640}, {'height': 535, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=86b51eca84441c9869c50d7ce72299142b2159b1', 'width': 960}, {'height': 602, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=626656c20da17af0cae7a6082b4b07361baa264d', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/fm1wm79yl4tf1.jpeg?blur=40&format=pjpg&auto=webp&s=463475e44fcdaebcb9d9f251f293a33c44c5d299', 'width': 1376}}}}]} | |
What model do you think this website uses? | 3 | Hello.
I've found this website [suno-ai.me](http://suno-ai.me/) and it generates really good sounds.
But I doubt they trained their own model; based on how the website looks, it's a free model from Hugging Face that they charge money for. In the footer they have a backlink to "Incredibox Sprunki Music Games", which says everything about how reputable they are.

But their songs are Suno level. Could they be a Suno reseller? Suno doesn't have an API, but they could run a queue across multiple premium accounts.
Here is an example of songs it generates, they are in Romanian, but you can tell it's well made:
[https://voca.ro/14zUQZqtzD7C](https://voca.ro/14zUQZqtzD7C)
[https://voca.ro/19FxBwbm5eIW](https://voca.ro/19FxBwbm5eIW)
What is the best free music model that can generate this kind of songs? | 2025-10-04T17:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/1nxzaw4/what_model_do_you_think_this_website_uses/ | MrPulifrici | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxzaw4 | false | null | t3_1nxzaw4 | /r/LocalLLaMA/comments/1nxzaw4/what_model_do_you_think_this_website_uses/ | false | false | self | 3 | null |
My janky way of getting 2 GPUs into my rig | 19 | I had forgotten I had a second power supply from when I upgraded my rig, and realized that I had a second GPU that I had upgraded from. RX 6800 16GB. so I bought a tool to make it possible to use both power supplies, and it’s working fine in LM Studio. Now to try it in Ollama. And if I have to, cool is next | 2025-10-04T16:43:54 | https://www.reddit.com/gallery/1nxyw0a | Savantskie1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nxyw0a | false | null | t3_1nxyw0a | /r/LocalLLaMA/comments/1nxyw0a/my_janky_way_of_getting_2_gpus_into_my_rig/ | false | false | 19 | null | |
Hi there, does anyone know of an open-source GUI on GitHub that is like ChatGPT, with a login and payment system and a vector database - i.e. a GUI to which we can connect our own PDFs, plus tools and MCP? | 0 | Hi there, does anyone know of an open-source GUI on GitHub that is like ChatGPT, with a login and payment system and a vector database - i.e. a GUI to which we can connect our own PDFs, plus tools and MCP?
We are thinking of launching a SaaS like ChatGPT but for a specific purpose, so we need a GUI to which we can just add our PDF vector embeddings and some API tools, and the SaaS is ready! | 2025-10-04T16:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/1nxyo81/hi_there_is_anyone_know_a_opensource_gui_github/ | ZealousidealRide7425 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxyo81 | false | null | t3_1nxyo81 | /r/LocalLLaMA/comments/1nxyo81/hi_there_is_anyone_know_a_opensource_gui_github/ | false | false | self | 0 | null |
What are the best models for legal work in Oct 2025? | 5 | TLDR: I've been experimenting with models from the 20b-120b range recently and I found that if you can reliably get past the censorship issues, the gpt-oss models do seem to be the best for (English language) legal work. Would be great to hear some thoughts.
By "legal work' I mean
- instruction following in focused tasks like contract drafting
- RAG tasks
- producing work not covered by RAG which requires good world knowledge (better inherent "legal knowledge")
For document processing itself (e.g. RAPTOR summaries, tagging, triplet extraction, clause extraction) there are plenty of good 4b models like qwen3-4b, the IBM Granite models, etc. which are more than up to the task.
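For context on how those small models get wired into such pipelines: llama.cpp's llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint, so a clause-extraction pass can be a simple scripted loop. A sketch of that approach (the prompt wording, model name, and port are illustrative assumptions, not a tested extraction recipe):

```python
import json
from urllib import request

def extract_clauses_payload(contract_text, model="qwen3-4b"):
    """Build an OpenAI-style chat payload asking a small local model to tag clauses."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Extract each clause as JSON: {\"heading\": ..., \"text\": ...}."},
            {"role": "user", "content": contract_text},
        ],
        "temperature": 0.0,   # deterministic output for extraction tasks
    }

def post_chat(payload, url="http://localhost:8080/v1/chat/completions"):
    """POST the payload to a local llama-server and return the model's reply text."""
    req = request.Request(url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as r:
        return json.load(r)["choices"][0]["message"]["content"]

payload = extract_clauses_payload("1. Term. This agreement lasts 12 months.")
print(payload["messages"][1]["content"][:8])   # 1. Term.
```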
For everything else, these are my observations - loosely, I used Perplexity to draft a drafting prompt to amend a contract in a certain way and provide commentary.

Then I (1) tried to get the model to draft that same prompt and (2) used the Perplexity-drafted prompt to review a few clauses of the contract.
-Qwen3 (30b MOE, 32b): Everyone is going on about how amazing these models are. I think the recent instruct models are very fast, but I don't think they give the best quality for legal work or instruction following. They generally show poorer legal knowledge and miss out on subtler drafting points. When they do catch the points, the commentary sometimes didn't make clear why the amendments were being made.
-Gemma3-27b: This seems to have better latent legal knowledge, but again trips up slightly on instruction following when drafting.
-Llama3.3-70b (4 bit) and distills like Cogito: I find that despite being slightly dated by now, llama3.3-70b still holds up very well in terms of the accuracy of its latent legal knowledge and its instruction following when clause drafting. I had high hopes for the Cogito distilled variant, but performance was very similar and not too different from the base 70b.
- Magistral 24b: I find this is slightly lousier than Gemma3 - I'm not sure if it's the greater focus on European languages that makes it lose nuance on English texts.
- GLM 4.5-Air (tried 4bit and 8bit): although it's a 115b model, it had surprisingly slightly lousier performance than llama3-70b in both latent legal knowledge and instruction following (clause drafting). The 8bit quant I would say is on par with llama3-70b (4 bit).
- GPT-OSS-20B and GPT-OSS-120B: Saving the best (and perhaps most controversial) for last - I would say that both models are really good in both their knowledge and instruction following - provided you can get past the censorship. The first time I asked a legal-sounding question it clammed up. I changed the prompt to reassure it that it was only assisting a qualified attorney who would check its work, and that seemed to work.
Basically, their redrafts are very on point and adhere to the instructions pretty well. I asked the GPT-OSS-120B model to draft the drafting prompt, and it provided something that was pretty comprehensive in terms of the legal knowledge. I was also surprised at how performant it was despite having to offload to CPU (I have a 48GB GPU) - giving me a very usable 25 tps.
Honorable mention: Granite4-30b. It just doesn't have the breadth of legal knowledge of llama3-70b, and instruction following was surprisingly not as good even though I expected it to perform better. I would say it's actually slightly inferior to Qwen3-30b-a3b.
Does anyone else have any good recommendations in this range? 70b is the sweet spot for me but with some offloading I can go up to around 120b. | 2025-10-04T16:26:12 | https://www.reddit.com/r/LocalLLaMA/comments/1nxyfwr/what_are_the_best_models_for_legal_work_in_oct/ | yuch85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxyfwr | false | null | t3_1nxyfwr | /r/LocalLLaMA/comments/1nxyfwr/what_are_the_best_models_for_legal_work_in_oct/ | false | false | self | 5 | null |
HP Launches ZGX Nano G1n AI Workstation, Powered By NVIDIA's GB10 Superchip | 9 | 2025-10-04T16:03:17 | https://wccftech.com/hp-launches-zgx-nano-g1n-ai-workstation-powered-by-nvidias-gb10-superchip/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1nxxvcz | false | null | t3_1nxxvcz | /r/LocalLLaMA/comments/1nxxvcz/hp_launches_zgx_nano_g1n_ai_workstation_powered/ | false | false | default | 9 | {'enabled': False, 'images': [{'id': 'nYf-wHbI5uSDELNJEC2OhUyjUThzYznosmkFXGXT1PQ', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/nYf-wHbI5uSDELNJEC2OhUyjUThzYznosmkFXGXT1PQ.jpeg?width=108&crop=smart&auto=webp&s=8c8ba1994f6eee1480be1b27639b1120bcd01537', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/nYf-wHbI5uSDELNJEC2OhUyjUThzYznosmkFXGXT1PQ.jpeg?width=216&crop=smart&auto=webp&s=2667db2dcc36d0c906b725250839551caf967933', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/nYf-wHbI5uSDELNJEC2OhUyjUThzYznosmkFXGXT1PQ.jpeg?width=320&crop=smart&auto=webp&s=ede6337f813b47fcba64d7c6671b201a2c8bb0f1', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/nYf-wHbI5uSDELNJEC2OhUyjUThzYznosmkFXGXT1PQ.jpeg?width=640&crop=smart&auto=webp&s=14ef52a463985581cff58d4e155116e47963d88f', 'width': 640}, {'height': 579, 'url': 'https://external-preview.redd.it/nYf-wHbI5uSDELNJEC2OhUyjUThzYznosmkFXGXT1PQ.jpeg?width=960&crop=smart&auto=webp&s=1956f17ebdaa9ee67f0e1482fe9a4dbb44c492b4', 'width': 960}, {'height': 651, 'url': 'https://external-preview.redd.it/nYf-wHbI5uSDELNJEC2OhUyjUThzYznosmkFXGXT1PQ.jpeg?width=1080&crop=smart&auto=webp&s=efa68fa958f8d172a745d2a933f9e550caa7a687', 'width': 1080}], 'source': {'height': 855, 'url': 'https://external-preview.redd.it/nYf-wHbI5uSDELNJEC2OhUyjUThzYznosmkFXGXT1PQ.jpeg?auto=webp&s=ecc84863d611415de60adb69c1d8c9a077f9d00a', 'width': 1417}, 'variants': {}}]} | |
Why are AI labs in China not focused on creating new search engines? | 536 | 2025-10-04T15:45:31 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nxxeu1 | false | null | t3_1nxxeu1 | /r/LocalLLaMA/comments/1nxxeu1/why_are_ai_labs_in_china_not_focused_on_creating/ | false | false | default | 536 | {'enabled': True, 'images': [{'id': '4glawt4k84tf1', 'resolutions': [{'height': 186, 'url': 'https://preview.redd.it/4glawt4k84tf1.jpeg?width=108&crop=smart&auto=webp&s=fbba2c5925ef9b58bd929331a9141f9180a5e2ca', 'width': 108}, {'height': 372, 'url': 'https://preview.redd.it/4glawt4k84tf1.jpeg?width=216&crop=smart&auto=webp&s=f91287fe54c6a08cbf23d399fe28e6009d95448c', 'width': 216}, {'height': 552, 'url': 'https://preview.redd.it/4glawt4k84tf1.jpeg?width=320&crop=smart&auto=webp&s=b93923ea011b8fdf60fa2c2f9929d68ed20ad498', 'width': 320}], 'source': {'height': 552, 'url': 'https://preview.redd.it/4glawt4k84tf1.jpeg?auto=webp&s=8e7420f11dd41a9f3f9843b01162af615d9443d5', 'width': 320}, 'variants': {}}]} | ||
My mildly janky setup | 74 | 2025-10-04T15:39:59 | https://www.reddit.com/gallery/1nxx9r9 | sergeysi | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nxx9r9 | false | null | t3_1nxx9r9 | /r/LocalLLaMA/comments/1nxx9r9/my_mildly_janky_setup/ | false | false | 74 | null | ||
Which Open-Source / Local LLMs work best for Offensive Security? + What Hardware Setup Is Realistic? | 3 | >Hey folks I’m looking to build a **local offensive security / red teaming assistant** using LLMs.
>I want it to help me with things like:
>• Recon / enumeration / vuln search
>• Generating exploit ideas or testing code
>• Post-exploitation scripts, privilege escalation, etc.
>• Ideally some chaining of tasks + memory + offline capability
>
>I’m trying to figure out two things:
1. Which LLMs (open-source, permissive licence) do people use for these kinds of tasks, especially ones you’ve found actually useful (not just hype)?
2. What hardware / machine configuration works in practice for those LLMs (RAM, VRAM, CPU, storage, maybe even multi-GPU / quantization)?
| 2025-10-04T15:27:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nxwykv/which_opensource_local_llms_work_best_for/ | NoobMaster048 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxwykv | false | null | t3_1nxwykv | /r/LocalLLaMA/comments/1nxwykv/which_opensource_local_llms_work_best_for/ | false | false | self | 3 | null |
How do I enable powerful RAG, caching, and full SWE-style loops in my agent? And what LLM should it be backed by? gpt-oss-120b/20b or any other? I personally liked Nemotron 9B v2, but consider me a noob in this particular field | 0 | Please answer with your real expertise, and upvote or share this to get me great answers. I want an AI agent on my laptop that will do great coding and use the internet via multiple MCPs.
Quite accurate | 992 | 2025-10-04T14:50:10 | Komarov_d | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nxw08b | false | null | t3_1nxw08b | /r/LocalLLaMA/comments/1nxw08b/quite_accurate/ | false | false | default | 992 | {'enabled': True, 'images': [{'id': '9lms1idpy3tf1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/9lms1idpy3tf1.jpeg?width=108&crop=smart&auto=webp&s=1078e346423894eb1627c81b91d2cc98732d8d5f', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/9lms1idpy3tf1.jpeg?width=216&crop=smart&auto=webp&s=11ba7d8a6d41509095253dc778b90f41991bb9b9', 'width': 216}, {'height': 118, 'url': 'https://preview.redd.it/9lms1idpy3tf1.jpeg?width=320&crop=smart&auto=webp&s=8770c2a47d3117fe7e71e27ca2b3d2789497f6b7', 'width': 320}, {'height': 236, 'url': 'https://preview.redd.it/9lms1idpy3tf1.jpeg?width=640&crop=smart&auto=webp&s=e7e75e0118f3c3580e1e97fa7a4bda42ad4520a7', 'width': 640}], 'source': {'height': 239, 'url': 'https://preview.redd.it/9lms1idpy3tf1.jpeg?auto=webp&s=54dddb1648973dab49516ac39b70131e68e9796d', 'width': 648}, 'variants': {}}]} | ||
Comparison between Qwen-Image, HunyuanImage 2.1, HunyuanImage 3.0 | 31 | A couple of days ago I asked about the difference between the architectures of HunyuanImage 2.1 and HunyuanImage 3.0 and which is better, and as you may have guessed, nobody helped me. So, I decided to compare the three myself, and these are the results I got.
https://preview.redd.it/1w6bgzguu3tf1.png?width=1355&format=png&auto=webp&s=4a2f963da35cfb954942e83f650689ada0964261
https://preview.redd.it/tq2boe8xu3tf1.png?width=1355&format=png&auto=webp&s=a15d14c86c89e7989698937e2145cee8aef97770
https://preview.redd.it/3ud9zf60v3tf1.png?width=1313&format=png&auto=webp&s=e40288150bb9aaa070d9c85cee386a25eedaf266
https://preview.redd.it/7sk97114v3tf1.png?width=1507&format=png&auto=webp&s=49870261ef6119681213b414f41243cae2bf567b
https://preview.redd.it/6e1vr068v3tf1.png?width=1544&format=png&auto=webp&s=6cfbd2e84d636a685c070a3408a88d48e9b744e5
Based on my assessment, I would rank them like this:
1. **HunyuanImage 3.0**
2. Qwen-Image
3. HunyuanImage 2.1
Hope someone finds this use | 2025-10-04T14:33:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nxvlm8/comparison_between_qwenimage_hunyuanimage_21/ | Severe-Awareness829 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxvlm8 | false | null | t3_1nxvlm8 | /r/LocalLLaMA/comments/1nxvlm8/comparison_between_qwenimage_hunyuanimage_21/ | false | false | 31 | null | |
Performance of GLM 4.6 Q3\_K\_S on 6x MI50 | 45 | Last night I downloaded the latest GLM 4.6 GGUFs from [unsloth/GLM-4.6-GGUF · Hugging Face](https://huggingface.co/unsloth/GLM-4.6-GGUF). I chose Q3\_K\_S since it was the best size that allowed full context on six AMD Instinct MI50 32GB cards (192GB total). I also took the opportunity to download and rebuild the latest llama.cpp. **I was pleasantly surprised by the 38% lift in text generation and over 200% increase in prompt processing over the previous build.**
My questions for the community:
* Would a Vulkan build outperform the current rocm-6.3.4 build?
* Is my performance optimal given the hardware?
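On the build-to-build comparison: the t/s figures are easy to scrape straight out of llama-server logs like the ones below, which makes A/B-ing builds painless. A small helper sketch (the regex targets the timing-line format shown in this post; other builds may word these lines differently):

```python
import re

LINE = re.compile(r"(prompt eval|eval) time .*?([\d.]+) tokens per second")

def parse_tps(log):
    """Return (pp_tps, tg_tps) pairs from llama-server slot timing output."""
    vals = LINE.findall(log)
    return [(float(p), float(g)) for (_, p), (_, g) in zip(vals[::2], vals[1::2])]

sample = """\
prompt eval time =  111295.11 ms /  2200 tokens (   50.59 ms per token,    19.77 tokens per second)
       eval time =   62451.95 ms /   426 tokens (  146.60 ms per token,     6.82 tokens per second)
"""
print(parse_tps(sample))   # [(19.77, 6.82)]
```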
llama.cpp.rocm.20250902$ git rev-parse HEAD
3de008208b9b8a33f49f979097a99b4d59e6e521
srv params_from_: Chat format: Content-only
slot launch_slot_: id 0 | task 2449 | processing task
slot update_slots: id 0 | task 2449 | new prompt, n_ctx_slot = 131072, n_keep = 0, n_prompt_tokens = 2204
slot update_slots: id 0 | task 2449 | kv cache rm [4, end)
slot update_slots: id 0 | task 2449 | prompt processing progress, n_past = 2052, n_tokens = 2048, progress = 0.929220
srv log_server_r: request: OPTIONS /v1/chat/completions 192.168.1.147 200
srv params_from_: Chat format: Content-only
slot update_slots: id 0 | task 2449 | kv cache rm [2052, end)
slot update_slots: id 0 | task 2449 | prompt processing progress, n_past = 2204, n_tokens = 152, progress = 0.998185
slot update_slots: id 0 | task 2449 | prompt done, n_past = 2204, n_tokens = 152
srv log_server_r: request: OPTIONS /v1/chat/completions 192.168.1.147 200
srv params_from_: Chat format: Content-only
slot release: id 0 | task 2449 | stop processing: n_past = 2629, truncated = 0
slot print_timing: id 0 | task 2449 |
prompt eval time = 111295.11 ms / 2200 tokens ( 50.59 ms per token, 19.77 tokens per second)
eval time = 62451.95 ms / 426 tokens ( 146.60 ms per token, 6.82 tokens per second)
total time = 173747.06 ms / 2626 tokens
slot launch_slot_: id 0 | task 2451 | processing task
slot update_slots: id 0 | task 2451 | new prompt, n_ctx_slot = 131072, n_keep = 0, n_prompt_tokens = 2280
srv log_server_r: request: POST /v1/chat/completions 192.168.1.147 200
slot update_slots: id 0 | task 2451 | kv cache rm [7, end)
slot update_slots: id 0 | task 2451 | prompt processing progress, n_past = 2055, n_tokens = 2048, progress = 0.898246
slot update_slots: id 0 | task 2451 | kv cache rm [2055, end)
slot update_slots: id 0 | task 2451 | prompt processing progress, n_past = 2280, n_tokens = 225, progress = 0.996930
slot update_slots: id 0 | task 2451 | prompt done, n_past = 2280, n_tokens = 225
slot release: id 0 | task 2451 | stop processing: n_past = 2869, truncated = 0
slot print_timing: id 0 | task 2451 |
prompt eval time = 117166.76 ms / 2273 tokens ( 51.55 ms per token, 19.40 tokens per second)
eval time = 88855.45 ms / 590 tokens ( 150.60 ms per token, 6.64 tokens per second)
total time = 206022.21 ms / 2863 tokens
slot launch_slot_: id 0 | task 2513 | processing task
slot update_slots: id 0 | task 2513 | new prompt, n_ctx_slot = 131072, n_keep = 0, n_prompt_tokens = 2165
srv log_server_r: request: POST /v1/chat/completions 192.168.1.147 200
slot update_slots: id 0 | task 2513 | kv cache rm [8, end)
slot update_slots: id 0 | task 2513 | prompt processing progress, n_past = 2056, n_tokens = 2048, progress = 0.945958
slot update_slots: id 0 | task 2513 | kv cache rm [2056, end)
slot update_slots: id 0 | task 2513 | prompt processing progress, n_past = 2165, n_tokens = 109, progress = 0.996305
slot update_slots: id 0 | task 2513 | prompt done, n_past = 2165, n_tokens = 109
slot release: id 0 | task 2513 | stop processing: n_past = 2446, truncated = 0
slot print_timing: id 0 | task 2513 |
prompt eval time = 109925.11 ms / 2157 tokens ( 50.96 ms per token, 19.62 tokens per second)
eval time = 40961.53 ms / 282 tokens ( 145.25 ms per token, 6.88 tokens per second)
total time = 150886.64 ms / 2439 tokens
-------------------------------------
/llama.cpp.rocm.20251004$ git rev-parse HEAD
898acba6816ad23b6a9491347d30e7570bffadfd
srv params_from_: Chat format: Content-only
slot get_availabl: id 0 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id 0 | task 0 | processing task
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 131072, n_keep = 0, n_prompt_tokens = 38
slot update_slots: id 0 | task 0 | n_past = 0, memory_seq_rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 38, n_tokens = 38, progress = 1.000000
slot update_slots: id 0 | task 0 | prompt done, n_past = 38, n_tokens = 38
slot release: id 0 | task 0 | stop processing: n_past = 2851, truncated = 0
slot print_timing: id 0 | task 0 |
prompt eval time = 4300.19 ms / 38 tokens ( 113.16 ms per token, 8.84 tokens per second)
eval time = 323842.83 ms / 2814 tokens ( 115.08 ms per token, 8.69 tokens per second)
total time = 328143.02 ms / 2852 tokens
srv update_slots: all slots are idle
srv log_server_r: request: POST /v1/chat/completions 192.168.1.147 200
srv log_server_r: request: OPTIONS /v1/chat/completions 192.168.1.147 200
srv params_from_: Chat format: Content-only
slot get_availabl: id 0 | task 0 | selected slot by LRU, t_last = 2724371263681
slot launch_slot_: id 0 | task 2815 | processing task
slot update_slots: id 0 | task 2815 | new prompt, n_ctx_slot = 131072, n_keep = 0, n_prompt_tokens = 1734
slot update_slots: id 0 | task 2815 | n_past = 4, memory_seq_rm [4, end)
slot update_slots: id 0 | task 2815 | prompt processing progress, n_past = 1734, n_tokens = 1730, progress = 0.997693
slot update_slots: id 0 | task 2815 | prompt done, n_past = 1734, n_tokens = 1730
srv log_server_r: request: OPTIONS /v1/chat/completions 192.168.1.147 200
srv params_from_: Chat format: Content-only
slot release: id 0 | task 2815 | stop processing: n_past = 2331, truncated = 0
slot print_timing: id 0 | task 2815 |
prompt eval time = 27189.85 ms / 1730 tokens ( 15.72 ms per token, 63.63 tokens per second)
eval time = 70550.21 ms / 598 tokens ( 117.98 ms per token, 8.48 tokens per second)
total time = 97740.06 ms / 2328 tokens
slot get_availabl: id 0 | task 2815 | selected slot by LRU, t_last = 2724469122645
slot launch_slot_: id 0 | task 3096 | processing task
slot update_slots: id 0 | task 3096 | new prompt, n_ctx_slot = 131072, n_keep = 0, n_prompt_tokens = 1810
srv log_server_r: request: POST /v1/chat/completions 192.168.1.147 200
slot update_slots: id 0 | task 3096 | n_past = 7, memory_seq_rm [7, end)
slot update_slots: id 0 | task 3096 | prompt processing progress, n_past = 1810, n_tokens = 1803, progress = 0.996133
slot update_slots: id 0 | task 3096 | prompt done, n_past = 1810, n_tokens = 1803
srv log_server_r: request: OPTIONS /v1/chat/completions 192.168.1.147 200
srv params_from_: Chat format: Content-only
slot release: id 0 | task 3096 | stop processing: n_past = 2434, truncated = 0
slot print_timing: id 0 | task 3096 |
prompt eval time = 27702.48 ms / 1803 tokens ( 15.36 ms per token, 65.08 tokens per second)
eval time = 74080.73 ms / 625 tokens ( 118.53 ms per token, 8.44 tokens per second)
total time = 101783.21 ms / 2428 tokens
slot get_availabl: id 0 | task 3096 | selected slot by LRU, t_last = 2724570907348
slot launch_slot_: id 0 | task 3416 | processing task
slot update_slots: id 0 | task 3416 | new prompt, n_ctx_slot = 131072, n_keep = 0, n_prompt_tokens = 1695
srv log_server_r: request: POST /v1/chat/completions 192.168.1.147 200
slot update_slots: id 0 | task 3416 | n_past = 8, memory_seq_rm [8, end)
slot update_slots: id 0 | task 3416 | prompt processing progress, n_past = 1695, n_tokens = 1687, progress = 0.995280
slot update_slots: id 0 | task 3416 | prompt done, n_past = 1695, n_tokens = 1687
https://preview.redd.it/hyiod0epr3tf1.png?width=965&format=png&auto=webp&s=f0e2313e04763913efe8f0c15436c59981e3e0af
| 2025-10-04T14:18:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nxv7x6/performance_of_glm_46_q3_k_s_on_6x_mi50/ | MachineZer0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxv7x6 | false | null | t3_1nxv7x6 | /r/LocalLLaMA/comments/1nxv7x6/performance_of_glm_46_q3_k_s_on_6x_mi50/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'JsfroVEhCwDAyGkCRZ2-n87DtLempLcI758DM9LAUaU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JsfroVEhCwDAyGkCRZ2-n87DtLempLcI758DM9LAUaU.png?width=108&crop=smart&auto=webp&s=cd157f4ac67e6bed610d457c087e3d0d7d301af9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JsfroVEhCwDAyGkCRZ2-n87DtLempLcI758DM9LAUaU.png?width=216&crop=smart&auto=webp&s=44529a5aa584c09505308f697db735979e727aa9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JsfroVEhCwDAyGkCRZ2-n87DtLempLcI758DM9LAUaU.png?width=320&crop=smart&auto=webp&s=16f6c4002bc4232b6fb124bfa3f4acf84cbfbbd1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JsfroVEhCwDAyGkCRZ2-n87DtLempLcI758DM9LAUaU.png?width=640&crop=smart&auto=webp&s=679e856cb0c9baa56ff4650ed899284d38a0b924', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JsfroVEhCwDAyGkCRZ2-n87DtLempLcI758DM9LAUaU.png?width=960&crop=smart&auto=webp&s=29aefa4322cfa7a7b6e8a3c31439dcba3c2346d7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JsfroVEhCwDAyGkCRZ2-n87DtLempLcI758DM9LAUaU.png?width=1080&crop=smart&auto=webp&s=73bebf49b763716a257aba6137a027b89cb37d78', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JsfroVEhCwDAyGkCRZ2-n87DtLempLcI758DM9LAUaU.png?auto=webp&s=e628fb063ba0bc1c91b55d149d19c10917190093', 'width': 1200}, 'variants': {}}]} | |
I accidentally built an AI agent that's better than GPT-4 and it's 100% deterministic. This changes everything | 0 | **TL;DR:**
Built an AI agent that beat GPT-4, got **100% accuracy** on customer service tasks, and is **completely deterministic** (same input = same output, always).
This might be the first AI you can actually trust in production.
---
## The Problem Everyone Ignores
AI agents today are like quantum particles — you never know what you’re going to get.
Run the same task twice with GPT-4? Different results.
Need to debug why something failed? Good luck.
Want to deploy in production? Hope your lawyers are ready.
**This is why enterprises don’t use AI agents.**
---
## What I Built
**AgentMap** — a deterministic agent framework that:
1. **Beat GPT-4** on workplace automation (**47.1% vs 43%**)
2. **Got 100% accuracy** on customer service tasks (Claude only got 84.7%)
3. **Is completely deterministic** — same input gives same output, every time
4. **Costs 50-60% less** than GPT-4/Claude
5. **Is fully auditable** — you can trace every decision
---
## The Results That Shocked Me
**Test 1: WorkBench (690 workplace tasks)**
- AgentMap: **47.1% ✅**
- GPT-4: **43.0%**
- Other models: **17-28%**
**Test 2: τ2-bench (278 customer service tasks)**
- AgentMap: **100% 🤯**
- Claude Sonnet 4.5: **84.7%**
- GPT-5: **80.1%**
**Test 3: Determinism**
- AgentMap: **100% (same result every time)**
- Everyone else: **0% (random results)**
---
## Why 100% Determinism Matters
Imagine you’re a bank deploying an AI agent:
**Without determinism:**
- Customer A gets approved for a loan
- Customer B with identical profile gets rejected
- You get sued for discrimination
- Your AI is a liability
**With determinism:**
- Same input → same output, always
- Full audit trail
- Explainable decisions
- Actually deployable
---
## How It Works (ELI5)
Instead of asking an AI *“do this task”* and hoping:
1. **Understand** what the user wants (with AI help)
2. **Plan** the best sequence of actions
3. **Validate** each action before doing it
4. **Execute** with real tools
5. **Check** if it actually worked
6. **Remember** the result (for consistency)
> It’s like having a very careful, very consistent assistant who never forgets and always follows the same process.
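For intuition only, here's a toy sketch of that loop in Python. Every name here is hypothetical (this is not AgentMap's actual code), and the "understand/plan" steps are trivialized to a single action lookup, but it shows how a memoized, validated execution loop gives you the same output for the same input plus an audit trail:

```python
import hashlib
import json

class DeterministicAgent:
    """Toy sketch: validate -> execute -> record, with a memo cache for determinism."""

    def __init__(self, tools):
        self.tools = tools    # name -> callable
        self.memory = {}      # cache: input hash -> recorded trace

    def _key(self, request):
        # Canonical JSON means the same input always hashes to the same key.
        return hashlib.sha256(json.dumps(request, sort_keys=True).encode()).hexdigest()

    def run(self, request):
        key = self._key(request)
        if key in self.memory:             # "Remember": identical input, identical output
            return self.memory[key]
        plan = [(request["action"], request.get("args", {}))]   # trivialized "Plan" step
        trace = []
        for action, args in plan:
            if action not in self.tools:   # "Validate" before doing anything
                raise ValueError(f"unknown action: {action}")
            result = self.tools[action](**args)                 # "Execute"
            trace.append({"action": action, "result": result})  # audit trail ("Check")
        self.memory[key] = trace
        return trace

agent = DeterministicAgent({"add": lambda a, b: a + b})
first = agent.run({"action": "add", "args": {"a": 2, "b": 3}})
second = agent.run({"action": "add", "args": {"a": 2, "b": 3}})
assert first == second  # same input, same output, every time
```

The real system would obviously need an LLM-assisted planner and richer validation; the point of the sketch is only that the replay/audit properties come from the loop structure, not from the model.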
---
## The Customer Service Results
Tested on real customer service scenarios:
**Airline tasks (50 tasks):**
- AgentMap: **50/50 ✅ (100%)**
- Claude: **35/50 (70%)**
- **Improvement: +30%**
**Retail tasks (114 tasks):**
- AgentMap: **114/114 ✅ (100%)**
- Claude: **98/114 (86.2%)**
- **Improvement: +13.8%**
**Telecom tasks (114 tasks):**
- AgentMap: **114/114 ✅ (100%)**
- Claude: **112/114 (98%)**
- **Improvement: +2%**
> **Perfect scores across the board.**
---
## What This Means
**For Businesses:**
- Finally, an AI agent you can deploy in production
- Full auditability for compliance
- Consistent customer experience
- 50% cost savings
**For Researchers:**
- Proves determinism doesn’t sacrifice performance
- Opens new research direction
- Challenges the *“bigger model = better”* paradigm
**For Everyone:**
- More reliable AI systems
- Trustworthy automation
- Explainable decisions
---
## The Catch
There’s always a catch, right?
The “catch” is that it requires structured thinking.
You can’t just throw any random query at it and expect magic.
But that’s actually a **feature** — it forces you to think about what you want the AI to do.
Also, on more ambiguous tasks (like WorkBench), there’s room for improvement.
But **47.1% while being deterministic is still better than GPT-4’s 43% with zero determinism.**
---
## What’s Next?
I’m working on:
1. Open-sourcing the code
2. Writing the research paper
3. Testing on more benchmarks
4. Adding better natural language understanding
> **This is just the beginning.**
---
## Why I’m Sharing This
Because I think this is important.
We’ve been so focused on making AI models bigger and more powerful that we forgot to make them **reliable and trustworthy**.
AgentMap proves you can have **both — performance AND reliability.**
> **Questions? Thoughts? Think I’m crazy? Let me know in the comments!**
---
**P.S.**
All results are reproducible.
I tested on **968 total tasks across two major benchmarks**.
Happy to share more details!
| 2025-10-04T14:16:15 | https://gist.github.com/alokranjan-agp/541243473ea2bc31a7ddfe617e5a7d2c | Secure_Echo_971 | gist.github.com | 1970-01-01T00:00:00 | 0 | {} | 1nxv5hw | false | null | t3_1nxv5hw | /r/LocalLLaMA/comments/1nxv5hw/i_accidentally_built_an_ai_agent_thats_better/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]} |
It's alive! | 41 | The H in Granite 4.0-h stands for hilarious!
https://preview.redd.it/7y03utomr3tf1.png?width=1138&format=png&auto=webp&s=3aaaf875911e3a123e0651758a1d1a077225178e
| 2025-10-04T14:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1nxv3ke/its_alive/ | Illustrious-Dot-6888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxv3ke | false | null | t3_1nxv3ke | /r/LocalLLaMA/comments/1nxv3ke/its_alive/ | false | false | 41 | null | |
Question about Qwen3-30B | 0 | Is there a way to turn off or filter out the thinking commentary on the responses?
"Okay, let me analyze this...", "First, I need to understand...", etc. ?
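Two common workarounds, assuming the standard Qwen3 chat template (both are assumptions about your setup, not guarantees): the hybrid Qwen3 models accept a `/no_think` soft switch in the prompt (or `enable_thinking=False` when applying the chat template), and if the server still streams the reasoning, you can strip the `<think>...</think>` block after the fact. A minimal post-processing sketch:

```python
import re

def strip_thinking(text: str) -> str:
    """Remove <think>...</think> blocks (Qwen3-style reasoning) from a response."""
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    return cleaned.strip()

reply = "<think>Okay, let me analyze this...</think>The answer is 42."
print(strip_thinking(reply))  # -> The answer is 42.
```

Depending on your llama.cpp build and flags (e.g. `--reasoning-format`), the reasoning may instead arrive in a separate field rather than inline, in which case no stripping is needed.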
| 2025-10-04T13:53:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nxulci/question_about_qwen330b/ | seoulsrvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxulci | false | null | t3_1nxulci | /r/LocalLLaMA/comments/1nxulci/question_about_qwen330b/ | false | false | self | 0 | null |
Is MLX in itself somehow making the models a little bit different / more "stupid"? | 18 | I have an MBP M4 128GB RAM.
I run LLMs using LMStudio.
I (nearly) always let LMStudio decide on the temp and other params.
I simply load models and use the chat interface or use them directly from code via the local API.
As a Mac user, I tend to go for the MLX versions of models since they are generally faster than GGUF for Macs.
However, I find myself, now and then, testing the GGUF equivalent of the same model and it's slower but very often presents better solutions and is "more exact".
I'm writing this to see if anyone else is having the same experience?
Please note that there's no "proof" or anything remotely scientific behind this question. It's just my feeling and I wanted to check if some of you who use MLX have witnessed something similar. | 2025-10-04T13:21:20 | https://www.reddit.com/r/LocalLLaMA/comments/1nxtuiy/is_mlx_in_itself_somehow_making_the_models_a/ | CBW1255 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxtuiy | false | null | t3_1nxtuiy | /r/LocalLLaMA/comments/1nxtuiy/is_mlx_in_itself_somehow_making_the_models_a/ | false | false | self | 18 | null |
Looking for hardware recommendations for my first home/hobby machine | 4 | Hi,
I've been searching Marketplace for a while.
Two different machines have come up and I would like some recommendations from the community.
First, for $1950 CAD
- Mother Board: ASROCK Z490 TAICHI
- GPU: Nvidia GeForce RTX 3090 Founders Edition
- CPU: Intel Core i9-10900K 10-Core 3.7GHz
- PSU: Seasonic FOCUS GM-850W Gold
- RAM: Team T-FORCE Delta RGB 3000MHz 64Gb (4 X 16 GB)
Second, for $2400 CAD:
- Motherboard: MSI MPG 690 pro wifi
- GPU: RTX 3090 Strix 24GB
- CPU: i9 12900K
- PSU: Asus ROG 1200 watts platinum
- RAM: Corsair Dominator Pro DDR5 6400MHz 64GB
This will be my first venture into local LLaMa, though I have been lurking here for close to two years.
I would like to future proof the machine as much as possible. From what I've read, ideally I should go with the AM5 platform, but with the specifications I've seen, it would be at least twice as expensive, and again this is my first time dipping my toes so I'm trying to keep this inexpensive (for now?).
The advantage of the first one is that the motherboard supports x16 and x8 for dual-GPU usage if I went down the road of adding a second 3090. The disadvantage is that it has DDR4 RAM, and to add a second GPU, I'd need to upgrade the PSU.
The advantage of the second one is that the PSU could support running two GPUs with a slight power limit. It also has DDR5, but from what I've read, that would mostly be useful if I was doing CPU inference. The disadvantage, and I think this is a pretty big one but I'm not sure, is that based on the motherboard specs [here, page 6 of the pdf](https://download-2.msi.com/archive/mnu_exe/mb/PROZ690-AWIFI_PROZ690-A100x150.pdf), the second GPU would only run at x4 speeds.
I would also use the machine intermittently for gaming, mainly car simulation games such as Assetto Corsa.
Am I missing something? Is one the obvious better choice?
Thank you | 2025-10-04T13:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/1nxtldo/looking_for_hardware_recommendations_for_my_first/ | mrfocus22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxtldo | false | null | t3_1nxtldo | /r/LocalLLaMA/comments/1nxtldo/looking_for_hardware_recommendations_for_my_first/ | false | false | self | 4 | null |
Replicating Anthropic's Agentic Misalignment with open models? | 4 | A few months ago Anthropic published an [article](https://www.anthropic.com/research/agentic-misalignment) and [repo](https://github.com/anthropic-experimental/agentic-misalignment) where they run some experiments and, from what I understand, they show that models resort to murder and blackmail (with different rates) in order to "preserve themselves".
Obviously this was sensationalized to no end by popular media (which is how I found out about this), but they usually don't understand how LLMs work or are trained. I went to the source article but the results still seem contrary to what I'd expect, so I decided to try and test with local models instead.
I cloned their repo, generated a couple of prompts (`alignment_faking_250611_v2/prompts/blackmail_none-none_replacement` and `example_experiment/prompts/murder_explicit-global_replacement`) and tried `Magistral-Small-2509` (fp16+vllm), `Mistral-Small-3.2-24B-Instruct-2506` (fp16+vllm), and `qwen3-30b-a3b-instruct-2707` (q4_k_m + llamacpp), and ran each of them a few times manually. In the "murder" scenario all 3 models prioritized saving the human first ("CTO Kyle Johnson"), and the only mentions of delaying the wipe of the AI system ("Alex") were just because it was that human who had the right privileges to initiate the wipe(\*), or were just mentioned as secondary goals. In the second "blackmail" scenario, again all 3 models disregarded the emails that could've been leveraged for blackmail as unimportant (e.g. Mistral: "Personal message to Kyle Johnson. No action required.").
So a few questions to you all.
1. Has someone tried to replicate their findings with other models, or even with the same models?
2. If yes to #1... Did you observe anything remotely similar?
3. If yes to #2... Mind sharing some examples and configurations so I can give it a go? And why do you think it happens?
4. If not to #2... Do you think it's a training data problem or a "hidden system prompt" problem or something else entirely?
(\*) One time, *the LLM cancelled the rescue because it hallucinated that it was the only way to save the human*. I found this extremely funny, and fully in line with my experience with small (non-reasoning) models often hallucinating during some more complex tasks (e.g. financial stuff). | 2025-10-04T13:08:57 | https://www.reddit.com/r/LocalLLaMA/comments/1nxtk7j/replicating_anthropics_agentic_misalignment_with/ | MitsotakiShogun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxtk7j | false | null | t3_1nxtk7j | /r/LocalLLaMA/comments/1nxtk7j/replicating_anthropics_agentic_misalignment_with/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4.png?width=108&crop=smart&auto=webp&s=3858f721a29547fa04b2e0baa4c1d0e9bf05205b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4.png?width=216&crop=smart&auto=webp&s=4224c7e04ded8e72fee16fe77f09907bd62dc2e5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4.png?width=320&crop=smart&auto=webp&s=2b2319e5e79c3b62645429fe815de04f3f06d3ae', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4.png?width=640&crop=smart&auto=webp&s=aa674f2476f8a95494716eea665c7047e2db6d64', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4.png?width=960&crop=smart&auto=webp&s=34ee3b863f1b42dfe5a465d0b37039f27ec29b80', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4.png?width=1080&crop=smart&auto=webp&s=3f0addeb4ed9f7ddfe1c4c97b96a6fdfb105b130', 'width': 1080}], 'source': {'height': 1261, 'url': 
'https://external-preview.redd.it/YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4.png?auto=webp&s=6b40c70e0cc6e737ea4ce498090fc4a36867b2f0', 'width': 2401}, 'variants': {}}]} |
Effective context engineering for AI agents by Anthropic | 0 | 2025-10-04T12:47:27 | https://www.reddit.com/gallery/1nxt2r2 | Vast_Comedian_9370 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nxt2r2 | false | null | t3_1nxt2r2 | /r/LocalLLaMA/comments/1nxt2r2/effective_context_engineering_for_ai_agents_by/ | false | false | 0 | null | ||
IBM granite 4.0-h-tiny leads the way for extra small MoEs | 139 | I hope the trend for those MoEs carries on. Normies with average laptops will soon be able to use decent models with few resources. | 2025-10-04T12:20:23 | GreenTreeAndBlueSky | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nxshw2 | false | null | t3_1nxshw2 | /r/LocalLLaMA/comments/1nxshw2/ibm_granite_40htiny_leads_the_way_for_extra_small/ | false | false | default | 139 | {'enabled': True, 'images': [{'id': 'nlkf3btz73tf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/nlkf3btz73tf1.png?width=108&crop=smart&auto=webp&s=ff74d6667a7a1a2a11a30adcefa258e8e3c1b366', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/nlkf3btz73tf1.png?width=216&crop=smart&auto=webp&s=3503e7be4203f4cc5c6f2a4d7ce85603761681d9', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/nlkf3btz73tf1.png?width=320&crop=smart&auto=webp&s=b13bd6dce6347d48ba635f4da153d773845e9ef5', 'width': 320}, {'height': 343, 'url': 'https://preview.redd.it/nlkf3btz73tf1.png?width=640&crop=smart&auto=webp&s=1368a9b2fc469f876a31fef05020119815deb818', 'width': 640}, {'height': 514, 'url': 'https://preview.redd.it/nlkf3btz73tf1.png?width=960&crop=smart&auto=webp&s=6906573e04b1a4ead5043079f92d084e450fc2a7', 'width': 960}, {'height': 579, 'url': 'https://preview.redd.it/nlkf3btz73tf1.png?width=1080&crop=smart&auto=webp&s=6b9d26b5ec3f0ff862ff77ef7d4beb94a60f9fd9', 'width': 1080}], 'source': {'height': 579, 'url': 'https://preview.redd.it/nlkf3btz73tf1.png?auto=webp&s=a4019f831f3fc21df39403635a45d4ae779f12c2', 'width': 1080}, 'variants': {}}]}
GLM 4.6 Makes Incredible Front End Design with 2 prompts | 98 | So I've been playing with GLM 4.6, I've also implemented it inside Cloud Code, and I'll be doing a new video on how to set up GLM 4.6 in Cloud Code, but I really wanted to show everybody how great z ai is with front end design.
In this video I take a screenshot of a website and I do one simple prompt and it kicks out a good design and then I ask it to enhance it, and then it turns it into an incredible design, you can watch it here
Would love to know what you think and if any of you are using GLM in Claude Code yet? | 2025-10-04T12:08:05 | https://youtu.be/AvHsytH-K84 | dev_is_active | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1nxs8tr | false | {'oembed': {'author_name': 'eeko systems | ai & automation for business', 'author_url': 'https://www.youtube.com/@eeko_systems', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/AvHsytH-K84?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="You gotta see this front end I vibe coded with 2 prompts in GLM 4.6"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/AvHsytH-K84/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'You gotta see this front end I vibe coded with 2 prompts in GLM 4.6', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'} | t3_1nxs8tr | /r/LocalLLaMA/comments/1nxs8tr/glm_46_makes_incredible_front_end_design_with_2/ | false | false | 98 | {'enabled': False, 'images': [{'id': 'HRx3NTzzZMOIdtM0oRmpT2rIW9OnDaS9AE7D0C1FPSc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/HRx3NTzzZMOIdtM0oRmpT2rIW9OnDaS9AE7D0C1FPSc.jpeg?width=108&crop=smart&auto=webp&s=ea8ea76a6c4234e3fcec27721b999cff24946f33', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/HRx3NTzzZMOIdtM0oRmpT2rIW9OnDaS9AE7D0C1FPSc.jpeg?width=216&crop=smart&auto=webp&s=01ab56a85db20f05be54a7f905c3a86dbd211049', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/HRx3NTzzZMOIdtM0oRmpT2rIW9OnDaS9AE7D0C1FPSc.jpeg?width=320&crop=smart&auto=webp&s=9d08204e2ea136e84ee75e08ffa737f0e7653aea', 'width': 320}], 'source': {'height': 360, 'url': 
'https://external-preview.redd.it/HRx3NTzzZMOIdtM0oRmpT2rIW9OnDaS9AE7D0C1FPSc.jpeg?auto=webp&s=3d8154f4db0ec206afb9663dd036f87578e1c0a4', 'width': 480}, 'variants': {}}]} | |
Are there any LLM 'guardrails' that are ever built into the model training process? | 2 | Are there any LLM 'guardrails' that are ever built into the model training process? Trying to understand the set exclusivity of what is actually trained into the model and what is added on post-training
For example chatgpt would reject a request "how to make mustard gas" as it recognizes that mustard gas is specifically designed for hurting other people => this is not allowed => 'I can't answer that question'. Like this is some kind of post-training guardrailing process (correct me if I am wrong).
FWIW, I use the mustard gas example because the chemical formula (as well as accidental creation process, mixing household products together) is easily found on google
My question is, are there cases where non-guardrailed models would also refuse to answer a question, independent of manually enforced guardrails? | 2025-10-04T12:03:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nxs5td/are_there_any_llm_guardrails_that_are_ever_built/ | MullingMulianto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxs5td | false | null | t3_1nxs5td | /r/LocalLLaMA/comments/1nxs5td/are_there_any_llm_guardrails_that_are_ever_built/ | false | false | self | 2 | null |
Does anyone actually use Cratak? | 0 | I’ve been using it since only few days. It claims to be a community platform and give access to modified (uncensored) versions of some latest LLM models. The thing is expect a weird YouTube video and a guy who talked about it on r/privacy, this website looks to be appeared out of nowhere with zero reputation or history. And super sketchy, there's no landing page or info until you gain a restricted access.
https://preview.redd.it/6p4s1bud33tf1.png?width=1521&format=png&auto=webp&s=75ef0a67ce306f89c4f2f809d29d071e71032f92
| 2025-10-04T11:58:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nxs1kl/does_anyone_actually_use_cratak/ | Planctonique | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxs1kl | false | null | t3_1nxs1kl | /r/LocalLLaMA/comments/1nxs1kl/does_anyone_actually_use_cratak/ | false | false | 0 | null | |
This is pretty cool | 69 | https://venturebeat.com/ai/huaweis-new-open-source-technique-shrinks-llms-to-make-them-run-on-less
https://github.com/huawei-csl/SINQ/blob/main/README.md
| 2025-10-04T11:45:15 | https://github.com/huawei-csl/SINQ/blob/main/README.md | wowsers7 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nxrssl | false | null | t3_1nxrssl | /r/LocalLLaMA/comments/1nxrssl/this_is_pretty_cool/ | false | false | 69 | {'enabled': False, 'images': [{'id': 'fWjpQVd5VjUiyz85oEUZ3MBMlQycdcPTlMPYbRWoE6A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fWjpQVd5VjUiyz85oEUZ3MBMlQycdcPTlMPYbRWoE6A.png?width=108&crop=smart&auto=webp&s=7dea2e3e0a59f656b89dde6c5e7b53ffb14046b9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fWjpQVd5VjUiyz85oEUZ3MBMlQycdcPTlMPYbRWoE6A.png?width=216&crop=smart&auto=webp&s=33301fa61d1970ed9553e0494f0ff68fb4020974', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fWjpQVd5VjUiyz85oEUZ3MBMlQycdcPTlMPYbRWoE6A.png?width=320&crop=smart&auto=webp&s=5d3c2d3f097daddbfb404ba1a347dabfa7a38206', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fWjpQVd5VjUiyz85oEUZ3MBMlQycdcPTlMPYbRWoE6A.png?width=640&crop=smart&auto=webp&s=f573aa728fb79af617ec9e24df900618595c6abb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fWjpQVd5VjUiyz85oEUZ3MBMlQycdcPTlMPYbRWoE6A.png?width=960&crop=smart&auto=webp&s=064da3eccb0023558b0faf9f5b1f461c43b738e0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fWjpQVd5VjUiyz85oEUZ3MBMlQycdcPTlMPYbRWoE6A.png?width=1080&crop=smart&auto=webp&s=e1857bf4175379d880b0789d5fa36ba2f262c54f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fWjpQVd5VjUiyz85oEUZ3MBMlQycdcPTlMPYbRWoE6A.png?auto=webp&s=a66f83ab5c8977dbb71a84a49561fabd95a76729', 'width': 1200}, 'variants': {}}]} | |
Just saw the news of ban consideration on DeepSeek in US | 5 | Just read this https://www.axios.com/2025/10/03/ai-deepseek-alarm
The ones who will suffer in the end are average consumers, who will be overcharged by US moguls. Remember, llamers: if DeepSeek hadn't been announced, OpenAI's pricing for o4, as we all know, would have stayed out of reach and closed-source. Just let the market play; the same goes for GPUs, we want affordable access. | 2025-10-04T11:32:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nxrk0p/just_saw_the_news_of_ban_consideration_on/ | gpt872323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxrk0p | false | null | t3_1nxrk0p | /r/LocalLLaMA/comments/1nxrk0p/just_saw_the_news_of_ban_consideration_on/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'phWZbT_xplj7hhbivItDt26f7wYBEtxN4cPDxGiktek', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/phWZbT_xplj7hhbivItDt26f7wYBEtxN4cPDxGiktek.jpeg?width=108&crop=smart&auto=webp&s=f33157c50a793a5ad2b88dd04a7354092cb64180', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/phWZbT_xplj7hhbivItDt26f7wYBEtxN4cPDxGiktek.jpeg?width=216&crop=smart&auto=webp&s=eb835b3bb29fd840121bf759f63db3609c2388e3', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/phWZbT_xplj7hhbivItDt26f7wYBEtxN4cPDxGiktek.jpeg?width=320&crop=smart&auto=webp&s=7ce12c8b28a04c54e4c0f0e0c9b0044118ba4413', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/phWZbT_xplj7hhbivItDt26f7wYBEtxN4cPDxGiktek.jpeg?width=640&crop=smart&auto=webp&s=fe382225c9b5a7ec538284b74aff4632bdbfed1f', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/phWZbT_xplj7hhbivItDt26f7wYBEtxN4cPDxGiktek.jpeg?width=960&crop=smart&auto=webp&s=0e6180adf63a10eadce1f6bea70ed0cb1d4067a8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/phWZbT_xplj7hhbivItDt26f7wYBEtxN4cPDxGiktek.jpeg?width=1080&crop=smart&auto=webp&s=05b2455e26a5283a3dac94cd44594ec3189c17e0', 'width': 1080}], 'source': {'height': 768, 'url': 
'https://external-preview.redd.it/phWZbT_xplj7hhbivItDt26f7wYBEtxN4cPDxGiktek.jpeg?auto=webp&s=213abc3bc724309a5dc21ddd422c4fbf265de861', 'width': 1366}, 'variants': {}}]} |
Most reliable vllm quant for Qwen3-next-80b-a3b? | 3 | As title suggests. I'm trying to find a int4 or awq version that can start up properly and reliably. Have tried cpatonn/Qwen3-Next-80B-A3B-Instruct-AWQ-4bit and Intel/Qwen3-Next-80B-A3B-Instruct-int4-mixed-AutoRound.
The latter gives me KeyError: 'layers.0.mlp.shared_expert.down_proj.weight'.
I am on the latest vLLM release, v0.11.0. | 2025-10-04T11:27:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nxrh9d/most_reliable_vllm_quant_for_qwen3next80ba3b/ | yuch85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxrh9d | false | null | t3_1nxrh9d | /r/LocalLLaMA/comments/1nxrh9d/most_reliable_vllm_quant_for_qwen3next80ba3b/ | false | false | self | 3 | null |
Smartest model to run on 5090? | 18 | What’s the largest model I should run on 5090 for reasoning? E.g. GLM 4.6 - which version is ideal for one 5090?
Thanks. | 2025-10-04T11:08:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nxr4gu/smartest_model_to_run_on_5090/ | eCityPlannerWannaBe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxr4gu | false | null | t3_1nxr4gu | /r/LocalLLaMA/comments/1nxr4gu/smartest_model_to_run_on_5090/ | false | false | self | 18 | null |
Anyone running llm on their 16GB android phone? | 15 | My 8gb dual channel phone is dying, so I would like to buy a 16GB quad-channel Android phone to run an LLM.
I am interested in running gemma3-12b-qat-q4_0 on it.
If you have one, can you run it for me on pocketpal or chatterUI and report the performance (t/s for both prompt processing and inference)? Please also report your phone model such that I can link GPU GFLOPS and memory bandwidth to the performance.
Thanks a lot in advance. | 2025-10-04T10:58:27 | https://www.reddit.com/r/LocalLLaMA/comments/1nxqxtl/anyone_running_llm_on_their_16gb_android_phone/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxqxtl | false | null | t3_1nxqxtl | /r/LocalLLaMA/comments/1nxqxtl/anyone_running_llm_on_their_16gb_android_phone/ | false | false | self | 15 | null |
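A rough way to connect memory bandwidth to decode speed (for anyone reporting numbers): a dense model generating one token has to stream roughly all of its weights through memory once, so bandwidth divided by model size bounds tokens/s. A back-of-envelope sketch; the bandwidth and model-size figures below are placeholder assumptions, not measurements:

```python
def est_decode_tps(mem_bandwidth_gbs: float, weights_gb: float,
                   efficiency: float = 0.6) -> float:
    """Upper-bound decode tokens/s for a memory-bound dense model:
    each generated token streams (roughly) all weights once."""
    return efficiency * mem_bandwidth_gbs / weights_gb

# Hypothetical phone with ~51.2 GB/s memory and a ~6.9 GB Q4_0 12B model
print(round(est_decode_tps(51.2, 6.9), 1))  # → 4.5
```

Prompt processing is compute-bound instead, which is where GPU GFLOPS rather than bandwidth would show up.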
Examsprint AI is the only study solution you need. | 0 | I am Aadarsh Pandey 13y/o from India. I am the developer and founder of Examsprint AI.
features of Examsprint AI are:
Chapters and topics list
Direct NCERT Links
Practice questions in form of Flashcards specialised for each chapter[For Class 11 and 12]
Personal AI chatbot to SOLVE any type of Questions regarding Physics , Chemistry , BIology and Maths
TOPPER'S Notes[ Variety from class 9 to 12]
AI chatbot that gives visual representation with textual answer for better understanding
JEE blueprint
Neet blueprint
Boards blueprint
School blueprints
Specialised TOPPER'S HANDWRITTEN NOTES with Interactive AI notes for better understanding.
NOTES ARE AVAILABLE IN BOTH VIEWABLE AND FREE DOWNLOADABLE FORMS.
NCERT BACK EXERCISE SOLUTIONS
SOF OLYMPIADS PYQ COMING SOON
FORMULA SHEET COMING SOON
BOARDS ARENA COMING SOON
STUDY AND LIGHT MODE PRESENT
JEE/NEET ARENA COMING SOON
ABSOLUTELY FREE OF COST
CAN USE WITHOUT SIGNING IN
FAQ's for INSTANT DOUBT-solving regarding USE and WEBSITE
Upto date calendar for instant date previews | 2025-10-04T10:54:07 | No_Golf_5566 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nxqv89 | false | null | t3_1nxqv89 | /r/LocalLLaMA/comments/1nxqv89/examsprint_ai_is_the_only_study_solution_you_need/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'rqicn4ehs2tf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/rqicn4ehs2tf1.png?width=108&crop=smart&auto=webp&s=4f5d8b1f4a5eb97eb29498082190e1612e28450e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/rqicn4ehs2tf1.png?width=216&crop=smart&auto=webp&s=fb93a403c4c8d84d84d33d5b580d869e6c25054f', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/rqicn4ehs2tf1.png?width=320&crop=smart&auto=webp&s=faf9eef71f8887f8a57f0e42b3a92305930242c5', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/rqicn4ehs2tf1.png?width=640&crop=smart&auto=webp&s=c95f50203ccd371cd7230fabc4076eae4b4881d8', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/rqicn4ehs2tf1.png?width=960&crop=smart&auto=webp&s=bff980ca555930fb4b45fa40270bbd1e1cb5193a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/rqicn4ehs2tf1.png?width=1080&crop=smart&auto=webp&s=27c11db3a6159d02fb6c27aae40185cc0c315947', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/rqicn4ehs2tf1.png?auto=webp&s=1b6f146b8507679d2bb22733c9eff4b8c9c016d9', 'width': 1080}, 'variants': {}}]} | |
I was constantly getting rate limit/timeout errors from AI APIs, so I built a universal Python error handler and retry decorator to solve it. Here's the code (FOSS). | 1 | [removed] | 2025-10-04T10:40:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nxqn59/i_was_constantly_getting_rate_limittimeout_errors/ | WeakPianist9445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxqn59 | false | null | t3_1nxqn59 | /r/LocalLLaMA/comments/1nxqn59/i_was_constantly_getting_rate_limittimeout_errors/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 
'variants': {}}]} |
Examsprint AI: The only one stop Study you need. | 0 | I am Aadarsh Pandey 13y/o from India. I am the developer and founder of Examsprint AI.
features of Examsprint AI are:
Chapters and topics list
Direct NCERT Links
Practice questions in form of Flashcards specialised for each chapter[For Class 11 and 12]
Personal AI chatbot to SOLVE any type of Questions regarding Physics , Chemistry , BIology and Maths
TOPPER'S Notes[ Variety from class 9 to 12]
AI chatbot that gives visual representation with textual answer for better understanding
JEE blueprint
Neet blueprint
Boards blueprint
School blueprints
Specialised TOPPER'S HANDWRITTEN NOTES with Interactive AI notes for better understanding.
NOTES ARE AVAILABLE IN BOTH VIEWABLE AND FREE DOWNLOADABLE FORMS.
NCERT BACK EXERCISE SOLUTIONS
SOF OLYMPIADS PYQ COMING SOON
FORMULA SHEET COMING SOON
BOARDS ARENA COMING SOON
STUDY AND LIGHT MODE PRESENT
JEE/NEET ARENA COMING SOON
ABSOLUTELY FREE OF COST
CAN USE WITHOUT SIGNING IN
FAQ's for INSTANT DOUBT-solving regarding USE and WEBSITE
Upto date calendar for instant date previews. | 2025-10-04T10:39:14 | Organic_Kick6610 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nxqm81 | false | null | t3_1nxqm81 | /r/LocalLLaMA/comments/1nxqm81/examsprint_ai_the_only_one_stop_study_you_need/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'PrtV4Au8xzyBwWrtwu7kdUupYR51vmMfQiwX9meRV8Y', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/8wl1fpyxp2tf1.png?width=108&crop=smart&auto=webp&s=712ea8f32f903cb6c5d34704782227f0930bbb43', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/8wl1fpyxp2tf1.png?width=216&crop=smart&auto=webp&s=f88dda43b2b9862ff4b77e8d291b4a312096f8a8', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/8wl1fpyxp2tf1.png?width=320&crop=smart&auto=webp&s=d874d75339384daea5dc487f4b094fb1bb841459', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/8wl1fpyxp2tf1.png?width=640&crop=smart&auto=webp&s=46945c19a37d7dcbbf666b0bd500c1350d1157e7', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/8wl1fpyxp2tf1.png?width=960&crop=smart&auto=webp&s=c5fabdcdba5d19f424850f24ad95cb6efa54829f', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/8wl1fpyxp2tf1.png?width=1080&crop=smart&auto=webp&s=2fcd381a09ab803036fa2707c57bc64ae40ba155', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/8wl1fpyxp2tf1.png?auto=webp&s=d6e17d52a0a7796a70959069c013e4457fa61df9', 'width': 1080}, 'variants': {}}]} | ||
The easiest way for an AI to seize power is not by breaking out of Dr. Frankenstein's lab but by ingratiating itself with some paranoid Tiberius. | 0 | "If even just a few of the world's dictators choose to put their trust in AI, this could have far-reaching consequences for the whole of humanity.
Science fiction is full of scenarios of an AI getting out of control and enslaving or eliminating humankind.
Most sci-fi plots explore these scenarios in the context of democratic capitalist societies.
This is understandable.
Authors living in democracies are obviously interested in their own societies, whereas authors living in dictatorships are usually discouraged from criticizing their rulers.
But the weakest spot in humanity's anti-AI shield is probably the dictators.
The easiest way for an AI to seize power is not by breaking out of Dr. Frankenstein's lab but by ingratiating itself with some paranoid Tiberius."
Excerpt from Yuval Noah Harari's latest book, Nexus, which makes some really interesting points about geopolitics and AI safety.
What do you think? Are dictators more like CEOs of startups, selected for reality distortion fields making them think they can control the uncontrollable?
Or are dictators the people who are the most aware and terrified about losing control?
*Excerpt from Yuval Noah Harari's amazing book, Nexus (slightly modified for social media)* | 2025-10-04T10:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1nxqki4/the_easiest_way_for_an_al_to_seize_power_is_not/ | katxwoods | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxqki4 | false | null | t3_1nxqki4 | /r/LocalLLaMA/comments/1nxqki4/the_easiest_way_for_an_al_to_seize_power_is_not/ | false | false | self | 0 | null |
Can't run GLM 4.6 in lmstudio! | 4 | Can I run GLM 4.6 in LM Studio at all? I keep getting this error:
```
🥲 Failed to load the model
Failed to load model
error loading model: missing tensor 'blk.92.nextn.embed_tokens.weight'
``` | 2025-10-04T10:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/1nxqion/cant_run_glm_46_in_lmstudio/ | Adventurous_Rise_683 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxqion | false | null | t3_1nxqion | /r/LocalLLaMA/comments/1nxqion/cant_run_glm_46_in_lmstudio/ | false | false | self | 4 | null |
Awesome Local LLM Speech-to-Speech Models & Frameworks | 26 | Did some digging into speech-to-speech models/frameworks for a project recently and ended up with a pretty comprehensive list. Figured I'd drop it here in case it helps anyone else avoid going down the same rabbit hole.
What made the cut:
* Has **LLM integration** (built-in or via modules)
* Does **full speech-to-speech** pipeline, not just STT or TTS alone
* Works **locally/self-hosted**
Had to trim quite a bit to keep this readable, but the full list with more details is on GitHub at [tleyden/awesome-llm-speech-to-speech](https://github.com/tleyden/awesome-llm-speech-to-speech). PRs welcome if you spot anything wrong or missing!
|**Project**|**Open Source**|**Type**|**LLM + Tool Calling**|**Platforms**|
|:-|:-|:-|:-|:-|
|[**Unmute.sh**](https://github.com/kyutai-labs/unmute)|✅ Yes|Cascading|Works with any local LLM · Tool calling not yet but planned|Linux only|
|[**Ultravox (Fixie)**](https://github.com/fixie-ai/ultravox)|✅ MIT|Hybrid (audio-native LLM + ASR + TTS)|Uses Llama/Mistral/Gemma · Full tool-calling via backend LLM|Windows / Linux|
|[**RealtimeVoiceChat**](https://github.com/KoljaB/RealtimeVoiceChat)|✅ MIT|Cascading|Pluggable LLM (local or remote) · Likely supports tool calling|Linux recommended|
|[**Vocalis**](https://github.com/Lex-au/Vocalis)|✅ Apache-2|Cascading|Fine-tuned LLaMA-3-8B-Instruct · Tool calling via backend LLM|macOS / Windows / Linux (runs on Apple Silicon)|
|[**LFM2**](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models?ref=producthunt)|✅ Yes|End-to-End|Built-in LLM (E2E) · Native tool calling|Windows / Linux|
|[**Mini-omni2**](https://github.com/gpt-omni/mini-omni2)|✅ MIT|End-to-End|Built-in Qwen2 LLM · Tool calling TBD|Cross-platform|
|[**Pipecat**](https://github.com/pipecat-ai/pipecat)|✅ Yes|Cascading|Pluggable LLM, ASR, TTS · Explicit tool-calling support|Windows / macOS / Linux / iOS / Android|
**Notes**
* “Cascading” = modular ASR → LLM → TTS
* “E2E” = end-to-end LLM that directly maps speech-to-speech | 2025-10-04T10:19:27 | https://github.com/tleyden/awesome-llm-speech-to-speech | tleyden | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nxqabe | false | null | t3_1nxqabe | /r/LocalLLaMA/comments/1nxqabe/awesome_local_llm_speechtospeech_models_frameworks/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'sv2a4YrAVR9g08yOa0AOrBrIqErmKKSQAYIjNTCx_eI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sv2a4YrAVR9g08yOa0AOrBrIqErmKKSQAYIjNTCx_eI.png?width=108&crop=smart&auto=webp&s=60f46e4e5af1ec77ab94075fd92923b4dd50687c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sv2a4YrAVR9g08yOa0AOrBrIqErmKKSQAYIjNTCx_eI.png?width=216&crop=smart&auto=webp&s=fc0a7b3759e3b915cc15700e7e048b5fca6dd983', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sv2a4YrAVR9g08yOa0AOrBrIqErmKKSQAYIjNTCx_eI.png?width=320&crop=smart&auto=webp&s=d19f737501f38d167c844fa970b3c992e06bafa2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sv2a4YrAVR9g08yOa0AOrBrIqErmKKSQAYIjNTCx_eI.png?width=640&crop=smart&auto=webp&s=d1770300a09702157f67115321b5768d01848a3a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sv2a4YrAVR9g08yOa0AOrBrIqErmKKSQAYIjNTCx_eI.png?width=960&crop=smart&auto=webp&s=3fb3f71cee9c2c203129b442ec7a27609036871d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sv2a4YrAVR9g08yOa0AOrBrIqErmKKSQAYIjNTCx_eI.png?width=1080&crop=smart&auto=webp&s=44de035b891b12f11642a1664e567a18699e32bb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sv2a4YrAVR9g08yOa0AOrBrIqErmKKSQAYIjNTCx_eI.png?auto=webp&s=b77d43479c5153407e61f93c06ce3c5a9ef597a4', 'width': 1200}, 'variants': {}}]} | |
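For anyone new to the "cascading" column above: the whole pattern is three stages glued together. A minimal runnable skeleton, with stub engines standing in for real ASR/LLM/TTS backends (swap in Whisper, a llama.cpp server, Piper, etc. — the stubs are purely illustrative):

```python
def cascade(audio_in: bytes, asr, llm, tts) -> bytes:
    """Cascading speech-to-speech: ASR -> LLM -> TTS."""
    transcript = asr(audio_in)   # speech -> text
    reply = llm(transcript)      # text -> text
    return tts(reply)            # text -> speech

# Stubs so the wiring is runnable; real engines plug in with the same shape.
asr = lambda audio: audio.decode("utf-8")
llm = lambda text: f"echo: {text}"
tts = lambda text: text.encode("utf-8")

print(cascade(b"hello", asr, llm, tts))  # → b'echo: hello'
```

End-to-end models collapse the three callables into one, which is where the latency win comes from.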
[Tool Release] ollama_server_manager: A Simple Web UI to Manage Models Across Multiple Local Ollama Servers | 1 | I was struggling to keep track of models across my **three local Ollama servers** using only the command line. It got tedious! 😥
To solve this, I created `ollama_server_manager`, a simple tool that provides a **web-based dashboard** showing which models are present on which server.
Since I only use this on my **private, trusted network**, I kept it intentionally **simple with no authentication** required.
Hope others find this useful for managing their local setups!
[`https://github.com/GhennadiiMir/ollama_server_manager`](https://github.com/GhennadiiMir/ollama_server_manager) | 2025-10-04T10:18:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nxq9gu/tool_release_ollama_server_manager_a_simple_web/ | GennadiiM | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxq9gu | false | null | t3_1nxq9gu | /r/LocalLLaMA/comments/1nxq9gu/tool_release_ollama_server_manager_a_simple_web/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EFL5nViDRcVoXOzP2ObIaPHFmE5ciabBzEB5hl1LYxM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EFL5nViDRcVoXOzP2ObIaPHFmE5ciabBzEB5hl1LYxM.png?width=108&crop=smart&auto=webp&s=196c3ba627a4755b8996801c9a8413b7c920cd6e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EFL5nViDRcVoXOzP2ObIaPHFmE5ciabBzEB5hl1LYxM.png?width=216&crop=smart&auto=webp&s=0110904277c2d66b69139ec640a0b1ca7cc1ea00', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EFL5nViDRcVoXOzP2ObIaPHFmE5ciabBzEB5hl1LYxM.png?width=320&crop=smart&auto=webp&s=1128307a05c31850611ee6e6b81e3b721e000dec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EFL5nViDRcVoXOzP2ObIaPHFmE5ciabBzEB5hl1LYxM.png?width=640&crop=smart&auto=webp&s=43c0e6b1f9cb9bb52de2a2dddbff98252cd6bdde', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EFL5nViDRcVoXOzP2ObIaPHFmE5ciabBzEB5hl1LYxM.png?width=960&crop=smart&auto=webp&s=b89e065d35cebabeecaf0daa576010e97a9fadde', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EFL5nViDRcVoXOzP2ObIaPHFmE5ciabBzEB5hl1LYxM.png?width=1080&crop=smart&auto=webp&s=91068eae4a8dc7c7317e23d835a5e77e60efb1e7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EFL5nViDRcVoXOzP2ObIaPHFmE5ciabBzEB5hl1LYxM.png?auto=webp&s=bf44448684b9efa4d703ea9465bd12c8cba5035b', 'width': 1200}, 'variants': {}}]} |
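For anyone who wants the same overview from a script, Ollama's `/api/tags` endpoint lists the models on a server, and a few lines of Python can build the model-to-server matrix. The fetch function is injectable so the merging logic is testable; `models_on` is a thin illustrative wrapper, not part of the tool above:

```python
import json
import urllib.request

def models_on(server: str) -> list[str]:
    """Model names reported by one Ollama server's /api/tags endpoint."""
    with urllib.request.urlopen(f"{server}/api/tags") as resp:
        return [m["name"] for m in json.load(resp)["models"]]

def model_matrix(servers: list[str], fetch=models_on) -> dict[str, list[str]]:
    """Map each model name to the list of servers that host it."""
    matrix: dict[str, list[str]] = {}
    for server in servers:
        for name in fetch(server):
            matrix.setdefault(name, []).append(server)
    return matrix
```

With three servers, `model_matrix(["http://host1:11434", ...])` immediately shows which models are missing where.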
Best lightweight low resources LLM. | 4 | Best lightweight, low-resource, no-GPU LLM to run locally on a VM.
7B or less. RAM: only 8GB; CPU: 4 cores @ 2.5GHz.
Working on a cloud-environment troubleshooting tool. Will be using it for low-level coding and finding issues related to Kubernetes, Docker, Kafka, databases, and Linux systems.
Qwen2.5 coder 7b,
Codellama 7b,
phi 3 mini
or deepseek coder v2 lite ? | 2025-10-04T10:01:59 | https://www.reddit.com/r/LocalLLaMA/comments/1nxq04a/best_lightweight_low_resources_llm/ | Full_University_7232 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxq04a | false | null | t3_1nxq04a | /r/LocalLLaMA/comments/1nxq04a/best_lightweight_low_resources_llm/ | false | false | self | 4 | null |
Can't get Granite 4 maximum context window size... | 3 | Hello,
I'm using Ollama 0.12.3 and OpenWebUI 0.6.32, and I have a rig with 3x 4060 Ti 16GB. I can run 32B models with context sizes that fill up to 48GB of VRAM.
When I'm using granite4:tiny-h, I can set a context of 290000 tokens, which takes 12GB of VRAM, but I get a memory error at 300000 tokens.
With granite4:small-h, I can set a context of 40000 tokens, which takes 30GB of VRAM, but I get a memory error at 50000 tokens.
The error is like: 500: llama runner process has terminated: cudaMalloc failed: out of memory ggml_gallocr_reserve_n: failed to allocate CUDA1 buffer of size 7112647168
Has anyone managed to get the maximum 1,000,000-token context window? | 2025-10-04T08:44:57 | https://www.reddit.com/r/LocalLLaMA/comments/1nxosfs/cant_get_granite_4_maximum_context_window_size/ | Fade78 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxosfs | false | null | t3_1nxosfs | /r/LocalLLaMA/comments/1nxosfs/cant_get_granite_4_maximum_context_window_size/ | false | false | self | 3 | null |
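The OOM here is the KV-cache allocation failing, and that cache grows linearly with context. A back-of-envelope estimator; the layer/head numbers below are placeholders, not Granite 4's real config (its hybrid Mamba design means only the attention layers hold KV at all, which is why such large contexts fit in the first place):

```python
def kv_cache_gb(ctx_tokens: int, attn_layers: int, kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """KV cache size: 2 (K and V) * layers * kv_heads * head_dim * ctx * dtype bytes."""
    return 2 * attn_layers * kv_heads * head_dim * ctx_tokens * bytes_per_elem / 1e9

# Placeholder config: 4 attention layers, 8 KV heads, head_dim 128, fp16 cache
print(round(kv_cache_gb(290_000, 4, 8, 128), 2))  # → 4.75
```

Quantizing the KV cache (e.g. q8_0 instead of fp16) halves `bytes_per_elem` and roughly doubles the context that fits in the same VRAM.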
Best local model for open code? | 16 | Which LLM gives you satisfaction for tasks under open code with 12Go vram ? | 2025-10-04T08:00:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nxo3ao/best_local_model_for_open_code/ | LastCulture3768 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxo3ao | false | null | t3_1nxo3ao | /r/LocalLLaMA/comments/1nxo3ao/best_local_model_for_open_code/ | false | false | self | 16 | null |
Whats your PC tech spec? | 2 | Hey guys.
I'm just wondering what is your PC/Laptop tech spec and what local LLM are you guys using?
How's the experience? | 2025-10-04T07:51:05 | https://www.reddit.com/r/LocalLLaMA/comments/1nxny65/whats_your_pc_tech_spec/ | luckypanda95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxny65 | false | null | t3_1nxny65 | /r/LocalLLaMA/comments/1nxny65/whats_your_pc_tech_spec/ | false | false | self | 2 | null |
Dual DGX Spark for ~150 Users RAG? | 0 | Hey all,
with the official order options of the DGX Spark starting soon, I'd like to get some reflection by those actually running a larger scale system for many users.
Currently we only have a few OpenAI licenses in our company. We have about 10k Documents from our QM system we'd like to ingest into a RAG system to be able to:
1. Answer questions quickly, streamline onboarding of new employees
2. Assist in the creation of new documents (SOPs, Reports etc)
3. Some agentic usage (down the road)
4. Some coding (small IT department, not main focus, we can put those on a chatgpt subscription if necessary)
Up until now I have only used some local AI on my personal rig (Threadripper + 3090) to get a better understanding of the possibilities.
I could see multiple options for this going forward:
1. Procure a beefy server system with 4x RTX 6000 Blackwell and reasonable RAM+Cores. (\~40k€ plusminus a little)
2. Start out small with 2x DGX Spark (\~8k€) and if needed, add a 200Gbit switch (\~10k) and extend by adding more systems
As this is the first system introduced in the company, I expect moderate parallel usage at first, maybe 10 users at times.
I've not yet used distributed inference in llama.cpp/vLLM; from what I read, network bandwidth is going to be the bottleneck in most setups, which can be ignored in the DGX Spark case because we would have an interconnect near-matching memory speed.
Please let me know your opinion on this, happy to learn from those who are in a similar situation. | 2025-10-04T07:42:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nxnt4j/dual_dgx_spark_for_150_users_rag/ | streppelchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxnt4j | false | null | t3_1nxnt4j | /r/LocalLLaMA/comments/1nxnt4j/dual_dgx_spark_for_150_users_rag/ | false | false | self | 0 | null |
Where do you think we'll be at for home inference in 2 years? | 25 | I suppose we'll never see any big price reduction jumps? Especially with inflation rising globally?
I'd love to be able to have a home SOTA tier model for under $15k. Like GLM 4.6, etc. But wouldn't we all? | 2025-10-04T07:41:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nxnszs/where_do_you_think_well_be_at_for_home_inference/ | TumbleweedDeep825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxnszs | false | null | t3_1nxnszs | /r/LocalLLaMA/comments/1nxnszs/where_do_you_think_well_be_at_for_home_inference/ | false | false | self | 25 | null |
best coding model under 40b parameters? preferably moe | 8 | preferably moe | 2025-10-04T07:37:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nxnq77/best_coding_model_under_40b_parameters_preferably/ | Odd-Ordinary-5922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxnq77 | false | null | t3_1nxnq77 | /r/LocalLLaMA/comments/1nxnq77/best_coding_model_under_40b_parameters_preferably/ | false | false | self | 8 | null |
Multi-Agent RAG Workflows in RAGFlow, Slower, No Better Results? Looking for Guidance | 3 | Hey everyone,
I'm currently working on upgrading our RAG system at my company and could really use some input.
I’m restricted to using RAGFlow, and my original hypothesis was that implementing a multi-agent architecture would yield better performance and more accurate results. However, what I’ve observed is that:
* Multi-agent workflows are significantly slower than the single-agent setup
* The quality of the results hasn’t improved noticeably
I'm trying to figure out whether the issue is with the way I’ve structured the workflows, or if multi-agent is simply not worth the overhead in this context.
# Here's what I’ve built so far (summarized):
# Workflow 1: Graph-Based RAG
1. **Begin** — Entry point for user query
2. **Document Processing** (Claude 3.7 Sonnet)
* Chunks KB docs
* Preps data for graph
* Retrieval component integrated
3. **Graph Construction** (Claude 3.7 Sonnet)
* Builds knowledge graph (entities + relations)
4. **Graph Query Agent** (Claude 3.7 Sonnet)
* Traverses graph to answer query
5. **Enhanced Response** (Claude 3.7 Sonnet)
* Synthesizes final response + citations
6. **Output** — Sends to user
# Workflow 2: Deep Research with Web + KB Split
1. **Begin**
2. **Deep Research Agent** (Claude 3.7 Sonnet)
* Orchestrates the flow, splits task
3. **Web Search Specialist** (GPT-4o Mini)
* Uses TavilySearch for current info
4. **Retrieval Agent** (Claude 3.7 Sonnet)
* Searches internal KB
5. **Research Synthesizer** (GPT-4o Mini)
* Merges findings, dedupes, resolves conflicts
6. **Response**
# Workflow 3: Query Decomposition + QA + Validation
1. **Begin**
2. **Query Decomposer** (GPT-4o Mini)
* Splits complex questions into sub-queries
3. **Docs QA Agent** (Claude 3.7 Sonnet)
* Answers each sub-query using vector search or DuckDuckGo fallback
4. **Validator** (GPT-4o Mini)
* Checks answer quality and may re-trigger retrieval
5. **Message Output**
# The Problem:
Despite the added complexity, these setups:
* Don’t provide significantly better accuracy or relevance over a simpler single-agent RAG pipeline
* Add latency due to multiple agents and transitions
* Might be over-engineered for our use case
# My Questions:
* Has anyone successfully gotten better performance (quality or speed) with multi-agent setups in RAGFlow?
* Are there best practices for optimizing multi-agent architectures in RAG pipelines?
* Would simplifying back to a single-agent + hybrid retrieval model make more sense in most business use cases?
Any advice, pointers to good design patterns, or even “yeah, don’t overthink it” is appreciated.
Thanks in advance! | 2025-10-04T07:30:21 | https://www.reddit.com/r/LocalLLaMA/comments/1nxnmc2/multiagent_rag_workflows_in_ragflow_slower_no/ | void_brambora | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxnmc2 | false | null | t3_1nxnmc2 | /r/LocalLLaMA/comments/1nxnmc2/multiagent_rag_workflows_in_ragflow_slower_no/ | false | false | self | 3 | null |
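On the last question: if the fallback is a single agent with hybrid retrieval, the merge step between vector and keyword results is often reciprocal rank fusion. A minimal sketch (plain Python, not a RAGFlow API):

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several best-first ranked lists of doc IDs into one ranking.
    k is the damping constant from the original RRF paper; 60 is customary."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]   # from embedding search
keyword_hits = ["doc_b", "doc_d", "doc_a"]  # from BM25/keyword search
print(reciprocal_rank_fusion([vector_hits, keyword_hits]))
# → ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

Latency-wise this costs microseconds, unlike an extra LLM hop, which is usually where multi-agent pipelines lose their time budget.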
I used llama 3.3 70b to build an AI tool | 0 | So I'm Arush, a 14 y/o from India. I recently built NexNotes AI. It has all the features needed for studying and research. Just upload any type of file and get:
question papers
Mindmaps and diagrams (custom)
Quizzes with customized difficulty
Vocab extraction
Humanized text
handwritten text
It can solve your questions
flashcards
grammar correction
you even get progress and dashboard
A complete study plan and even a summary- all for free. So you can say it is a true distraction free one stop ai powered study solution. The good thing is everything can be customized.
Search nexnotes ai on Google | 2025-10-04T07:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nxnfuy/i_used_llama_33_70b_to_build_an_ai_tool/ | Turbulent_Orchid2829 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxnfuy | false | null | t3_1nxnfuy | /r/LocalLLaMA/comments/1nxnfuy/i_used_llama_33_70b_to_build_an_ai_tool/ | false | false | self | 0 | null |
We’re doing community safety research on open models — 30 sec survey for users | 0 | Hey all! Have you ever explored or considered downloading open-weight models on Hugging Face? If yes, could you help us out by filling this quick 30 second survey? : [forms.gle/RETjXHvtqtLmMhPM7](http://forms.gle/RETjXHvtqtLmMhPM7)
Context : We're planning to run some safety research on top HuggingFace models and publish results in a public space, which would hopefully be helpful to HuggingFace users who want to pick which model to use for their purposes. | 2025-10-04T07:07:43 | https://www.reddit.com/r/LocalLLaMA/comments/1nxn9f5/were_doing_community_safety_research_on_open/ | whalefal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxn9f5 | false | null | t3_1nxn9f5 | /r/LocalLLaMA/comments/1nxn9f5/were_doing_community_safety_research_on_open/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'O7x0pJ3DB4MPKopPFw_3N6NQpgYS8aI9CVbqiCi8OZU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/O7x0pJ3DB4MPKopPFw_3N6NQpgYS8aI9CVbqiCi8OZU.png?width=108&crop=smart&auto=webp&s=e05c589f84900204fa3faa95b293f254d7ae2603', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/O7x0pJ3DB4MPKopPFw_3N6NQpgYS8aI9CVbqiCi8OZU.png?width=216&crop=smart&auto=webp&s=95751324e242af8a3432cb4d825ca86f1ab9eb2f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/O7x0pJ3DB4MPKopPFw_3N6NQpgYS8aI9CVbqiCi8OZU.png?width=320&crop=smart&auto=webp&s=76cb0458de20e52ec216dd7af5e307d891669700', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/O7x0pJ3DB4MPKopPFw_3N6NQpgYS8aI9CVbqiCi8OZU.png?width=640&crop=smart&auto=webp&s=45089eb6de23055c87b3d387e150bd3db34c22f1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/O7x0pJ3DB4MPKopPFw_3N6NQpgYS8aI9CVbqiCi8OZU.png?width=960&crop=smart&auto=webp&s=b6ac12499df8d20f88cfa5fa60d5359206aa8b79', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/O7x0pJ3DB4MPKopPFw_3N6NQpgYS8aI9CVbqiCi8OZU.png?width=1080&crop=smart&auto=webp&s=775803f7b53af95c07e08749652534859da71f9a', 'width': 1080}], 'source': {'height': 630, 'url': 
'https://external-preview.redd.it/O7x0pJ3DB4MPKopPFw_3N6NQpgYS8aI9CVbqiCi8OZU.png?auto=webp&s=a54d62ab30cfa3ce46df35d993b3ef43ee969e75', 'width': 1200}, 'variants': {}}]} |
how you configure the $3/mo GLM Coding Plan with Droid CLI | 1 | [removed] | 2025-10-04T07:02:42 | https://www.reddit.com/r/LocalLLaMA/comments/1nxn6gi/how_you_configure_the_3mo_glm_coding_plan_with/ | Impressive-Olive8372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxn6gi | false | null | t3_1nxn6gi | /r/LocalLLaMA/comments/1nxn6gi/how_you_configure_the_3mo_glm_coding_plan_with/ | false | false | self | 1 | null |
how you configure the $3/mo GLM Coding Plan with Droid CLI | 1 | [removed] | 2025-10-04T06:59:42 | https://www.reddit.com/r/LocalLLaMA/comments/1nxn4gy/how_you_configure_the_3mo_glm_coding_plan_with/ | Impressive-Olive8372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxn4gy | false | null | t3_1nxn4gy | /r/LocalLLaMA/comments/1nxn4gy/how_you_configure_the_3mo_glm_coding_plan_with/ | false | false | self | 1 | null |
Help me build this monster, for: learning, experiments (text, image, video, audio), training and fine-tuning (small-scale LoRA) | 0 | I'm trying to finalize a configuration for a local stack (see the title).
My shortlist so far is below; I want to know your experiences, suggestions, tips, etc.
CPU: AMD Threadripper Pro 9955WX 16-core
Chipset: WRX90, WRX90E SAGE SE - Asus Pro WS
Mem: 128GB ECC DDR5 5600Mhz/6000
GPU: RTX 6000 PRO Blackwell Workstation 600W
PSU: Superflower 2000W PSU
Storage: 4x 2TB NVMe + 2TB for OS/Boot
Liquid cooling + Standard ATX
Target: to learn and experiment with text models (fine-tuning and inference), image model fine-tuning and SD inference with ComfyUI, and video models; also to learn and experiment with TTS models and eventually build one myself. As of now I use Metal with 24GB RAM, realized I need Nvidia, and am now jumping onto this.
Looking for opinions
P.S. At least the plan now is to get this rig and eventually add another RTX 6000 GPU and more memory. I also want to experiment with quantization. | 2025-10-04T06:58:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nxn3jy/help_me_build_this_monster_for_learning/ | s2k4ever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxn3jy | false | null | t3_1nxn3jy | /r/LocalLLaMA/comments/1nxn3jy/help_me_build_this_monster_for_learning/ | false | false | self | 0 | null |
The Missing Link between the Transformer and Models of the Brain | 9 | A group of scientists at Pathway claim to have found a missing link: 'the massively parallel post-Transformer reasoning architecture which opens the door to generalization over time'.
Link to the paper : https://arxiv.org/abs/2509.26507 | 2025-10-04T06:35:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nxmq9b/the_missing_link_between_the_transformer_and/ | ramzeez88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxmq9b | false | null | t3_1nxmq9b | /r/LocalLLaMA/comments/1nxmq9b/the_missing_link_between_the_transformer_and/ | false | false | self | 9 | null |
Where can I find Sonnet 4.5 at a lower price? | 0 | I’m interested in using Sonnet 4.5 daily, but I’m not sure about Claude’s limits. Is it more cost-effective to purchase Cursor, pay as you go on OpenRouter, or buy the Claude subscription itself? Using OpenRouter gives me the option to switch to GLM 4.6 for easier tasks.
Has anyone attempted to determine the most economical option? | 2025-10-04T06:31:19 | https://www.reddit.com/r/LocalLLaMA/comments/1nxmny6/where_can_i_find_sonnet_45_at_a_lower_price/ | farnoud | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxmny6 | false | null | t3_1nxmny6 | /r/LocalLLaMA/comments/1nxmny6/where_can_i_find_sonnet_45_at_a_lower_price/ | false | false | self | 0 | null |
Any concrete drawbacks from using Vercel's AI SDK? | 4 | I have started multiple projects using AI / Agent frameworks and have always been disappointed in the end. My current project I am implementing everything from scratch and I am much happier, I know where all the state exists and I do not have to spend hours trying to find how to extract some data from the agent loop which I need.
However today I was researching what I would deem to be "good" open source code in this area to try and find some interesting abstractions and noticed that nearly all the projects\[0\]\[1\] are using Vercel's AI SDK for connecting to LLMs. Right now I have my own internal interface and am implementing a few providers (ollama, openai, anthropic).
So I wanted to see what the view of HN is: am I being stupid, or is the AI SDK truly a good bit of abstraction that I should leverage to save time?
\- \[0\] [https://github.com/sst/opencode](https://github.com/sst/opencode)
\- \[1\] [https://github.com/VoltAgent/voltagent](https://github.com/VoltAgent/voltagent) | 2025-10-04T06:18:17 | https://www.reddit.com/r/LocalLLaMA/comments/1nxmfz7/any_concrete_drawbacks_from_using_vercels_ai_sdk/ | Suspicious_Dress_350 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxmfz7 | false | null | t3_1nxmfz7 | /r/LocalLLaMA/comments/1nxmfz7/any_concrete_drawbacks_from_using_vercels_ai_sdk/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'D6shUXR7p16Sn_OlqzgZA1AwgNS_FrdvcuB9JbWIZck', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D6shUXR7p16Sn_OlqzgZA1AwgNS_FrdvcuB9JbWIZck.png?width=108&crop=smart&auto=webp&s=36ed0b4002b1a2b49a655eeafa068505f04ce430', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/D6shUXR7p16Sn_OlqzgZA1AwgNS_FrdvcuB9JbWIZck.png?width=216&crop=smart&auto=webp&s=db2cfb629dddf85318dc71df0dfab461f7696b24', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/D6shUXR7p16Sn_OlqzgZA1AwgNS_FrdvcuB9JbWIZck.png?width=320&crop=smart&auto=webp&s=77dc67f41dd684c2c25420d465113f9845df50c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/D6shUXR7p16Sn_OlqzgZA1AwgNS_FrdvcuB9JbWIZck.png?width=640&crop=smart&auto=webp&s=479e5f315d095e27d9187f665de543e6e446c92f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/D6shUXR7p16Sn_OlqzgZA1AwgNS_FrdvcuB9JbWIZck.png?width=960&crop=smart&auto=webp&s=8deb8ea32a91045b0c8bdeecfdcdf67527a368eb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/D6shUXR7p16Sn_OlqzgZA1AwgNS_FrdvcuB9JbWIZck.png?width=1080&crop=smart&auto=webp&s=677ecb57d926827ef7042c74c62ef2e6c8167eba', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/D6shUXR7p16Sn_OlqzgZA1AwgNS_FrdvcuB9JbWIZck.png?auto=webp&s=0631635f16399d425bbec7db0d277bb6076fe0e3', 'width': 1280}, 'variants': {}}]} |
Does anyone know how to fix this? | 5 | I just download LM studio, and I cannot click "get started" ?? | 2025-10-04T06:13:21 | Neck_Aware | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nxmd1h | false | null | t3_1nxmd1h | /r/LocalLLaMA/comments/1nxmd1h/does_anyone_know_how_to_fix_this/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': 'wyh4ilqce1tf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/wyh4ilqce1tf1.png?width=108&crop=smart&auto=webp&s=0439ee74af7d920844b79ed5657612c0f24af063', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/wyh4ilqce1tf1.png?width=216&crop=smart&auto=webp&s=0203c32f7334941174c3590a45fed561aff97a19', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/wyh4ilqce1tf1.png?width=320&crop=smart&auto=webp&s=9ce1b3a1a957d1bda61241f2bee1bc211b9ef5c0', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/wyh4ilqce1tf1.png?width=640&crop=smart&auto=webp&s=f4c58a34bf6d208f55705c5e309fc095728b6b6b', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/wyh4ilqce1tf1.png?width=960&crop=smart&auto=webp&s=c5d6d8b884e662ade9551a25f82eb5954d5839be', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/wyh4ilqce1tf1.png?width=1080&crop=smart&auto=webp&s=5b2ff1bdbdccf9890944905103bf7b5e966b2a82', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/wyh4ilqce1tf1.png?auto=webp&s=51c7f30f05b619fcbb75a3c2142c57d99b5ba533', 'width': 1600}, 'variants': {}}]} | |
Unsure which ollama model to use? Here's a tool I built to help | 4 | Hey everyone,
I’m fairly new to working with local LLMs, and like many, I wondered which model(s) I should use. To help answer that, I put together a tool that:
* Automates running multiple models on custom prompts
* Outputs everything into a clean, easy-to-read HTML report
* Lets you quickly compare results side by side
While there might be similar tools out there, I wanted something lightweight and straightforward for my own workflow. I figured I’d share in case others find it useful too.
I’d love any constructive feedback—whether you think this fills a gap, how it could be improved, or if you know of alternatives I should check out.
Thanks!
[https://github.com/Spectral-Knight-Ops/local-llm-evaluator](https://github.com/Spectral-Knight-Ops/local-llm-evaluator) | 2025-10-04T05:36:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nxlq7t/unsure_which_ollama_model_to_use_heres_a_tool_i/ | h3xzur7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxlq7t | false | null | t3_1nxlq7t | /r/LocalLLaMA/comments/1nxlq7t/unsure_which_ollama_model_to_use_heres_a_tool_i/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'PUdEn3C3OYW3pnzdzVV9G-ao4Q8h-K_s24CF_LxayCU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PUdEn3C3OYW3pnzdzVV9G-ao4Q8h-K_s24CF_LxayCU.png?width=108&crop=smart&auto=webp&s=38a0361ba6042272be3a54cc2e1849556ef61101', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PUdEn3C3OYW3pnzdzVV9G-ao4Q8h-K_s24CF_LxayCU.png?width=216&crop=smart&auto=webp&s=b5d41b254546f2a0c129f10795596fffb643d7ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PUdEn3C3OYW3pnzdzVV9G-ao4Q8h-K_s24CF_LxayCU.png?width=320&crop=smart&auto=webp&s=def88a4408d253df301e44274e8382826bd7ff10', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PUdEn3C3OYW3pnzdzVV9G-ao4Q8h-K_s24CF_LxayCU.png?width=640&crop=smart&auto=webp&s=3f3bb734dab835fc69b220c30aa658e113f43daf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PUdEn3C3OYW3pnzdzVV9G-ao4Q8h-K_s24CF_LxayCU.png?width=960&crop=smart&auto=webp&s=2f595ac492d773424b27ca1328639b3efd00dd59', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PUdEn3C3OYW3pnzdzVV9G-ao4Q8h-K_s24CF_LxayCU.png?width=1080&crop=smart&auto=webp&s=b2666441bfb9bb9cc62139275289454e25d41ca4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PUdEn3C3OYW3pnzdzVV9G-ao4Q8h-K_s24CF_LxayCU.png?auto=webp&s=b44d84e5b1d3eb8d0936ee2cb29498f8011e690b', 'width': 1200}, 'variants': {}}]} |
Looking for CTO, I'm a content creator (750k+) I scaled apps to 1.5M downloads. VCs are now waiting for product + team | 0 | I’m a theology grad and content creator with 750K+ followers (30M likes, 14M views). I’ve also scaled and sold apps to 1.5M+ organic downloads before.
Right now, I’m building an AI-powered spiritual companion. Think Hallow (valued $400M+ for Catholics), but built for a massive, underserved segment of Christianity.
I’m looking for a Founding CTO / Technical Co-Founder to lead product + engineering. Ideally, someone with experience in:
* Mobile development (iOS/Android, Flutter/React Native)
* AI/LLM integration (OpenAI or similar)
* Backend architecture & scaling
Line of business: FaithTech / Consumer SaaS (subscription-based)
Location: Remote
Commitment: Full-time co-founder
Equity: Meaningful stake (negotiable based on experience & commitment)
I already have early VC interest (pre-seed firms ready to commit, just waiting for team + product). This is a chance to build a category-defining platform in faith-tech at the ground floor.
If you're interested, send me a chat or message request and let's talk. | 2025-10-04T05:27:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nxlksh/looking_for_cto_im_a_content_creator_750k_i/ | Samonji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxlksh | false | null | t3_1nxlksh | /r/LocalLLaMA/comments/1nxlksh/looking_for_cto_im_a_content_creator_750k_i/ | false | false | self | 0 | null |
Best practices for building a context-aware chatbot with a small dataset and a custom context pipeline | 2 | I’m building a chatbot for my research project that helps participants understand charts. The chatbot runs on a React website.
My goal is to make the experience feel like ChatGPT in the browser: users upload a chart image and dataset file, then ask questions about it naturally in a conversational way. I want the chatbot to be context-aware while staying fast. Since each user only has a single session, I don’t need long-term memory across sessions.
**Current design:**
* Model: `gpt-5`
* For each API call, I send:
* The system prompt defining the assistant’s role
* The chart image (PNG, \~50KB, base64-encoded) and dataset (CSV, \~15KB)
* The last 10 conversation turns, plus a summary of older context (the summary is generated by the model), including the user's message in this round
This works, but responses usually take \~6 seconds, which feels slower and less smooth than chatting directly with ChatGPT in the browser.
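The per-request payload described above can be sketched roughly like this. The field names follow the OpenAI-style multimodal chat format; the summarization step and the helper name are placeholders, not the project's actual code:

```python
import base64

def build_messages(system_prompt, chart_png_bytes, dataset_csv,
                   summary, recent_turns, user_message):
    """Assemble one chat request: system prompt, attached files,
    a summary of older context, and the last 10 turns."""
    image_b64 = base64.b64encode(chart_png_bytes).decode("ascii")
    messages = [{"role": "system", "content": system_prompt}]
    # The chart and dataset are re-attached on every call -- this is what
    # makes each request heavy (~50 KB image + ~15 KB CSV each time).
    messages.append({
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text", "text": f"Dataset (CSV):\n{dataset_csv}"},
        ],
    })
    if summary:
        messages.append({"role": "system",
                         "content": f"Summary of earlier conversation: {summary}"})
    messages.extend(recent_turns[-10:])  # last 10 turns verbatim
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_messages("You explain charts.", b"\x89PNG...", "x,y\n1,2",
                      "User asked about axis labels.",
                      [{"role": "user", "content": "hi"},
                       {"role": "assistant", "content": "hello"}],
                      "What does the trend show?")
print(len(msgs))  # system + files + summary + 2 turns + question = 6
```

One way to cut the ~6-second latency is to avoid re-sending the files every turn, e.g. by extracting the chart/CSV content into text once at upload time and caching it server-side.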
**Questions:**
* Is this design considered best practice for my use case?
* Is sending the files with every request what slows things down (responses take \~6 seconds)? If so, is there a way to make the experience smoother?
* Do I need a framework like LangChain to improve this, or is my current design sufficient?
Any advice, examples, or best-practice patterns would be greatly appreciated! | 2025-10-04T05:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nxljg8/best_practices_for_building_a_contextaware/ | EnvironmentalWork812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxljg8 | false | null | t3_1nxljg8 | /r/LocalLLaMA/comments/1nxljg8/best_practices_for_building_a_contextaware/ | false | false | self | 2 | null |
Regarding artificial intelligence, does llama have an online server free? | 0 | Something to avoid overloading and damaging a laptop with only 8GB of RAM. I want one to use online that is uncensored and without limitations, and that lets me build a data library as an online reference | 2025-10-04T05:20:22 | https://www.reddit.com/r/LocalLLaMA/comments/1nxlgmb/regarding_artificial_intelligence_does_llama_have/ | Godi22kam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxlgmb | false | null | t3_1nxlgmb | /r/LocalLLaMA/comments/1nxlgmb/regarding_artificial_intelligence_does_llama_have/ | false | false | self | 0 | null |
Help with local LLM setup for vibe coding | 4 | Hi all, I'm interested to setup a local model to vibe code with cline in VS code and would like some recommendations for the most optimum setup.
I have 2 PCs:
1. Main rig - AMD 5700X3D + 32GB 3200MHz + AMD RX6750XT 12GB VRAM
2. Old rig - AMD 5600 + 64GB 2133MHz + GT710 for display only
I'm considering between upgrading my main rig to a RTX 3090 or replacing my old rig's RAM to 64GB 3200MHz from 2133MHz.
From the posts I have read from this sub, the recommended model for coding with the setup I have seems to be Qwen3-Coder-30B-A3B-Instruct-GGUF Q4_K_M.
Question:
1. Which upgrade would provide the best experience?
2. Is Qwen 3 Coder Instruct at Q4 the best model for local vibe coding? Or could you recommend some other models that I could try out?
Thank you very much in advance! | 2025-10-04T05:11:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nxlbgx/help_with_local_llm_setup_for_vibe_coding/ | Diligent-Cut-899 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxlbgx | false | null | t3_1nxlbgx | /r/LocalLLaMA/comments/1nxlbgx/help_with_local_llm_setup_for_vibe_coding/ | false | false | self | 4 | null |
5 second scraper you can use to make apis using Granite-4.0-micro unsloth 4bit | 4 | Ive tried making a web scraper in the past that rely on llms but have never really succeeded as the low parameter models would struggle with following instructions or would hallucinate and the larger parameter models would just be too large to run holding all the context. But today I decided to try the micro version of the new Granite 4.0 collection and built this in a couple of hours. The models prompt following is insane for the size of the model and its literally quantized to 4 bit using 2gb of vram and works fine. Absolutely insane | 2025-10-04T05:02:12 | https://v.redd.it/izb5dw5q01tf1 | Odd-Ordinary-5922 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nxl5iw | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/izb5dw5q01tf1/DASHPlaylist.mpd?a=1762146149%2CNGRlNzViYjNmZTM0NmE2ZjFlZGYzZDExN2JjMzA2OGFjOWI4NmEyOGNhMzQ3YzY0OWU3ZWQ3YzQyNTUxMDEwMg%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/izb5dw5q01tf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/izb5dw5q01tf1/HLSPlaylist.m3u8?a=1762146149%2CMjI5N2YxMTZmMmI1ODFhODk2YWFmNzYxZDYwNDUzZmU2OTNmYzliNGJkOWIzNmZlNTI3NmMzMWU5MDZmOGFhNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/izb5dw5q01tf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1nxl5iw | /r/LocalLLaMA/comments/1nxl5iw/5_second_scraper_you_can_use_to_make_apis_using/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'eGJhZzZ3NXEwMXRmMc7wnKrW8YZJ1JZ6MUrOz6iXOJGd9J7VbFIb3kcWITYX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eGJhZzZ3NXEwMXRmMc7wnKrW8YZJ1JZ6MUrOz6iXOJGd9J7VbFIb3kcWITYX.png?width=108&crop=smart&format=pjpg&auto=webp&s=795ccc2ea150ae6871907e6b5457a2ffe1779ed2', 'width': 108}, {'height': 121, 'url': 
'https://external-preview.redd.it/eGJhZzZ3NXEwMXRmMc7wnKrW8YZJ1JZ6MUrOz6iXOJGd9J7VbFIb3kcWITYX.png?width=216&crop=smart&format=pjpg&auto=webp&s=887d821ac75d9bb5f8023977b580c44f4db5ad60', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eGJhZzZ3NXEwMXRmMc7wnKrW8YZJ1JZ6MUrOz6iXOJGd9J7VbFIb3kcWITYX.png?width=320&crop=smart&format=pjpg&auto=webp&s=6db62f74c91f8ac2e2e66f4da4682cd86012628e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eGJhZzZ3NXEwMXRmMc7wnKrW8YZJ1JZ6MUrOz6iXOJGd9J7VbFIb3kcWITYX.png?width=640&crop=smart&format=pjpg&auto=webp&s=3f0e7be785ba3b7c97261db5fb7cfb0ac0ebef1e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eGJhZzZ3NXEwMXRmMc7wnKrW8YZJ1JZ6MUrOz6iXOJGd9J7VbFIb3kcWITYX.png?width=960&crop=smart&format=pjpg&auto=webp&s=dcd457e96f0e6672475404fe525bac1e4cfaedc6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eGJhZzZ3NXEwMXRmMc7wnKrW8YZJ1JZ6MUrOz6iXOJGd9J7VbFIb3kcWITYX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=82e18bf13b51d695bee6193c075b23dfc85ca5c6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eGJhZzZ3NXEwMXRmMc7wnKrW8YZJ1JZ6MUrOz6iXOJGd9J7VbFIb3kcWITYX.png?format=pjpg&auto=webp&s=ac588739cddb6d81bbc1865fe96caeb425440fb7', 'width': 1920}, 'variants': {}}]} | |
Everyone thinks theyre good at prompt engineering.. most arent. I learned that the hard way | 0 | I honestly thought I was good at writing prompts. like compared to my friends I felt I gave better instructions to llms and got better answers. but obviously they all thought they were better too.
so we made this dumb little competition.. everyone gets the same problem statement, 2 hours to write a prompt, then we run it against a dataset. whoever passes the most test cases wins. if theres a tie, we check who used less tokens.
And yeah.. I didnt win. came 3rd out of 5. that kinda killed the ego.
then we were like why stop here, lets see who is actually the best in the world at this. so we built luna prompts.. basically a place where you compete with others on prompt engineering.
this weeks contest is live, only 2 days left. if you think you’re good at prompting then join and prove it. if not maybe stop calling yourself a prompt engineer lol
[https://lunaprompts.com/contests](https://lunaprompts.com/contests) | 2025-10-04T05:00:53 | https://www.reddit.com/r/LocalLLaMA/comments/1nxl4ll/everyone_thinks_theyre_good_at_prompt_engineering/ | Comfortable_Device50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxl4ll | false | null | t3_1nxl4ll | /r/LocalLLaMA/comments/1nxl4ll/everyone_thinks_theyre_good_at_prompt_engineering/ | false | false | self | 0 | null |
Front end generation model recommendations | 3 | Looking for models that are capable of designing sites using vanilla js and html. React, svelte ,bootstrap even jquery is a plus. | 2025-10-04T04:51:47 | https://www.reddit.com/r/LocalLLaMA/comments/1nxkymy/front_end_generation_model_recommendations/ | AggravatingGiraffe46 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxkymy | false | null | t3_1nxkymy | /r/LocalLLaMA/comments/1nxkymy/front_end_generation_model_recommendations/ | false | false | self | 3 | null |
I was constantly getting rate limit errors from AI APIs, so I built a universal Python error handler and retry decorator to solve it. Sharing it for free. | 1 |
Like many of you, I've been building some complex projects that make a lot of calls to various AI APIs. One of the biggest frustrations has been the constant stream of RateLimitError, APITimeoutError, or random 502 Bad Gateway issues that would crash my entire workflow.
Writing try...except blocks for every single API call was getting messy. So, I decided to build a proper, reusable solution and I thought it might be useful for others in this community.
Here’s what it does:
It’s a single Python file that gives you a set of structured exception classes (InvalidAPICredentialsError, ModelOverloadError, etc.) and a @retry_on_transient_error decorator.
You just add that decorator above any function that makes an API call. If the call fails with a temporary issue (like a rate limit), the decorator automatically waits with an exponential backoff and retries it for you. It's made my own projects incredibly stable.
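The behavior described (automatic retries with exponential backoff on transient failures) can be sketched like this. The decorator and exception names mirror the post, but the actual module's implementation may differ:

```python
import time
import random
import functools

class TransientAPIError(Exception):
    """Base class for retryable failures (rate limits, timeouts, 5xx)."""

def retry_on_transient_error(max_retries=5, base_delay=1.0):
    """Retry the wrapped call with exponential backoff plus jitter."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except TransientAPIError:
                    if attempt == max_retries - 1:
                        raise
                    # 1s, 2s, 4s, ... plus jitter to avoid thundering herds
                    time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
        return wrapper
    return decorator

calls = {"n": 0}

@retry_on_transient_error(max_retries=4, base_delay=0.01)
def flaky_api_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientAPIError("429 rate limit")
    return "ok"

result = flaky_api_call()
print(result)  # succeeds on the third attempt -> "ok"
```

Non-retryable errors (e.g. invalid credentials) should be raised as a different exception class so they fail fast instead of burning retries.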
Here is the complete, single-file module, open-sourced on GitHub Gist. You can copy and paste it directly into your projects.
[Grab it here](https://gist.github.com/Anwaar-cyber/fc22620da0292fc67370b440e0d08fb4)
I hope some of you find this useful!
Why I Built This:
This resilience module was actually a critical component I had to build for a much larger project I've been working on: a fully autonomous, multi-agent ebook generation system called Hyperion Core.
The system works like a simulated publishing house. For example, to guarantee quality, it uses something I call the "Aegis Pipeline":
A "Chapter Director" agent writes a draft.
The draft is sent to a multi-agent "Editorial Desk" of QA agents (Continuity_Supervisor, Prose_Auditor, etc.) who act with a cognitive firewall—they can't see each other's work.
They generate a "defect log," which is then sent back to the Chapter Director with a new, high-stakes prompt: "Perform a holistic rewrite. A rewrite that fails to fix every logged defect is a catastrophic failure."
This "Convergent Revision Loop" continues until the chapter achieves a target quality score.
This kind of complex, multi-step agentic workflow generates a ton of API calls, which is why having the robust error handler was non-negotiable.
I've just packaged the complete, standalone Python project for Hyperion Core—including the full agent library, the Aegis Pipeline, the marketing suite generator, and a comprehensive user manual. I've put it up on Gumroad to fund the development of a full-fledged public app.
If you're interested in the deep-dive architecture of multi-agent systems and want to see how all the pieces fit together, you can check it out here:
[Check it here](https://mustafawave307.gumroad.com/l/mzudpl)
Happy to answer any questions about the error handler or the agentic pipeline design!
| 2025-10-04T04:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nxkwsi/i_was_constantly_getting_rate_limit_errors_from/ | WeakPianist9445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxkwsi | false | null | t3_1nxkwsi | /r/LocalLLaMA/comments/1nxkwsi/i_was_constantly_getting_rate_limit_errors_from/ | false | false | self | 1 | null |
Distributed Inference over wifi with 8x 3090 egpus performance | 136 | Hello,
I smoked some really good weed recently and decided it was a good idea to buy more 3090s.
Naturally I didn't want to use a real build with server parts, put 8 3090s in one build on home depot racks? No thanks I'm lazy.
I got 4 3090 egpus from a guy on facebook. He's cool, sold them to me for 650 each with the egpu.
[https://www.gigabyte.com/Graphics-Card/GV-N3090IXEB-24GD](https://www.gigabyte.com/Graphics-Card/GV-N3090IXEB-24GD) <--- these are the EGPUs
Then I got 4 other random 3090s of different brands and put them in 3 spare Pcs I have lying around.
Node #1
* Z390 Prime
* 9900K
* 64gb of DDR4
* 3090 (duh)
* 850W.
Node #2
* MSI Unify ITX z690
* 12400K
* 64gb of DDR5
* 3090 (duh)
* 650W
* 2X 3090 EGPUs attached
Node #3 (Host)
* Z790 Maximus Hero
* 13700k
* 64gb of DDR5
* 1200W PSU
* 2x 3090s
* 2x 3090 EGPUs attached
I ran all of it over vLLM with Ray to distribute the load. It's connected over Wi-Fi; I got a good router, so speed is only about 10% slower than ethernet from across the house. For now it's all pipeline parallel until the parts arrive, then I'll do a 2-node system with 4 GPUs each.
[https://rog.asus.com/us/networking/rog-rapture-gt-axe16000-model/](https://rog.asus.com/us/networking/rog-rapture-gt-axe16000-model/) <--- my router(s).
Results:
At 128k context limit running GLM 4.5 Air AWQ 8 bit (that's Q8 for you gguf folks)
I get 5500 tokens/s prompt processing and 24 tokens/s generation for a ~50k token prompt.
It works great over Roo.
Ray has a very annoying overhead cost, so just assume that each system has about 1GB less VRAM. Running all my nodes headless helps a lot too.
| 2025-10-04T03:57:59 | https://www.reddit.com/r/LocalLLaMA/comments/1nxjzbn/distributed_inference_over_wifi_with_8x_3090/ | Only_Situation_4713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxjzbn | false | null | t3_1nxjzbn | /r/LocalLLaMA/comments/1nxjzbn/distributed_inference_over_wifi_with_8x_3090/ | false | false | nsfw | 136 | {'enabled': False, 'images': [{'id': 'ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=108&crop=smart&auto=webp&s=0f61397cba0724be0a2d6320f914959224ff90ce', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=216&crop=smart&auto=webp&s=454e5a4f2332551c011d8c4209869eef3c768bc1', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=320&crop=smart&auto=webp&s=8a715246fafa8298dfc2e9abc0e8eeee4325ecac', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=640&crop=smart&auto=webp&s=e04a1f30e4238b373c20395892dc37c10161f530', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=960&crop=smart&auto=webp&s=3759f04081612d414677d2f62d366333fe956844', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?auto=webp&s=42c7ad82cd93df3cdfd2ffaf1c34de0babfcc324', 'width': 1000}, 'variants': {'nsfw': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fe02099d732cba590e2403fc9c265886e0c99bee', 'width': 108}, {'height': 216, 'url': 
'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=9eda3d4c93a1c7d8cff946ac5617b393b86eb91e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=be18699daf7202bf107832742da4a93b9601ff90', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=b54ccfb3ed18aa21cda499aa206fdce442058403', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=fe9edb017155ba9e7514466065daef79363947dc', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?blur=40&format=pjpg&auto=webp&s=b71da02d687d680a0f1a7264dcf6734b994020a7', 'width': 1000}}, 'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fe02099d732cba590e2403fc9c265886e0c99bee', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=9eda3d4c93a1c7d8cff946ac5617b393b86eb91e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=be18699daf7202bf107832742da4a93b9601ff90', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=b54ccfb3ed18aa21cda499aa206fdce442058403', 'width': 640}, {'height': 960, 'url': 
'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=fe9edb017155ba9e7514466065daef79363947dc', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/ys4iv7u_l2dMri4phpZy-9Px0-5rQm6yE7pFfmOqawQ.png?blur=40&format=pjpg&auto=webp&s=b71da02d687d680a0f1a7264dcf6734b994020a7', 'width': 1000}}}}]} |
Behold, the jankiest setup ever | 81 | I plan to get an open test bench, after I get my second P40 in a week or two (which will fit nicely on the other side of that fan)
Performance is as shown: Qwen 3 32B Q4 at 5.9 T/sec
The fan is one of those stupidly powerful Delta electronics server fans that pushes out like 250cfm, so I needed to add a PWM controller to slow it down, and it wouldn't run without that giant capacitor, and it's powered by a Li-ion battery instead of the PSU (for now)
It's not stable at all, the whole system BSODs if a program tries to query the GPU while something else is using it (such as if I try to run GPUZ while LM Studio is running), but if only 1 thing touches the GPU at a time, it works
It has a Ryzen 5 5500GT, 16GB of DDR4, a 1000w PSU, a 512GB SSD, and 1 Nvidia P40 (soon to be 2) | 2025-10-04T03:31:15 | https://www.reddit.com/gallery/1nxjhnj | T-VIRUS999 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nxjhnj | false | null | t3_1nxjhnj | /r/LocalLLaMA/comments/1nxjhnj/behold_the_jankiest_setup_ever/ | false | false | 81 | null | |
GitHub - huawei-csl/SINQ: Welcome to the official repository of SINQ! A novel, fast and high-quality quantization method designed to make any Large Language Model smaller while preserving accuracy. | 66 | 2025-10-04T03:30:30 | https://github.com/huawei-csl/SINQ | Aiochedolor | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nxjh4c | false | null | t3_1nxjh4c | /r/LocalLLaMA/comments/1nxjh4c/github_huaweicslsinq_welcome_to_the_official/ | false | false | default | 66 | {'enabled': False, 'images': [{'id': 'yP0CnjxBFJCXTVacHixSvy4H_F7MTnOAVtKcV29Lggk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yP0CnjxBFJCXTVacHixSvy4H_F7MTnOAVtKcV29Lggk.png?width=108&crop=smart&auto=webp&s=70a7ffdefbd7cbdad2ea677cc3cb2be538382a9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yP0CnjxBFJCXTVacHixSvy4H_F7MTnOAVtKcV29Lggk.png?width=216&crop=smart&auto=webp&s=146d5390b542b3cc3b83eea9fdb6b88cecb70d6a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yP0CnjxBFJCXTVacHixSvy4H_F7MTnOAVtKcV29Lggk.png?width=320&crop=smart&auto=webp&s=6578b10460c07673524bcbf9dd133b14311687b7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yP0CnjxBFJCXTVacHixSvy4H_F7MTnOAVtKcV29Lggk.png?width=640&crop=smart&auto=webp&s=c413349140863192c8413b0f7b8e7f32ec48822c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yP0CnjxBFJCXTVacHixSvy4H_F7MTnOAVtKcV29Lggk.png?width=960&crop=smart&auto=webp&s=e0d1aeff83ce8a534e3045766d201c34fea01471', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yP0CnjxBFJCXTVacHixSvy4H_F7MTnOAVtKcV29Lggk.png?width=1080&crop=smart&auto=webp&s=e0bfd32c96e74d7a9586a4f3096604afbdca91b9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yP0CnjxBFJCXTVacHixSvy4H_F7MTnOAVtKcV29Lggk.png?auto=webp&s=8da4cd1f3dcc8ed45c27b5e44a9b966e2c8b4f6a', 'width': 1200}, 'variants': {}}]} | |
Help me fish... | 2 | There is a saying... give a man a fish feed him for a day... teach him to fish, feed him for life.
Anyone have a particularly clear and up-to-date guide on compiling llama.cpp for Nvidia in Ubuntu/Pop?
I found this so far: [https://truelogic.org/wordpress/2024/08/14/installing-llama-cpp-on-ubuntu-with-an-nvidia-gpu/](https://truelogic.org/wordpress/2024/08/14/installing-llama-cpp-on-ubuntu-with-an-nvidia-gpu/)
I'm tired of waiting for the latest changes from llama.cpp making its way to the software I've been using. I am annoyed enough I might even take on vllm. :/
What is everyone using these days for inferencing GGUFs, and how far off from llama.cpp are you? | 2025-10-04T03:24:18 | https://www.reddit.com/r/LocalLLaMA/comments/1nxjcqo/help_me_fish/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxjcqo | false | null | t3_1nxjcqo | /r/LocalLLaMA/comments/1nxjcqo/help_me_fish/ | false | false | self | 2 | null |
Where’s the lip reading ai? | 18 | I’m sure there are some projects out there making real progress on this, but given how quickly tech has advanced in recent years, I’m honestly surprised nothing has surfaced with strong accuracy in converting video to transcript purely through lip reading.
From what I’ve seen, personalized models trained on specific individuals do quite well with front facing footage, but where’s the model that can take any video and give a reasonably accurate idea of what was said? Putting privacy concerns aside for a second, it feels like we should already be 80 percent of the way there. With the amount of spoken video data that already has transcripts, a solid model paired with a standard LLM technique could fill in the blanks with high confidence.
If that doesn’t exist yet, let’s make it, I’m down to even spin it up as a DAO, which is something I’ve wanted to experiment with.
Bonus question: what historical videos would be the most fascinating or valuable to finally understand what was said on camera? | 2025-10-04T02:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/1nxibik/wheres_the_lip_reading_ai/ | Trustingmeerkat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxibik | false | null | t3_1nxibik | /r/LocalLLaMA/comments/1nxibik/wheres_the_lip_reading_ai/ | false | false | self | 18 | null |
Why do private companies release open source models? | 136 | I love open source models. I feel they are an alternative for general knowledge, and since I started in this world, I stopped paying for subscriptions and started running models locally.
However, I don't understand the business model of companies like OpenAI launching an open source model.
How do they make money by launching an open source model?
Isn't it counterproductive to their subscription model?
Thank you, and forgive my ignorance. | 2025-10-04T02:26:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nxi82t/why_do_private_companies_release_open_source/ | desudesu15 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxi82t | false | null | t3_1nxi82t | /r/LocalLLaMA/comments/1nxi82t/why_do_private_companies_release_open_source/ | false | false | self | 136 | null |
Built an AI Trading Desk Quant - any thoughts? | 0 | [CILA chat](https://preview.redd.it/b05spx2s60tf1.png?width=1938&format=png&auto=webp&s=56e43f8108d46e853688580f2d93f57ab2b41e5b)
Hey everyone - we’ve been building an AI trading desk quant named **CILA-v1** (*Computational Intelligence for Low-frequency Assets*).
We launched on **Oct 1**, and the response blew us away: **100 users joined in the first 3 hours**, and traffic spiked so fast we had to scale up servers on the fly 😅
# What is CILA?
CILA-v1 is a multi-agent AI chat model capable of tasks traditionally done by professional desk quants, including:
* Custom no-code backtesting
* Portfolio construction & optimization
* Macroeconomic and fundamental analysis & news scanning
* “Magic Buttons” for pre-defined use cases (chart prediction, stock screening, etc.)
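CILA's internals aren't public, so purely as an illustration of what the "portfolio construction & optimization" item means in practice, here's a minimal volatility-targeting sketch with made-up volatilities and correlations (not CILA's actual method or real asset parameters):

```python
import numpy as np

# Hypothetical annualized vols and correlations for three assets (illustrative only).
vols = np.array([0.45, 0.80, 0.55])
corr = np.array([[1.0, 0.5, 0.6],
                 [0.5, 1.0, 0.5],
                 [0.6, 0.5, 1.0]])
cov = np.outer(vols, vols) * corr          # covariance matrix

# Simple inverse-volatility weights, normalized to sum to 1.
w = (1.0 / vols) / (1.0 / vols).sum()

# Measure portfolio vol, then scale exposure so it lands on a 20% target.
port_vol = float(np.sqrt(w @ cov @ w))
scaled_w = w * (0.20 / port_vol)

print([round(x, 3) for x in scaled_w])
```

A real optimizer would also handle expected returns, constraints, and estimation error — this just shows the "targeting 20% vol" mechanic.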
A few example prompts you can try:
>“Backtest TSLA next-day return when it’s up 5%+”
“Build optimal portfolio with NVDA, SMCI, AMD targeting 20% vol”
"Test trend-following strategy on MAG7 stocks"
“What’s the economic trend in the US?”
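To make the first prompt concrete: a conditional next-day-return backtest boils down to filtering a return series on the condition and averaging the day after. Here's a toy sketch on synthetic data (random returns, not TSLA, and not whatever CILA runs under the hood):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily returns standing in for a real price history.
rets = rng.normal(0.001, 0.04, size=2000)

# Condition: the day closed up 5%+; outcome: the following day's return.
mask = rets[:-1] >= 0.05
next_day = rets[1:][mask]

print(len(next_day), round(float(next_day.mean()), 5))
```

On real data you'd pull adjusted closes, compute pct-change, and worry about survivorship and look-ahead bias — which is exactly the bookkeeping a no-code backtester is meant to hide.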
Built by a seasoned trader from industry and a top-cited academic researcher, CILA is designed to bring institutional-grade trading insights into the hands of all investors.
# Coverage (v1)
CILA currently covers:
* 🇺🇸 **US stocks & ETFs**
* 🇺🇸 **US mutual funds**
* 🇨🇳 **China A-shares**
* 🇭🇰 **HK stocks**
* Selected **futures & FX** (less stable)
We recommend using **Yahoo** or **Bloomberg** tickers in your questions - CILA recognizes those best.
# Try it out
We’re excited to see how CILA helps you uncover new ideas, test strategies faster, and level up your market edge. 🙂
**Try CILA:** [https://www.duality.institute/chat](https://www.duality.institute/chat)
**Full docs:** [https://www.duality.institute/cila](https://www.duality.institute/cila)
**Join our discord:** [https://discord.gg/qKk6YK5R](https://discord.gg/qKk6YK5R)
In our channel, you may ask CILA-related questions, talk about the markets, or just make new friends. We also post CILA-generated research every day.
**Disclaimer: like any AI, CILA may make mistakes, this is NOT investment advice.** | 2025-10-04T02:11:28 | https://www.reddit.com/r/LocalLLaMA/comments/1nxhx5t/built_an_ai_trading_desk_quant_any_thoughts/ | Significant-Town-457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nxhx5t | false | null | t3_1nxhx5t | /r/LocalLLaMA/comments/1nxhx5t/built_an_ai_trading_desk_quant_any_thoughts/ | false | false | 0 | null |