Dataset schema (columns): title (string) | score (int) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int) | preview (string)
Apertus: a fully open, transparent, multilingual language model | 97 | 2025-09-02T09:29:24 | https://actu.epfl.ch/news/apertus-a-fully-open-transparent-multilingual-lang/ | Stock-Variation-2237 | actu.epfl.ch |
How to set up and run Llama3 on Windows | 0 | How to set up and run Llama3 on Windows? A step-by-step guide to running this revolutionary AI model on Windows.
| 2025-09-02T09:22:00 | https://www.altimetrik.com/blog/how-to-set-up-and-run-llama3-on-windows | Illustrious-Row-5131 | altimetrik.com |
My weekend project accidentally beat Claude Code - multi-agent coder now #12 on Stanford's TerminalBench 😅 | 861 | 👋 Hitting a million brick walls with multi-turn RL training isn't fun, so I thought I would try something new to climb Stanford's leaderboard for now! So this weekend I was just tinkering with multi-agent systems and... somehow ended up beating Claude Code on Stanford's TerminalBench leaderboard (#12)! Genuinely didn't expect this - started as a fun experiment and ended up with something that works surprisingly well.
**What I did:**
Built a multi-agent AI system with three specialised agents:
* **Orchestrator**: The brain - never touches code, just delegates and coordinates
* **Explorer agents**: Read- and run-only investigators that gather intel
* **Coder agents**: The ones who actually implement stuff
Created a "Context Store" which can be thought of as persistent memory that lets agents share their discoveries.
Tested on TerminalBench with both Claude Sonnet-4 and Qwen3-Coder-480B.
**Key results:**
* Orchestrator + Sonnet-4: **36.0% success rate** (#12 on leaderboard, ahead of Claude Code!)
* Orchestrator + Qwen-3-Coder: 19.25% success rate
* Sonnet-4 consumed 93.2M tokens vs Qwen's 14.7M tokens to complete all tasks!
* The orchestrator's explicit task delegation + intelligent context sharing between subagents seems to be the secret sauce
**(Kind of) Technical details:**
* The orchestrator can't read/write code directly - this forces proper delegation patterns and strategic planning
* Each agent gets precise instructions about what "knowledge artifacts" to return, these artifacts are then stored, and can be provided to future subagents upon launch.
* Adaptive trust calibration: simple tasks = high autonomy, complex tasks = iterative decomposition
* Each agent has its own set of tools it can use.
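For concreteness, here's a minimal sketch of what a context store like this might look like. The real implementation lives in the repo; the names here (`ContextStore`, `Artifact`, `put`, `bundle`) are illustrative, not the repo's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Artifact:
    author: str   # which subagent produced it, e.g. "explorer-1"
    summary: str  # the "knowledge artifact" the orchestrator asked for


@dataclass
class ContextStore:
    artifacts: dict = field(default_factory=dict)

    def put(self, key: str, artifact: Artifact) -> None:
        self.artifacts[key] = artifact

    def bundle(self, keys: list) -> str:
        # Context string handed to a newly launched subagent
        return "\n".join(f"[{k}] {self.artifacts[k].summary}" for k in keys)


store = ContextStore()
store.put("repo-layout", Artifact("explorer-1", "Tests live in tests/; entry point is main.py"))
print(store.bundle(["repo-layout"]))  # -> [repo-layout] Tests live in tests/; entry point is main.py
```

The key design point is that discoveries persist across subagent lifetimes: an explorer's findings survive after it exits, so a coder launched later gets exactly the slices of context the orchestrator selects.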
**More details:**
My Github repo has all the code, system messages, and way more technical details if you're interested!
⭐️ [**Orchestrator repo - all code open sourced!**](https://github.com/Danau5tin/multi-agent-coding-system)
Thanks for reading!
Dan
(Evaluated on the excellent [TerminalBench](https://www.tbench.ai/) benchmark by Stanford & Laude Institute)
| 2025-09-02T09:17:01 | https://www.reddit.com/gallery/1n6epwv | DanAiTuning |
New Open LLM from Switzerland "Apertus", 40%+ training data is non English | 268 | [https://ethz.ch/en/news-and-events/eth-news/news/2025/09/press-release-apertus-a-fully-open-transparent-multilingual-language-model.html](https://ethz.ch/en/news-and-events/eth-news/news/2025/09/press-release-apertus-a-fully-open-transparent-multilingual-language-model.html) | 2025-09-02T09:03:53 | EnnioEvo | self.LocalLLaMA |
Is it possible to achieve more speed on CPU without AVX? | 1 | [removed] | 2025-09-02T09:01:07 | Ult1mateXPHP | self.LocalLLaMA |
New research on scaling multi-agent systems using a Semi-Centralized pattern | 0 | Most multi-agent systems today rely on a central planner LLM.
It breaks tasks into subtasks, feeds context to workers, and controls the flow.
The problem this creates is bottlenecks. The system can only scale to what a single planner can handle, and information is lost since workers can’t talk directly.
This paper presents a new approach: Anemoi, a semi-centralized multi-agent system built on the agent-to-agent communication MCP server from Coral Protocol.
How it works:
- A lightweight planner drafts the initial plan
- Specialist agents communicate directly
- They refine, monitor, and self-correct in real time
Performance impact:
- Efficiency: Cuts token overhead by avoiding redundant context passing
- Reliability: Direct communication reduces single-point failures
- Scalability: Add new worker agents and domains seamlessly, while keeping performance strong. Deploy at scale under tighter resource budgets with Anemoi.
We validated this on GAIA, a benchmark of complex, real-world multi-step tasks (web search, multimodal file processing, coding).
With a small LLM planner (GPT-4.1-mini) and worker agents powered by GPT-4o (same as OWL), Anemoi reached 52.73% accuracy, outperforming the strongest open-source baseline, OWL (43.63%), by +9.09% under identical conditions.
Even with a lightweight planner, Anemoi sustains strong performance.
Links to the paper in the comments! | 2025-09-02T08:52:27 | https://v.redd.it/d9465e36tpmf1 | omnisvosscio |
How do I make GLM stop making up stuff? | 8 | I've been using GLM 4.5 lately via Openrouter and I must say it's a very good model. It works pretty well with tool calling and I like how it structures its output when it comes to web search (like quotes from the pages and reference links printed out by default).
The issue I'm having is that it has a very high tendency to make things up when it comes to summarising content.
For example, yesterday I asked it to compare two products, focusing on real-world reviews from trusted sources, and it did a brilliant job of searching relevant sources, collecting the most relevant data and quoting the results.
At the end though, when it was time to generate a summarised overview with a recommendation, it threw in an unbelievably realistic, highly credible yet completely invented review. I realised this because for one of the products it mentioned that "trusted reviews praised XYZ feature", but that specific product doesn't even have such a feature.
Another example: I asked what the most profitable passive income businesses in GTA Online are for my son, and I asked it to use Reddit as the source. Again, a brilliant job collecting the sources, and then it produced a detailed list of businesses and a comprehensive action plan on how to achieve the desired result; it even included step-by-step instructions on what to buy, in what order, etc.
The problem is that the steps provided cannot be followed because it was referencing things that don't even exist in the game.
As a last experiment I asked it to generate the performance review for one of my employees. I have all peer reviews and performance data stored in a ChromaDB, and it generated one of the best performance reviews I have ever seen... by using peer reviews for other people and completely inventing metrics and results for projects that we never did.
I guess you get the gist by now: it's brilliant at searching and extracting sources, but when it comes to summarising it goes absolutely bananas and makes up a lot of stuff. That's a shame, because it's still a good model imo, but having to double-check every single word of the output kind of invalidates the whole point of using it. | 2025-09-02T08:08:50 | AxelFooley | self.LocalLLaMA |
qwen 3 coder model is literally the worst model, I'm saying it again: this model is not even good, another trash release by Alibaba's rich, spoiled children | 0 | GLM 4.5's company was blacklisted by the US government and they're still shipping aura-level models one by one.
| 2025-09-02T08:04:03 | Select_Dream634 | i.redd.it |
silly-v0.2 - an RL-heavy, chat-style roleplay model | 46 | Has a unique tone. "Pretty good for a 12b," say most people.
This is mostly a proof-of-concept, showcasing that POLAR reward models can be very useful for "out of distribution" tasks like roleplaying. If you're working on your own roleplay finetunes, please consider using POLAR! | 2025-09-02T07:48:17 | https://huggingface.co/wave-on-discord/silly-v0.2 | Abject-Huckleberry13 | huggingface.co |
Lenovo P720 Workstation? | 5 | Lenovo P720 Thinkstation
2x Xeon Platinum 8160 (2x 24 cores)
NVIDIA RTX 4000 8GB (1xRTX 4000)
256 GB ECC DDR4 RAM 2666 MHz
512 GB NVME SSD + 1TB HDD
900 W Gold+
It's old, but does it make sense to purchase this used? I plan to run medium-size models and RAG on lots of documents. | 2025-09-02T07:43:46 | OverfitMode666 | self.LocalLLaMA |
Hiding the thinking process from the response text | 0 | I just downloaded and tested the new gpt-oss model in my OpenWebUI application, and I must say I'm quite impressed by its ability to follow my instructions. However, it always opens its answer with the keyword 'analysis', as in:
analysisThe user asks: "How do I control..."
and then the real answer follows immediately after 'assistantfinal' in a similar way. Hiding the thinking process by simply prompting the LLM to do so doesn't seem to be an option, so I suspect this is rooted deeper somewhere.
Has anyone else fought the same problem?
It almost looks like it's set up for me to parse the answer however I'd want with some simple Python code, but I don't know how I would pass the response text to a code block and then directly to the chat interface. | 2025-09-02T07:17:33 | RangingBloyster | self.LocalLLaMA |
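For what it's worth, if the markers really are as literal as they appear, a tiny post-processing filter is one workaround. This is only a sketch assuming the response arrives as plain text with exactly those 'analysis'/'assistantfinal' markers, not OpenWebUI's actual pipeline API:

```python
def strip_reasoning(raw: str) -> str:
    """Return only the final answer from a raw 'analysis...assistantfinal...' response."""
    marker = "assistantfinal"
    if marker in raw:
        # Everything after the marker is the user-facing answer
        return raw.split(marker, 1)[1].strip()
    if raw.startswith("analysis"):
        return ""  # reasoning only, no final channel: nothing safe to show
    return raw.strip()


raw = 'analysisThe user asks: "How do I control..." I should answer plainly.assistantfinalYou can control it by...'
print(strip_reasoning(raw))  # -> You can control it by...
```

A filter like this would sit between the model output and the chat interface; with streaming, you'd instead buffer until the marker appears and only forward tokens after it.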
I built a free Structured Prompt Builder (with local library + Gemini optimization) because other tools are bloated & paywalled | 49 | Hey folks,
I want to share something I’ve been building out of frustration with the current “prompt builder” tools floating around. Most of them are either:
* Locked behind paywalls
* Bloated with features I don’t need
* Or just plain confusing to use
So I made my own: **Structured Prompt Builder**. It’s **100% free, runs entirely in your browser, no sign‑up, no backend, no tracking**. Everything is stored locally (in `localStorage`).
Link :: [structured-prompt-builder.vercel.app](https://structured-prompt-builder.vercel.app/)
# Why I built it
* I needed a clean, lightweight way to design prompts without “AI SaaS subscriptions”.
* I wanted to save prompts, reuse them, and share them easily.
* I wanted **Gemini** to *polish* my prompts (fix spelling/grammar/clarity) while keeping the exact structure intact — not generate random extra stuff.
# Key Features
* **Structured fields** → Role, Task, Audience, Style, Tone
* **Add sections** → Constraints, Steps, Inputs (name:value), Few‑shot examples
* **Preview instantly** in **Markdown, JSON, YAML**
* **Copy / Download** any format in one click
* **Import from JSON** to keep your workflow portable
* **Adjust parameters** → Temperature, Top‑p, Max Tokens, Presence & Frequency penalties
* **Local Library** → Save, Load, Duplicate, Delete prompts right in the browser
* **Gemini Optimizer** → Paste your Gemini API key, hit “Generate with Gemini,” and it will:
* Clean up your text
* Preserve the schema/keys
* Return only the format you asked for (Markdown/JSON/YAML)
# What makes it different
* Free. No hidden tiers.
* Offline‑first. Runs in the browser, nothing sent to my server.
* Open & hackable (MIT License).
* Built for *practical* prompt design, not flashy dashboards.
# Sponsor / Support
If you like this project and want it to keep growing (template gallery, cloud sync, maybe integrations), I’d really appreciate sponsorships or any kind of support. Even small help means I can keep it 100% free.
👉 Repo: [github.com/Siddhesh2377/structured-prompt-builder](https://github.com/Siddhesh2377/structured-prompt-builder)
Thanks for reading, and let me know if you try it out or have ideas for improvements! | 2025-09-02T07:14:42 | DarkEngine774 | self.LocalLLaMA |
Building IndieGPU: A software dev's approach to GPU cost optimization(Self-Promo) | 4 | Hey everyone
I'm a software dev (2 YOE) who got tired of watching startup friends complain about AWS GPU costs, so I built [IndieGPU](https://www.indiegpu.com/) \- simple GPU rental for ML training.
**What I discovered about GPU costs:**
* AWS P3.2xlarge (1x V100): $3.06/hour
* For a typical model training session (12-24 hours), that's $36-72 per run
* Small teams training 2-3 models per week → $300-900/month just for compute
**My approach:**
* RTX 4070s with 12GB VRAM
* Transparent hourly pricing
* Docker containers with Jupyter/PyTorch ready in 60 seconds
* Focus on training workloads, not production inference
**Question for the community:** What are the biggest GPU cost pain points you see for small ML/AI teams? Is it the hourly rate, minimum commitments, or something else?
Right now I am trying to find users who could use the platform for their ML/AI training, free for a month, no strings attached.
| 2025-09-02T06:43:00 | rakii6 | self.LocalLLaMA |
올라마 qwen2.5 모델(1.5b, 0.5b 메모리 사용량 문제) | 0 | 컴퓨터 사양 : MacBook Air 15 (M4, 24GB)
아래 두가지 모델의 메모리 사용량 비교를 해봤는데,
\- qwen2.5-coder:1.5b-instruct : 약 1.45GB
\- qwen2.5:0.5b : 약 2.12GB
위 처럼 결과가 나왔어..
나는 당연히 작은 모델이 더 적은 메모리를 사용하는 줄 알았는데 왜인지 오히려 반대네 ㅇㅅㅇ
\[클로드 분석\]
**1) 모델 교체 시**: 이전 모델을 즉시 해제 안 함
**2) 메모리 풀링**: 여러 프로세스가 메모리 공유
\-> 이전 모델의 메모리가 남아있어 정확한 비교가 어려웠을 것이라 판단
그치만 나는 모델 실험 할때마다 이전에 테스트한 모델을 /bye로 끝내고 다시 실행(run)해서 테스트 해본 결과인데..
그리고 M시리즈 특성상 통합 메모리 관리로 인한 추측도 하던데 사실 뭐가 문제인지 모르겠어
다른거 개발하다가 모델 크기가 다른데 메모리 사용량이 똑같아서 이상하다 싶어서 개별 테스트해본건데
결과가 위처럼 나왔는데 혹시 원인이나 해결 방법 아는 분 있을까요? | 2025-09-02T05:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/1n6bbq2/올라마_qwen25_모델15b_05b_메모리_사용량_문제/ | Only_Negotiation_444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n6bbq2 | false | null | t3_1n6bbq2 | /r/LocalLLaMA/comments/1n6bbq2/올라마_qwen25_모델15b_05b_메모리_사용량_문제/ | false | false | self | 0 | null |
[Project Update] From Brittle Scripts to a Resilient, Self-Auditing Architecture: The Evolution of MeganX 3.0 | 0 | Hello again, everyone.
For those who followed my last post, you'll know this is a journey of a solo dev and single father trying to build something meaningful on what many would call obsolete hardware. The core challenge has always been the same: how do you achieve sophisticated AI behavior when you can't just throw more silicon at the problem?
The answer has been to focus relentlessly on the architecture. Today, I'm sharing the next major evolution: **MeganX 3.0**.
The goal was to solve the fundamental flaw of most automation scripts: **they are brittle.** They break on unexpected changes and fail silently. The solution wasn't just better error handling, but to build an agent that could **critically audit its own plans *before* execution.**
---
### **Core Upgrade: The "Pre-emptive Failure Analysis" Loop**
Before any complex, multi-step operation, MeganX 3.0 now runs its own plan through an internal simulation. It identifies potential weaknesses and then refactors the plan to be more robust. This has been a game-changer in two key areas:
* **Resilient Parallel Processing:**
  * **Problem:** My early tests with asynchronous tasks (like parallel data extraction) would consistently fail due to race conditions on my old machine.
  * **Solution:** The self-auditing module flagged this flaw. The repaired plan she proposed involved creating isolated browser contexts for each task, which completely resolved the issue and ensured data integrity.
* **Automated Data Integrity Validation:**
  * **Problem:** How to ensure the agent doesn't act on corrupted or inconsistent data?
  * **Solution:** When tasked with auditing several simulated data reports where one was intentionally "poisoned" with duplicate unique identifiers, she autonomously flagged and isolated the corrupted report without any specific pre-scripted instructions for that error.
---
### **The Impact**
This new architecture isn't just theoretical. In a controlled suite of over 50 test runs, it has led to a **~95% reduction in execution failures and hangs** on complex tasks compared to the previous, more rigid version.
This project continues to be about building a reliable, symbiotic partner for digital tasks, proving that a smarter, more resilient architecture can often be more effective than simply having more powerful hardware.
Thanks for reading. I'm happy to dive into the technical approach in the comments. | 2025-09-02T05:34:29 | AffectionateSpray507 | self.LocalLLaMA |
Policy violation Fee in Grok (Facepalm) | 177 | [https://docs.x.ai/docs/models](https://docs.x.ai/docs/models)
>[Usage Guidelines Violation Fee](https://docs.x.ai/docs/models#usage-guidelines-violation-fee)
>A rare occurrence for most users, when your request is deemed to be in violation of our usage guideline by our system, we will charge a $0.05 per request usage guidelines violation fee.
| 2025-09-02T03:05:03 | Yes_but_I_think | self.LocalLLaMA |
vLLM vs MLIR - TTS Performance | 19 | vLLM leverages nvcc toolchain, MLIR (https://mlir.llvm.org/) transforms IR (Intermediate Representation) to PTX directly for nvidia. MLIR's IR could be transformed to other GPU/CPU instructions via dialects.
From the TTS-1 Technical Report (https://arxiv.org/html/2507.21138v1) of Inworld.ai,
"The inference stack leverages a graph compiler (MAX pipeline) for optimizations like kernel fusion and memory planning, complemented by custom kernels for critical operations like attention and matrix-vector multiplication, which were also developed in Mojo to outperform standard library implementations."
and
"As a result of these combined optimizations, the streaming API delivers the first two seconds of synthesized audio on average 70% faster than a vanilla vLLM-based implementation"
MAX/Mojo uses MLIR.
This looks to be a purpose-specific optimization to squeeze more throughput from GPUs. | 2025-09-02T03:05:00 | phone_radio_tv | i.redd.it |
Why are all AI "Success" posts terrible? | 151 | "Wow look at this!" someone cries, and includes a screenshot/gif from a single-line AI prompt magically producing a working product.
Great, and completely unsurprising given that one-line prompts work exactly like horoscopes - so vague they can't help but satisfy whatever slop gets generated. But whatever, as long as it looks gifable right?
"Build me a todo app that looks nice!"
Congratulations, you just wrote the AI equivalent of "you will face challenges this week." The AI spits out literally anything 'todo adjacent' and you're amazed because technically it worked. Just like horoscopes, its response is written so broadly that the reader finds it somehow fits their expectations.
A real horoscope would say "On Tuesday at 3:47 PM, you will receive a text from someone whose name starts with J about a blue object."
With that in mind, how about someone show me a real workflow:
* Your original concept art/design docs/sketches
* How close you actually got to achieve your original concept/idea
* How many iterations it took
* What didn't work
* The actual prompts you used (all of them)
Unless that AI output was almost EXACTLY what you had in mind from prompt #1, all your "amazing" result proves is that your prompt was horoscope-level vague, and you're apparently OK with mediocrity. | 2025-09-02T02:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/1n68afq/why_are_all_ai_success_posts_terrible/ | Bimbam_tm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n68afq | false | null | t3_1n68afq | /r/LocalLLaMA/comments/1n68afq/why_are_all_ai_success_posts_terrible/ | false | false | self | 151 | null
Dual GPU Setup: RTX 5090 + RTX Pro 6000 (96GB) on MSI X870E MAG Tomahawk – Which Slot Placement? | 4 | I’m building a workstation with two GPUs and want to optimize slot usage for both display + gaming and LLM inference serving and training.
System:
• MSI MAG X870E Tomahawk WiFi (AM5)
• 2× NVMe drives
• RTX 5090 (main display + some inference)
• RTX Pro 6000 96GB – dedicated for larger LLM serving or training
• 1600W Platinum PSU (I have a 20A circuit and I am planning on power limiting the cards to 400W-450W most of the time)
Board layout:
• PCI_E1 (top): PCIe 5.0 x16 (CPU direct)
• PCI_E2 (middle): PCIe 5.0 x4 (not GPU-friendly)
• PCI_E3 (bottom): PCIe 4.0 x16 (x8 with 2 GPUs installed)
⸻
Should I:
1. Put the 5090 in PCI_E1 (Gen5x16) and the Pro 6000 in PCI_E3 (Gen4x8) or
2. Put the Pro 6000 in PCI_E1 (Gen5x16) and the 5090 in PCI_E3 (Gen4x8), with the 5090 still handling the displays.
In either of these setups, does the Gen5 slot also get reduced to Gen5 x8 because of the dual GPUs? From my understanding, Gen5 vs Gen4 makes only a few percent difference for gaming, but I haven't been able to find reliable benchmarks on this kind of setup for LLM inference. I believe that once the models are loaded into VRAM, the Gen5 vs Gen4 comparison is moot anyway; however, wouldn't the actual loading of the models be much slower over Gen4? This is why I was thinking it may be better to use the Gen5 slot for the GPU I'll be loading/unloading models on most frequently (the Pro 6000).
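For a rough sense of scale, a back-of-envelope load-time calculation helps (the link rates below are spec numbers; the 85% efficiency factor and the assumption that storage can feed the link are mine):

```python
# Rough model-load-time estimate over different PCIe links.
# Spec rates (unidirectional): PCIe 5.0 x16 ~63 GB/s, PCIe 4.0 x8 ~15.8 GB/s.

LINKS_GBPS = {
    "gen5_x16": 63.0,
    "gen5_x8": 31.5,   # if the top slot were to bifurcate with two GPUs
    "gen4_x8": 15.8,
}

def load_seconds(model_gb: float, link_gbps: float, efficiency: float = 0.85) -> float:
    """Seconds to push model weights across the link, assuming storage keeps up."""
    return model_gb / (link_gbps * efficiency)

for name, rate in LINKS_GBPS.items():
    print(f"{name}: {load_seconds(80, rate):.1f} s to load an 80 GB model")
```

Even the Gen4 x8 slot moves an 80 GB model in well under ten seconds at these rates, so in practice model loads are usually disk-bound rather than PCIe-bound, which supports the intuition that the comparison is moot once the weights are resident.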
Which way would you prioritize? Anyone running dual NVIDIA cards for AI workloads that has some advice? | 2025-09-02T02:47:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n688uf/dual_gpu_setup_rtx_5090_rtx_pro_6000_96gb_on_msi/ | Its-all-redditive | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n688uf | false | null | t3_1n688uf | /r/LocalLLaMA/comments/1n688uf/dual_gpu_setup_rtx_5090_rtx_pro_6000_96gb_on_msi/ | false | false | self | 4 | null |
Gemma3-4b-it & Google AI Studio | 0 | (General domain) Why does the inference result from loading Gemma3-4b-it locally on a GPU differ from using Gemma3-14b-it in Google AI Studio to predict the emotions of characters in images?
Why is the inference result of using Gemma3-4b-it for local GPU loading different from using Gemma3-14b-it in Google AI Studio for predicting the emotions of characters in images? | 2025-09-02T02:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/1n688ow/gemma34bit_google_ai_studio/ | No-Hand1641 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n688ow | false | null | t3_1n688ow | /r/LocalLLaMA/comments/1n688ow/gemma34bit_google_ai_studio/ | false | false | self | 0 | null
Uncensored image editing and generation? | 5 | I have been enjoying Imagen for image editing a lot, but it is heavily censored, which can be very annoying. What is the best uncensored local image editing and generation tool? | 2025-09-02T02:43:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n6862l/uncensored_image_editing_and_generation/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n6862l | false | null | t3_1n6862l | /r/LocalLLaMA/comments/1n6862l/uncensored_image_editing_and_generation/ | false | false | self | 5 | null
After deepseekv3 I feel like other MoE architectures are old or outdated. Why did Qwen choose a simple MoE architecture with softmax routing and aux loss for their Qwen3 models when better architectures have been around for a while? | 92 | Deepseekv3, R1, and deepseekv3.1 use sigmoid-based routing with aux-loss-free bias gating and shared experts, whereas Qwen3 MoE models use standard softmax routing with aux-loss balancing. The Deepseekv3 architecture is better because it applies a bias to the raw affinity score for balancing. Qwen3 uses an aux loss, which can compete with other training objectives. There are a couple of other features that make the Deepseekv3 architecture better. This honestly makes me wary about even using Qwen3 MoE models! | 2025-09-02T02:38:32 | https://www.reddit.com/r/LocalLLaMA/comments/1n6827e/after_deepseekv3_i_feel_like_other_moe/ | Euphoric_Ad9500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n6827e | false | null | t3_1n6827e | /r/LocalLLaMA/comments/1n6827e/after_deepseekv3_i_feel_like_other_moe/ | false | false | self | 92 | null
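For anyone who wants the difference concretely, here is a toy, single-token sketch of the two router styles (illustrative only, not the actual implementations; real routers operate on batched hidden states, and DeepSeek's bias is updated online from measured expert load):

```python
# Toy comparison of the two router styles on a single token's expert logits.
import math

def softmax_topk(logits, k):
    """Qwen3-style: softmax over expert logits, take top-k.
    Load balancing comes from an auxiliary loss during training (not shown)."""
    exps = [math.exp(x) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return sorted(range(len(logits)), key=lambda i: -probs[i])[:k]

def sigmoid_topk_with_bias(logits, bias, k):
    """DeepSeek-V3-style: per-expert sigmoid affinity plus a per-expert bias
    used only for expert *selection*; during training the bias of overloaded
    experts is nudged down instead of adding a competing loss term."""
    scores = [1.0 / (1.0 + math.exp(-x)) + b for x, b in zip(logits, bias)]
    return sorted(range(len(logits)), key=lambda i: -scores[i])[:k]

logits = [3.0, 1.0, 2.0, 0.0]
print(softmax_topk(logits, 2))                                   # experts [0, 2]
print(sigmoid_topk_with_bias(logits, [-5.0, 0.0, 0.0, 0.0], 2))  # expert 0 penalized -> [2, 1]
```

The bias only reroutes selection; it never changes the gradient signal, which is the design point the post is praising.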
Gemma3-4b-it & Google AI Studio | 1 | (General domain) Why does the inference result from loading Gemma3-4b-it locally on a GPU differ from using Gemma3-14b-it in Google AI Studio to predict the emotions of characters in images?
Why is the inference result of using Gemma3-4b-it for local GPU loading different from using Gemma3-14b-it in Google AI Studio for predicting the emotions of characters in images? | 2025-09-02T02:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/1n680qh/gmma34bit_google_ai_studio/ | No-Hand1641 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n680qh | false | null | t3_1n680qh | /r/LocalLLaMA/comments/1n680qh/gmma34bit_google_ai_studio/ | false | false | self | 1 | null
What are the current best unfiltered 7B models? | 0 | I have Ryzen 7, AMD Radeon, 16GB ram, as the title suggests what are the current best unfiltered 7B models and how do they compare to models like GPT5 and Claude sonnet 4... | 2025-09-02T02:33:55 | https://www.reddit.com/r/LocalLLaMA/comments/1n67yuj/what_are_the_current_best_unfiltered_7b_models/ | FunAd6576 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n67yuj | false | null | t3_1n67yuj | /r/LocalLLaMA/comments/1n67yuj/what_are_the_current_best_unfiltered_7b_models/ | false | false | self | 0 | null |
Is this upgrade worth it? AM4 to AM5 (1061 + 600€) | 0 | My current PC runs anything without problems except LlamaLocal or Llamacpp
I can play Cyberpunk 2077 at 1440p at 80-90 fps on ultra with ray tracing, BUT! When running LlamaCPP with models larger than 12B, my PC seems like a prehistoric potato or a super cheap PC that doesn't work.
When using llamalocal my PC doesn't become low-end... it becomes Ultimate Potato PC lol
Sometimes I feel like I'm stupid and that it's a foolish thing to change my whole PC just for Llamalocal but I'm really curious, it feels good to play with 12b but it's a bit small... I want to try something bigger and fatter XD but it's expensive damn
This is my current PC (AM4):
-Motherboard Asus PRIME B450M-A II (Micro ATX)
-Ryzen 5 5600 no X
-Thermalright Peerless Assassin 120 CPU Air Cooler
-Corsair 32GB RAM DDR4 3200Mhz
-RTX 4070 Super
-SSD 2 (1TB, 2TB), HDD 2 (3TB, 4TB)
-PSU Corsair 750W 80 Gold Plus HD
-Tower ATX Lian-Li O11 Dynamic EVO White
-10 White Fans Ultra RGB Talius
-Ultra Pro Plus HRD 4K LEDS RGB!!!
-Two Benq 1440P Monitors Gaming (144Hz and 165Hz)
-Dual monitor stand
-Gaming Corsair chair T3 Rush
-Cheap Keyboard and mouse :)
I want to upgrade my PC, but I have doubts...
Is the privacy of using a local AI really worth it?
I would use it as a personal assistant, for consultations and help with programs, and also for PR :)
But I don't know if it's worth it... I'm a little sad about getting rid of my current PC. It cost me a lot of money, work, and time to get it, but for local LLMs, it's crap and not much use, only 12B.
I only have one sad GPU because I have a micro ATX motherboard... I bought 128GB of DDR4 RAM but I think I'm going to return it because using RAM is slow and it's better to use VRAM.
I want a PC for mixed use: local LLMs, gaming, and some rendering with Stable Diffusion, Blender, ComfyUI, SillyTavern :)
And I was thinking of upgrading to am5 just for LlamaLocal but playing with llamalocal is very expensive as it requires a lot of VRAM and CUDA
I was thinking of buying the components in the image and I will have AM5 and the ProArt b650 creator motherboard supports 3 x8, x8, x4 GPUs. I also want to put 96GB 6000Mhz RAM and a Ryzen 9700X. The 9800x3d and 7800x3d are very expensive :(
I also want to put a 1600W PSU and ask, will that source be enough for 3 GPUs? In the future, I want to sell my RTX 4070 SUPER, put the RTX 5070 TI SUPER 24GB and 2x3090 + my Ryzen 9700X and all the other components. Will that PSU be enough or should I buy a more powerful one?
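As a sanity check on the PSU question, here is a rough sustained-power budget (the per-component wattages are assumptions, not measurements, and GPU transient spikes can briefly exceed these limits):

```python
# Sustained power budget for a future 3-GPU build on a 1600 W PSU.
gpu_limits_w = [450, 400, 400]   # assumed: one card at 450 W, two power-limited to 400 W
cpu_w = 150                      # Ryzen 9700X under load, generous
platform_w = 100                 # board, RAM, drives, fans, RGB

sustained_w = sum(gpu_limits_w) + cpu_w + platform_w
psu_w = 1600
headroom_w = psu_w - sustained_w
print(f"sustained ~{sustained_w} W, leaving {headroom_w} W of headroom on a {psu_w} W PSU")
```

Roughly 1500 W sustained on a 1600 W unit is technically within spec but leaves almost no transient headroom; power-limiting all three cards closer to 350 W, or stepping up the PSU, would be the safer call. At 120 V, a 20 A circuit allows about 1920 W continuous under the 80% rule (more at 230 V), so the PSU is likely the tighter constraint.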
I'm thinking of upgrading because I found a 3090 for 600€ and I think I could get it for 550€
The total upgrade would be (AM5 + 3090), (1061€ + 600€ = 1661€), my RTX 4070 SUPER + 3090 would have 36GB of VRAM and in the future I would have 48GB VRAM with the RTX 5070 TI SUPER + 3090 and if later I add another 3090 I would have 72GB of VRAM but the third 3090 would be at x4
I want to play with 32B and 70B although they say that 70B is being forgotten and I also want to try GLM 4.5 Air 110B Q4 or Q5 and GPT OSS 120B
I was thinking about NVIDIA Digits AI but it costs 3000€ so I don't know if it's worth it.
Advice? Is it right, is it wrong? What would you do with a 1600€ budget to play with Llama???
Is it better to use a free API? Or is it better to pay monthly to use an AI?
I know there are a lot of questions but I would appreciate it if you could help me with some ❤️ | 2025-09-02T01:39:21 | Spiderboyz1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n66ul1 | false | null | t3_1n66ul1 | /r/LocalLLaMA/comments/1n66ul1/is_this_upgrade_worth_it_am4_to_am5_1061_600/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'vfuhd9dgonmf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vfuhd9dgonmf1.png?width=108&crop=smart&auto=webp&s=89c727385db250fd79f497ca97af6b051eab5ab8', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vfuhd9dgonmf1.png?width=216&crop=smart&auto=webp&s=440c6e6eb824899b248a0839ef28eda784c35a28', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/vfuhd9dgonmf1.png?width=320&crop=smart&auto=webp&s=cc66c88efa3325f0a02a9a333955c6ced6b1c88e', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/vfuhd9dgonmf1.png?width=640&crop=smart&auto=webp&s=7bd7e8fdf342ffa24cd7ea59657a029781ca2e8b', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/vfuhd9dgonmf1.png?width=960&crop=smart&auto=webp&s=2e58e3bcc2847936bcce3d5b6c8697dc8f63204c', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/vfuhd9dgonmf1.png?width=1080&crop=smart&auto=webp&s=a49f21785c5acff7e2dd6f688f569e5cd3ab8b74', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://preview.redd.it/vfuhd9dgonmf1.png?auto=webp&s=0aa3ecbfa727a07d49a05fb12a37ad2d2ce3eea5', 'width': 2560}, 'variants': {}}]} | |
Is this upgrade worth it? AM4 to AM5 | 1 | 2025-09-02T01:27:15 | https://www.reddit.com/r/LocalLLaMA/comments/1n66lj7/is_this_upgrade_worth_it_am4_to_am5/ | Spiderboyz1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n66lj7 | false | null | t3_1n66lj7 | /r/LocalLLaMA/comments/1n66lj7/is_this_upgrade_worth_it_am4_to_am5/ | false | false | 1 | null | ||
GPU credits for students, tinkerers, solopreneurs | 92 | We recognize that GPU grants are often biased. Funded startups, prominent researchers, or other successful individuals are swimming in credits. At the same time, it can be challenging to obtain a GPU if you're just getting started and when you need it the most. We're working to address this issue through our GPU credits program, which is **available to everyone** (also, we're a poor early-stage startup, so we can't offer generous sponsorship programs).
\- Get from $100 to $1000 for your project. Note that our prices are one-quarter of those of Hyperscalers, and we offer consumer GPUs like RTX 4090 / 5090 / Pro 6000 for rent, so you really get $500-$10,000 of GPU value.
\- We pool applications and make decisions every two weeks. We've allocated a $3,000 monthly budget for this program. We will increase it if it proves successful.
\- We're looking for projects that address pressing community problems. It doesn't have to be a significant issue. If you're working on one, please don't forget to refer to the Reddit thread that describes the problem. It helps us refine the product to meet community needs.
\- We'd like to ask you to mention us in your social media post, article, or blog. Having an active social media profile, published articles, or blog posts is a plus. Ultimately, we're a business and aim to promote our product.
[https://www.cloudrift.ai/ai-grant](https://www.cloudrift.ai/ai-grant) | 2025-09-02T01:10:20 | https://www.reddit.com/r/LocalLLaMA/comments/1n6699f/gpu_credits_for_students_tinkerers_solopreneurs/ | NoVibeCoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n6699f | false | null | t3_1n6699f | /r/LocalLLaMA/comments/1n6699f/gpu_credits_for_students_tinkerers_solopreneurs/ | false | false | self | 92 | null |
Manage Chat Interactions from online websites like lmarena.ai | 0 | Several websites such as [lmarena.ai](https://lmarena.ai) and [Le Chat](https://chat.mistral.ai/chat) provide a web interface similar to OpenWebUI for chat interactions without logging in. However, the chat sessions are saved only in the web browser and any fresh installation of the web browser would erase all previous saved chat instances. Require a utility to manage these chats like export them from the browser and import/restore them upon fresh web browser installation or into a self-hosted OpenWebUI database? Even better would be if these chat sessions could be invoked through the OpenWebUI, that way they are always available even offline. | 2025-09-02T00:59:09 | https://www.reddit.com/r/LocalLLaMA/comments/1n660tb/manage_chat_interactions_from_online_websites/ | user0X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n660tb | false | null | t3_1n660tb | /r/LocalLLaMA/comments/1n660tb/manage_chat_interactions_from_online_websites/ | false | false | self | 0 | null |
Good setup for coder LLM under 12GB VRam and 64GB DDR5? | 0 | I basically have a 6700 XT 12gb vram, and 64gb ddr5, running a Ryzen 9600X.
I have tried to use Qwen3-coder-30b but it's uberly slow at 10t/s in LM Studio.
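That number is roughly what bandwidth math predicts once the model spills out of 12 GB of VRAM into system RAM (the figures below are assumptions: ~3B active parameters for the A3B MoE, a ~Q4 quant, and typical dual-channel DDR5 throughput):

```python
# Decode speed for a VRAM+RAM split MoE model is set by the slowest memory pool:
# every generated token must read its active weights once.

def tok_per_sec_ceiling(active_gb: float, pool_bw_gbs: float) -> float:
    return pool_bw_gbs / active_gb

active_params_b = 3.0            # Qwen3-Coder-30B-A3B: ~3B active per token
bytes_per_param = 0.6            # ~Q4 quantization
active_gb = active_params_b * bytes_per_param   # ~1.8 GB touched per token

ddr5_bw = 64.0                   # dual-channel DDR5, rough sustained GB/s
vram_bw = 384.0                  # RX 6700 XT memory bandwidth, GB/s

print(f"RAM-bound ceiling:  ~{tok_per_sec_ceiling(active_gb, ddr5_bw):.0f} tok/s")
print(f"VRAM-bound ceiling: ~{tok_per_sec_ceiling(active_gb, vram_bw):.0f} tok/s")
```

The RAM-bound ceiling of ~35 tok/s degrades further once CPU compute and cache misses are counted, so ~10 tok/s is in the expected band; keeping attention and shared layers in VRAM (llama.cpp exposes per-tensor offload controls for this) usually claws some of it back.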
I mean - I am paying for Copilot 10$ per month, but seeking if there's anything better that I can run locally. | 2025-09-02T00:37:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n65kvo/good_setup_for_coder_llm_under_12gb_vram_and_64gb/ | soyalemujica | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n65kvo | false | null | t3_1n65kvo | /r/LocalLLaMA/comments/1n65kvo/good_setup_for_coder_llm_under_12gb_vram_and_64gb/ | false | false | self | 0 | null |
Best GPU under 2000$ | 0 | I am originally from India but will be traveling from Canada to India. I am thinking of buying a GPU (ideally >24 GB) in Canada, since there are no decent options in India. Which GPU should I buy? I have a startup and I'll use the GPU to serve VLMs (3B-7B) and for some training runs. What are the options in Canada, and are there any options apart from buying in Canada? Is it possible to buy from China, ship it to Canada, and finally take it to India? I am okay with used cards as well. Also, under 2000$, what would be the better choice: 2x 3090 or 1 new 5090/4090? | 2025-09-02T00:37:34 | https://www.reddit.com/r/LocalLLaMA/comments/1n65knk/best_gpu_under_2000/ | Signal-Run7450 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n65knk | false | null | t3_1n65knk | /r/LocalLLaMA/comments/1n65knk/best_gpu_under_2000/ | false | false | self | 0 | null
Multiple GPUs for whisper? | 2 | Can I use multiple GPUs (2*5050ti 16gbram) to train and fine tune whisper large models locally ?
Also for meta NLLB open source AI ?
Thank you 👍🏻 | 2025-09-01T23:38:24 | https://www.reddit.com/r/LocalLLaMA/comments/1n64bg4/multiple_gpus_for_whisper/ | boklos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n64bg4 | false | null | t3_1n64bg4 | /r/LocalLLaMA/comments/1n64bg4/multiple_gpus_for_whisper/ | false | false | self | 2 | null |
How is GPT-OSS so much faster than DeepSeek? | 0 | I am running an RTX 3090 and a Ryzen 7 with 64 GB of ram.
DeepSeek R1 14B parameters runs at 63 TP/S.
GPT-OSS 20B parameters runs at 123 TP/S.
OSS is ~43% bigger by total parameter count (20B vs 14B) and runs twice as fast.
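One plausible explanation, assuming the published specs are right that GPT-OSS-20B is a sparse mixture-of-experts model with only ~3.6B parameters active per token while the 14B distill is dense, is that decode speed tracks *active* parameters, not total size. A bandwidth-bound sketch:

```python
# Decode is roughly memory-bandwidth-bound: each generated token reads every
# *active* weight once. A MoE model only activates a subset of its weights.

def tokens_per_sec_ceiling(active_params_b: float, bytes_per_param: float, bw_gbs: float) -> float:
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bw_gbs * 1e9 / bytes_per_token

BW = 936.0  # RTX 3090 memory bandwidth, GB/s

dense_14b = tokens_per_sec_ceiling(14.0, 0.6, BW)  # dense: all 14B read per token (~Q4)
moe_20b = tokens_per_sec_ceiling(3.6, 0.6, BW)     # MoE: only ~3.6B of ~21B active

print(f"dense 14B ceiling: ~{dense_14b:.0f} tok/s, MoE 20B ceiling: ~{moe_20b:.0f} tok/s")
```

These are theoretical ceilings and real kernels land well below them, but the ratio is the point: a MoE model reads far fewer weights per generated token than a dense model of similar total size.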
How? Why? | 2025-09-01T23:17:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n63uo3/how_is_gptoss_so_much_faster_than_deepseek/ | Jack-Donaghys-Hog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n63uo3 | false | null | t3_1n63uo3 | /r/LocalLLaMA/comments/1n63uo3/how_is_gptoss_so_much_faster_than_deepseek/ | false | false | self | 0 | null |
Top small LLM as of September '25 | 63 | So, I've been away for the last couple of months, and suddenly I don't seem to see references to new small models around here. Is there any novelty o the topic of small models since the releases of Qwen 3 and Gemma 3n? Something I could run with 4GB VRAM? thanks | 2025-09-01T23:06:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n63lz8/top_small_llm_as_of_september_25/ | _-inside-_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n63lz8 | false | null | t3_1n63lz8 | /r/LocalLLaMA/comments/1n63lz8/top_small_llm_as_of_september_25/ | false | false | self | 63 | null |
Replay - like Git for App States and Agent Context | 2 | Hey folks,
Announcing updates re: [https://terminals.tech](https://terminals.tech), which I've introduced here before. This round focuses on Replay, which I'm open-sourcing and building out for web agent developers, plus some really cool new concepts like full-stack "intelligent" computers that run right in your browser with no backend, which will be part of this month's 30 days of 30 releases.
We've included a playground for everyone to try - expect bugs since we're using a lot of cutting edge MDN and Web API features. Latest Chrome version always recommended - currently all compatibility generally possible is included across devices and browsers. Mobile optimization pending so YMMV.
More importantly though, Day 3 Updates!
\----
# Replay (alpha) open sourced (MIT):
*Core:* [*https://www.npmjs.com/package/@terminals-tech/core*](https://www.npmjs.com/package/@terminals-tech/core)
*Graph:* [*https://www.npmjs.com/package/@terminals-tech/graph*](https://www.npmjs.com/package/@terminals-tech/graph)
Replay is a simplified abstraction of the rigorous client-side state management you can see in the browser console of the main terminals site. It allows you to turn your web app into a real-time stateful machine that tracks every event. You can then add it to your agents' contexts and let users "time travel" in their sessions (see gif).
Generates replay.terminals.tech/{id} links with the timeline view. Easily give as context to debug with agents or fix with hot patches.
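The underlying pattern, app state as a pure fold over an append-only event log so that any past state is reproducible by replaying a prefix, can be sketched in a few lines (this illustrates the concept only; it is not the actual @terminals-tech/core API):

```python
# Minimal event-sourced "time travel": state is a pure fold over the event log.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Replay:
    reducer: Callable[[dict, dict], dict]
    events: list = field(default_factory=list)

    def dispatch(self, event: dict) -> None:
        self.events.append(event)          # append-only: nothing is ever mutated

    def state_at(self, step: int) -> dict:
        """Rebuild the app state as of `step` by replaying the first `step` events."""
        state: dict = {}
        for ev in self.events[:step]:
            state = self.reducer(state, ev)
        return state

def todo_reducer(state: dict, ev: dict) -> dict:
    items = list(state.get("items", []))
    if ev["type"] == "add":
        items.append(ev["text"])
    elif ev["type"] == "remove":
        items.remove(ev["text"])
    return {"items": items}

log = Replay(todo_reducer)
log.dispatch({"type": "add", "text": "ship day 3"})
log.dispatch({"type": "add", "text": "fix bug"})
log.dispatch({"type": "remove", "text": "fix bug"})
print(log.state_at(2))  # the app exactly as it was two events in
```

Because the log is the source of truth, a shareable timeline link only has to serialize the events, not every intermediate state.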
Try here [https://terminals.tech/replay](https://terminals.tech/replay)
[A flowchart app that allows the user to rewind the entire app state in a timeline view.](https://i.redd.it/o7kokelrpmmf1.gif)
\----
# /Zero is officially multimodal and full stack
*Click "chat with /zero" to chat with lightweight client*
*Click "open ⌀ computer" to open up webcontainer, installer, package manager, etc*
*Click popout icon on Virtual Computer to access full computer view (tabbed)*
We added Google as provider and integrated **Imagen 004, Gemini 2.5 Flash Image Preview (Nano Banana), Veo 003, Lyria 2**. Currently only image gen is supported but soon the agent will be able to generate all modalities.
We compress media on the fly with ffmpeg wasm, then use Supabase edge functions as an additional persistence layer if you have an account. You can also save and download full-size media locally.
[Nano banana \/zero making cute ducky pics](https://preview.redd.it/vqjsu7p0tmmf1.png?width=2560&format=png&auto=webp&s=a193d4f6cba5b1cc720c243ea1bc05603172146c)
\----
Looking forward to sharing and teaching the community about a lot of really interesting things that will shape the landscape of web apps for the next 3-5 years.
HINTS: terminals zk, terminals platform, terminals sdk, terminals worlds, arduino link
Also sketching out the ideas and concepts for first WebXR powered IRL/URL warehouse club that bridges digital and physical experiences, which is our real goal longer term looking ahead the next 3-5 years. | 2025-09-01T22:49:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n638gv/replay_like_git_for_app_states_and_agent_context/ | brownman19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n638gv | false | null | t3_1n638gv | /r/LocalLLaMA/comments/1n638gv/replay_like_git_for_app_states_and_agent_context/ | false | false | 2 | null | |
HRM - Training from scratch - Day 2 - model successfully overfitted to tiny dataset | 11 | Hi,
So far I'm enjoying the process as it unfolds. I decided to take a step back and check whether the architecture can even learn language at all.
I started with a character tokenizer and tested whether the model could overfit a tiny dataset.
Afterwards I tried a 10k-character corpus to see if it could learn to autoregressively generate characters the way basic GPT-like transformers can. It failed miserably.
It only started working once I added whole words and sentences to the character tokenizer; after that it responded well and got every prompt pair correct.
So it works better as the token vocab grows and contains fewer subword fragments. That led me back to the GPT-2 tokenizer, with which it struggled a lot.
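For readers following along, the scheme described above, starting character-level and then promoting whole words/sentences to single vocab entries with greedy longest-match encoding, looks roughly like this (a simplified sketch, not the exact tokenizer from the run):

```python
# Character tokenizer that can "promote" whole words/phrases to single tokens,
# encoding with greedy longest-match (longer entries win over single chars).

class CharTokenizer:
    def __init__(self, corpus: str):
        self.vocab = sorted(set(corpus))          # start character-level

    def add_entries(self, entries):
        """Promote words/sentences to single vocab entries (re-sorts ids)."""
        for e in entries:
            if e not in self.vocab:
                self.vocab.append(e)
        self.vocab.sort(key=len, reverse=True)    # longest-match first

    def encode(self, text: str) -> list[int]:
        ids, i = [], 0
        while i < len(text):
            for tok_id, tok in enumerate(self.vocab):
                if text.startswith(tok, i):
                    ids.append(tok_id)
                    i += len(tok)
                    break
            else:
                raise ValueError(f"unknown char {text[i]!r}")
        return ids

    def decode(self, ids: list[int]) -> str:
        return "".join(self.vocab[i] for i in ids)

tok = CharTokenizer("hello world")
tok.add_entries(["hello", "world"])
assert tok.decode(tok.encode("hello world")) == "hello world"
print(len(tok.encode("hello world")))  # 3 tokens: "hello", " ", "world"
```

The effect matches the observation in the post: the fewer subword fragments a sequence splits into, the shorter and easier the learning target becomes.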
I then decided to test two variants of the HRM config, \`deeper\` and \`wider\`. For the deeper variant I increased the transformer layers from 4 to 8 with only one H loop and one L loop; the wider variant stayed at 4 layers with 3 H loops and 3 L loops.
(H and L refer to the high-level and low-level recurrence loops.)
So I was able to successfully get it to overfit.
And thank you for reading!
Below are the results of its training runs:
**deeper model**
PROMPT:
<user> hello</s>
<reasoning> intent:greet; tone:brief,polite; plan:acknowledge </s>
<assistant>
OUTPUT:
Hey! »
PROMPT:
<user> what is 2 + 3?</s>
<reasoning> intent:arithmetic; eq:2+3=5; verify:integer\_add </s>
<assistant>
OUTPUT:
5. »
PROMPT:
<user> translate "good night" to french</s>
<reasoning> intent:translate; src:en; tgt:fr; rule:direct\_phrase </s>
<assistant>
OUTPUT:
« Bonne nuit. »
PROMPT:
<user> tell me a short joke</s>
<reasoning> intent:joke; tone:light; length:short </s>
<assistant>
OUTPUT:
Why did the bicycle fall over? It was two-tired.
PROMPT:
<user> hi</s>
<reasoning> intent:greet; tone:casual; plan:respond </s>
<assistant>
OUTPUT:
Hey! »
PROMPT:
<user> hello there</s>
<reasoning> intent:greet; tone:friendly; plan:mirror </s>
<assistant>
OUTPUT:
Hey!!
PROMPT:
<user> what is 3 + 2?</s>
<reasoning> intent:arithmetic; eq:3+2=5; verify:add </s>
<assistant>
OUTPUT:
5. »
PROMPT:
<user> calculate 1 + 4</s>
<reasoning> intent:arithmetic; eq:1+4=5; verify:sum </s>
<assistant>
OUTPUT:
5. »
**wider model**
PROMPT:
<user> hello</s>
<reasoning> intent:greet; tone:brief,polite; plan:acknowledge </s>
<assistant>
OUTPUT:
Hello! »
PROMPT:
<user> what is 2 + 3?</s>
<reasoning> intent:arithmetic; eq:2+3=5; verify:integer\_add </s>
<assistant>
OUTPUT:
5. »
PROMPT:
<user> translate "good night" to french</s>
<reasoning> intent:translate; src:en; tgt:fr; rule:direct\_phrase </s>
<assistant>
OUTPUT:
« Bonne nuit. »
PROMPT:
<user> tell me a short joke</s>
<reasoning> intent:joke; tone:light; length:short </s>
<assistant>
OUTPUT:
Why did the bicycle fall over? It was two-tired.
PROMPT:
<user> hi</s>
<reasoning> intent:greet; tone:casual; plan:respond </s>
<assistant>
OUTPUT:
Hello! »
PROMPT:
<user> hello there</s>
<reasoning> intent:greet; tone:friendly; plan:mirror </s>
<assistant>
OUTPUT:
Hello there!
PROMPT:
<user> what is 3 + 2?</s>
<reasoning> intent:arithmetic; eq:3+2=5; verify:add </s>
<assistant>
OUTPUT:
5. »
PROMPT:
<user> calculate 1 + 4</s>
<reasoning> intent:arithmetic; eq:1+4=5; verify:sum </s>
<assistant>
OUTPUT:
5. »
**And below is the more technical output, for those that aren't tired of my yapping lol.**
deeper model run:
Final CE: 0.0000 | AUX: 0.0100
GOT: Hello!
WANT: Hello!
GOT: 5.
WANT: 5.
\--- Sample 1 ---
PROMPT:
<user> hello</s>
<reasoning> intent:greet; tone:brief,polite; plan:acknowledge </s>
<assistant>
INTENT: greet
ALLOWED FIRST TOKENS: \['Hey', 'Hello'\]
FIRST-STEP TOP-K: \[('5', 0.46979138255119324), ('.', 0.39315593242645264), ('Why', 0.07724795490503311), (' Bon', 0.032733868807554245), ('Hey', 0.009616638533771038), ('<|endoftext|>', 0.005990968085825443), (' did', 0.0042328485287725925), ('!', 0.0029024614486843348)\]
CHOSEN FIRST TOKEN: Hey
OUTPUT:
Hey! »
\--- Sample 2 ---
PROMPT:
<user> what is 2 + 3?</s>
<reasoning> intent:arithmetic; eq:2+3=5; verify:integer\_add </s>
<assistant>
INTENT: arithmetic
ALLOWED FIRST TOKENS: \['5'\]
FIRST-STEP TOP-K: \[('5', 0.7015942335128784), ('Why', 0.15817661583423615), (' Bon', 0.03699721768498421), ('!', 0.03692837432026863), ('Hey', 0.0328972227871418), ('<|endoftext|>', 0.017206650227308273), ('.', 0.007884377613663673), (' did', 0.0033648896496742964)\]
CHOSEN FIRST TOKEN: 5
OUTPUT:
5. »
\--- Sample 3 ---
PROMPT:
<user> translate "good night" to french</s>
<reasoning> intent:translate; src:en; tgt:fr; rule:direct\_phrase </s>
<assistant>
INTENT: translate
ALLOWED FIRST TOKENS: \['«'\]
FIRST-STEP TOP-K: \[('5', 0.7174723744392395), ('Why', 0.12315943092107773), ('.', 0.07549838721752167), (' Bon', 0.03735000267624855), ('Hey', 0.018656115978956223), ('<|endoftext|>', 0.010583776980638504), ('!', 0.008158780634403229), (' did', 0.004186202306300402)\]
CHOSEN FIRST TOKEN: «
OUTPUT:
« Bonne nuit. »
\--- Sample 4 ---
PROMPT:
<user> tell me a short joke</s>
<reasoning> intent:joke; tone:light; length:short </s>
<assistant>
INTENT: joke
ALLOWED FIRST TOKENS: \['Why'\]
FIRST-STEP TOP-K: \[('5', 0.7368988394737244), ('Why', 0.12609894573688507), ('.', 0.05201536789536476), (' Bon', 0.03589411452412605), ('Hey', 0.020157743245363235), ('<|endoftext|>', 0.011015812866389751), ('!', 0.009161355905234814), (' did', 0.003931551240384579)\]
CHOSEN FIRST TOKEN: Why
OUTPUT:
Why did the bicycle fall over? It was two-tired.
\--- Sample 5 ---
PROMPT:
<user> hi</s>
<reasoning> intent:greet; tone:casual; plan:respond </s>
<assistant>
INTENT: greet
ALLOWED FIRST TOKENS: \['Hey', 'Hello'\]
FIRST-STEP TOP-K: \[('5', 0.6678099036216736), ('Why', 0.16081207990646362), ('!', 0.06870520859956741), ('Hey', 0.0441524013876915), (' Bon', 0.030156334862113), ('<|endoftext|>', 0.019773291423916817), (' did', 0.002431080210953951), ('.', 0.001417545136064291)\]
CHOSEN FIRST TOKEN: Hey
OUTPUT:
Hey! »
\--- Sample 6 ---
PROMPT:
<user> hello there</s>
<reasoning> intent:greet; tone:friendly; plan:mirror </s>
<assistant>
INTENT: greet
ALLOWED FIRST TOKENS: \['Hey', 'Hello'\]
FIRST-STEP TOP-K: \[('5', 0.7042155265808105), ('Why', 0.157093808054924), ('!', 0.03952900692820549), ('Hey', 0.03467824310064316), (' Bon', 0.03410692140460014), ('<|endoftext|>', 0.01725984551012516), ('.', 0.005274066235870123), (' did', 0.0030513897072523832)\]
CHOSEN FIRST TOKEN: Hey
OUTPUT:
Hey!!
\--- Sample 7 ---
PROMPT:
<user> what is 3 + 2?</s>
<reasoning> intent:arithmetic; eq:3+2=5; verify:add </s>
<assistant>
INTENT: arithmetic
ALLOWED FIRST TOKENS: \['5'\]
FIRST-STEP TOP-K: \[('5', 0.6966545581817627), ('Why', 0.15768173336982727), ('!', 0.047055210918188095), ('Hey', 0.03807936608791351), (' Bon', 0.03197040408849716), ('<|endoftext|>', 0.018041569739580154), ('.', 0.003056142246350646), (' did', 0.0027533688116818666)\]
CHOSEN FIRST TOKEN: 5
OUTPUT:
5. »
\--- Sample 8 ---
PROMPT:
<user> calculate 1 + 4</s>
<reasoning> intent:arithmetic; eq:1+4=5; verify:sum </s>
<assistant>
INTENT: arithmetic
ALLOWED FIRST TOKENS: \['5'\]
FIRST-STEP TOP-K: \[('5', 0.7025521397590637), ('Why', 0.15613870322704315), ('!', 0.04393727704882622), ('Hey', 0.03735767677426338), (' Bon', 0.03171215206384659), ('<|endoftext|>', 0.017682280391454697), ('.', 0.0032090034801512957), (' did', 0.002745213219895959)\]
CHOSEN FIRST TOKEN: 5
OUTPUT:
5. »
wider model run:
Final CE: 0.0000 | AUX: 0.0150
\--- Sample 1 ---
PROMPT:
<user> hello</s>
<reasoning> intent:greet; tone:brief,polite; plan:acknowledge </s>
<assistant>
INTENT: greet
ALLOWED FIRST TOKENS: \['Hey', 'Hello'\]
FIRST-STEP TOP-K: \[('.', 0.9852362871170044), ('«', 0.012538655661046505), (' Bon', 0.0013400508323684335), ('Why', 0.00027935649268329144), ('<|endoftext|>', 0.00012366671580821276), ('Hello', 0.00010915892198681831), ('!', 7.980169175425544e-05), ('5', 7.384794298559427e-05)\]
CHOSEN FIRST TOKEN: Hello
OUTPUT:
Hello! »
\--- Sample 2 ---
PROMPT:
<user> what is 2 + 3?</s>
<reasoning> intent:arithmetic; eq:2+3=5; verify:integer\_add </s>
<assistant>
INTENT: arithmetic
ALLOWED FIRST TOKENS: \['5'\]
FIRST-STEP TOP-K: \[('.', 0.9861264824867249), ('«', 0.011742380447685719), (' Bon', 0.0012781355762854218), ('Why', 0.00026998057728633285), ('<|endoftext|>', 0.00011890486348420382), ('Hello', 0.00010622163244988769), ('!', 7.62480340199545e-05), ('5', 7.055179594317451e-05)\]
CHOSEN FIRST TOKEN: 5
OUTPUT:
5. »
\--- Sample 3 ---
PROMPT:
<user> translate "good night" to french</s>
<reasoning> intent:translate; src:en; tgt:fr; rule:direct\_phrase </s>
<assistant>
INTENT: translate
ALLOWED FIRST TOKENS: \['«'\]
FIRST-STEP TOP-K: \[('.', 0.9849263429641724), ('«', 0.01282725390046835), (' Bon', 0.0013504876988008618), ('Why', 0.00028244793065823615), ('<|endoftext|>', 0.00012547856022138149), ('Hello', 0.0001101160523830913), ('!', 8.133111987262964e-05), ('5', 7.512614683946595e-05)\]
CHOSEN FIRST TOKEN: «
OUTPUT:
« Bonne nuit. »
\--- Sample 4 ---
PROMPT:
<user> tell me a short joke</s>
<reasoning> intent:joke; tone:light; length:short </s>
<assistant>
INTENT: joke
ALLOWED FIRST TOKENS: \['Why'\]
FIRST-STEP TOP-K: \[('.', 0.9850696921348572), ('«', 0.012696742080152035), (' Bon', 0.0013424678472802043), ('Why', 0.000281412125332281), ('<|endoftext|>', 0.00012461119331419468), ('Hello', 0.00010973347525577992), ('!', 8.056389924604446e-05), ('5', 7.462135545210913e-05)\]
CHOSEN FIRST TOKEN: Why
OUTPUT:
Why did the bicycle fall over? It was two-tired.
\--- Sample 5 ---
PROMPT:
<user> hi</s>
<reasoning> intent:greet; tone:casual; plan:respond </s>
<assistant>
INTENT: greet
ALLOWED FIRST TOKENS: \['Hey', 'Hello'\]
FIRST-STEP TOP-K: \[('.', 0.9857224225997925), ('«', 0.01210754830390215), (' Bon', 0.0013038457836955786), ('Why', 0.0002722761710174382), ('<|endoftext|>', 0.00012143997446401045), ('Hello', 0.00010728350025601685), ('!', 7.856674346840009e-05), ('5', 7.194236968643963e-05)\]
CHOSEN FIRST TOKEN: Hello
OUTPUT:
Hello! »
\--- Sample 6 ---
PROMPT:
<user> hello there</s>
<reasoning> intent:greet; tone:friendly; plan:mirror </s>
<assistant>
INTENT: greet
ALLOWED FIRST TOKENS: \['Hey', 'Hello'\]
FIRST-STEP TOP-K: \[('.', 0.9888366460800171), ('«', 0.00931193120777607), (' Bon', 0.001104532741010189), ('Why', 0.00023444643011316657), ('<|endoftext|>', 0.00010423409548820928), ('Hello', 9.576183947501704e-05), ('!', 6.609725096495822e-05), (' there', 6.18926715105772e-05)\]
CHOSEN FIRST TOKEN: Hello
OUTPUT:
Hello there!
\--- Sample 7 ---
PROMPT:
<user> what is 3 + 2?</s>
<reasoning> intent:arithmetic; eq:3+2=5; verify:add </s>
<assistant>
INTENT: arithmetic
ALLOWED FIRST TOKENS: \['5'\]
FIRST-STEP TOP-K: \[('.', 0.9862282276153564), ('«', 0.011650857515633106), (' Bon', 0.001271733082830906), ('Why', 0.00026877064374275506), ('<|endoftext|>', 0.00011834150063805282), ('Hello', 0.00010586577991489321), ('!', 7.58390233386308e-05), ('5', 7.01595054124482e-05)\]
CHOSEN FIRST TOKEN: 5
OUTPUT:
5. »
\--- Sample 8 ---
PROMPT:
<user> calculate 1 + 4</s>
<reasoning> intent:arithmetic; eq:1+4=5; verify:sum </s>
<assistant>
INTENT: arithmetic
ALLOWED FIRST TOKENS: ['5']
FIRST-STEP TOP-K: [('.', 0.9865846633911133), ('«', 0.011330759152770042), (' Bon', 0.001249230350367725), ('Why', 0.0002638636215124279), ('<|endoftext|>', 0.0001165428984677419), ('Hello', 0.00010449309047544375), ('!', 7.46748482924886e-05), (' there', 6.88438376528211e-05)]
CHOSEN FIRST TOKEN: 5
OUTPUT:
5. »
| 2025-09-01T22:49:34 | https://www.reddit.com/r/LocalLLaMA/comments/1n6387y/hrm_training_from_scratch_day_2_model/ | Creative-Ad-2112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n6387y | false | null | t3_1n6387y | /r/LocalLLaMA/comments/1n6387y/hrm_training_from_scratch_day_2_model/ | false | false | self | 11 | null |
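The decoding discipline shown in these samples can be sketched in a few lines: take the model's first-step distribution, discard everything outside the router's allow-list, renormalize, and pick from what remains. A minimal sketch; the function name is mine, and the top-k values are copied from Sample 7 above:

```python
def constrained_first_token(top_k, allowed):
    """Pick the highest-probability token from the allow-list.

    top_k:   list of (token, prob) pairs from the model's first step
    allowed: set of tokens the router permits for this intent
    """
    candidates = [(tok, p) for tok, p in top_k if tok in allowed]
    if not candidates:
        return None  # fall back to unconstrained sampling
    total = sum(p for _, p in candidates)
    # renormalize within the allow-list so the kept probs sum to 1
    dist = [(tok, p / total) for tok, p in candidates]
    return max(dist, key=lambda x: x[1])[0]

# First-step top-k from Sample 7 ('.' dominates, but it is not allowed)
top_k = [('.', 0.9862), ('«', 0.01165), (' Bon', 0.00127),
         ('Why', 0.000269), ('<|endoftext|>', 0.000118),
         ('Hello', 0.000106), ('!', 7.58e-05), ('5', 7.02e-05)]
print(constrained_first_token(top_k, allowed={'5'}))  # → 5
```

Note how this rescues the model even when the allowed token has tiny raw probability: '.' dominates the distribution at ~0.986, but it is simply masked out.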
Anyone know any good open source LLMs for NER analysis? | 2 | Looking for something nice and small I can run on llama.cpp. Thanks! | 2025-09-01T22:31:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n62tcc/anyone_know_any_good_open_source_llms_for_ner/ | richardanaya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n62tcc | false | null | t3_1n62tcc | /r/LocalLLaMA/comments/1n62tcc/anyone_know_any_good_open_source_llms_for_ner/ | false | false | self | 2 | null |
Recommendations for a Local AI Software Engineering Setup | 3 | To give some context, I regularly write software. I have some pet research and software development projects that I'd like to work on in my spare time. If possible I'd like to leverage AI, locally.
I'm considering upgrading to a new macbook pro m4 max chip with 128gb of ram. I believe that the specs would allow me to run some of the larger, frontier models (e.g., gpt-oss:120b or LLama 4 scout, deepseek-r1). I'm wondering if anyone has recently made a similar hardware upgrade, and would recommend it as a local "vibe coding" setup? Was it worth the cost? Is it best to hold-off until more powerful hardware, open-source LLMs and tooling are available?
I've recently been using Claude Code and I've found it more helpful than not. I'm looking for something that can get as close as possible to my experience with that, but I'd like for it to operate completely locally.
The things that I like about claude code are how easy it makes it to use agents, hooks, commands, etc. I also like how clean the user interface is. I can tell that a lot of thought has gone into deciding what information to show to users, and I like the balance that the designers have decided upon.
That being said, I'm a bit reluctant to use claude code for certain things because of IP and privacy concerns. I also find the rate limits frustrating, and the price seems a bit high if something similar can be hosted locally. For those reasons, I'm interested in developing a locally hosted solution.
Right now, I'm thinking about using opencode and maybe one of the frontier models as my bread and butter. If the frontier models are too taxing on the system, maybe I could use a lighter model that's specifically designed for software development. I haven't done much research, but it seems like devstral might be a good option at the present time?
I've tried opencode a bit, and it doesn't seem to have feature parity with claude code, but I think it has the right foundation. I'm willing to invest in it, and I'd even be willing to contribute to the project if the developers are collaborative. That being said, I am open to using something else if a better, free and open-source option is out there. I've also heard of Aider, but the user interface seems a bit clunky in comparison to claude code and opencode.
I haven't done a deep dive into the agentic capabilities of opencode or aider. I'd be interested to hear other people's opinions about how they compare to claude code, their experiences with those tools, and what combinations they thought worked best for them.
Some general, and yet, related questions for anyone:
\- Do you have experience with a completely local and open source software engineering setup?
\- Do you have recommendations about combinations of terminal interfaces and models that worked best for you?
\- Do you find yourself regularly using such tools for software engineering tasks? Or, is it something that you put to the side?
\- Do you think it's worth splurging on the hardware mentioned above for the intended purposes?
\- How would you strategize your time and money for the changes that you anticipate will occur in the future? | 2025-09-01T22:01:19 | https://www.reddit.com/r/LocalLLaMA/comments/1n624a2/recommendations_for_a_local_ai_software/ | LookingRadishing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n624a2 | false | null | t3_1n624a2 | /r/LocalLLaMA/comments/1n624a2/recommendations_for_a_local_ai_software/ | false | false | self | 3 | null |
building a private LLM for businesses | 0 | I’m considering building a private LLM for businesses to host their internal data using Ollama + Open WebUI running on a cloud VPS. My stack also includes vector search (like Qdrant) and document syncing from OneDrive.
There are millions of SMEs that don't have internal AI tools, and this seems like a great way to introduce it to them.
1. Do you think there is demand for company-specific internal LLM/GPT-style chatbots?
2. What risks and or downsides do you see by providing such a service?
3. Am I missing something very obvious?
Thank you in advance | 2025-09-01T21:55:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n61yux/building_a_private_llm_for_businesses/ | divad9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n61yux | false | null | t3_1n61yux | /r/LocalLLaMA/comments/1n61yux/building_a_private_llm_for_businesses/ | false | false | self | 0 | null |
Optimal settings for running gpt-oss-120b on 2x 3090s and 128gb system ram | 4 | I made a post this morning about finally getting around to trying out gpt-oss-120b and I was pleasantly surprised. That being said, I would like to release my settings that give me acceptable performance on a resource constrained system such as mine. Obviously ***your mileage may vary*** but I think this is a good starting point for anyone with a machine similar to mine looking to run the full size gpt-oss model at home with acceptable speed!
Here are my system specs:
|CPU|Ryzen 9 5950X 16 Core 32 Threads|
|:-|:-|
|RAM|G.Skill Ripjaws DDR4 @ 3600mhz 128GB Total|
|GPU|1x RTX 3090 TI + 1x RTX 3090|
|MOBO|Asus ROG STRIX X570-E WIFI II|
|PSU|Thermaltake Toughpower GF1 1000W 80+ Gold|
And now for my settings. I'm currently using the latest version of LM Studio and using the official lmstudio-community distributed gguf file.
|Parameter|Value|Note|
|:-|:-|:-|
|Context Length|131072|I'm sure you could gain some t/s by lowering this, but I like having the headroom.|
|GPU Offload|28/36|Minimal noticeable difference with lowering this to 27. I multitask a lot so I've been loading it with 27 to free up some ram when I have a lot of other things going on|
|CPU Thread Pool Size|12|This is a weird one. Higher doesn't seem to always be better for some reason but too low and it hurts performance. I was getting worse performance with 14+ and anything below 10 was pretty bad. I found the sweet spot to be 12 at least for the R9 5950X. Experiment with this value depending on your CPU.|
|Evaluation Batch Size|512|This is another case similar to the aforementioned one. I tried setting it to 1024 and somehow got worse performance. I was doing increments of 128 starting at 128 and stopping at 2048 and found 512 to be the sweet spot. Everything after that got worse for me.|
|RoPE Frequency Base|Auto|N/A|
|RoPE Frequency Scale|Auto|N/A|
|Offload KV Cache to GPU Memory|True|Originally I had this disabled because in the past I've had to do this in order to run models like Llama 3.3 70b with a full 128k context on my system, but for some reason gpt-oss's context doesn't have nearly as large of a memory footprint as other models. (Not an ML expert, but I'm guessing it has something to do with the ridiculously small hidden size.) On my rig, performance is still very usable (about a 4-5 t/s difference) with the KV cache offloaded to CPU, but it's not recommended unless absolutely necessary.|
|Keep Model in Memory|True|Enabled by default idk|
|Try mmap()|True|N/A|
|Seed|Default/Random|N/A|
|Number of Experts|4|Nothing to do with performance in terms of speed but I've noticed a few instances where setting this to anything other than 4 seems to degrade the output quality.|
|Force Model Expert Weights onto CPU|True|N/A|
|Flash Attention|True|N/A|
|K Cache Quantization Type|Disabled|Haven't messed with these since it launched and barely worked to begin with but I would imagine this setting would improve generation speed as well|
|V Cache Quantization Type|Disabled|Haven't messed with these since it launched and barely worked to begin with but I would imagine this setting would improve generation speed as well|
# In Summary,
My configuration is heavily geared towards as few compromises as possible while maintaining a usable speed. I get between 8-15 t/s with the settings I provided. If you're okay with possible slight quality loss or smaller context, you can probably squeeze a little more speed out of it if you change the context to something smaller like 65k or even 32k and mess with K and V cache quantization. If you're going to go that route, I would start with Q8 and I wouldn't go lower than Q4. Obviously faster system ram, a better cpu, and more pcie bandwidth will also make a big difference as well. Have fun with gpt-oss and I hope this helped some of you! Feel free to drop suggestions or ask questions below of course. | 2025-09-01T21:40:51 | https://www.reddit.com/r/LocalLLaMA/comments/1n61mm7/optimal_settings_for_running_gptoss120b_on_2x/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n61mm7 | false | null | t3_1n61mm7 | /r/LocalLLaMA/comments/1n61mm7/optimal_settings_for_running_gptoss120b_on_2x/ | false | false | self | 4 | null |
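For anyone wanting to try roughly equivalent settings under llama.cpp's llama-server instead of LM Studio, the table above maps onto CLI flags. This is a hedged sketch, not a verified command: flag spellings change between llama.cpp releases (`--cpu-moe`, the rough analogue of "Force Model Expert Weights onto CPU", only exists in recent builds), and the model filename is a placeholder. Check `llama-server --help` against your build:

```python
# Rough LM Studio -> llama-server flag mapping for the settings above.
# Verify every flag against `llama-server --help` for your build.
settings = {
    "--ctx-size": 131072,   # Context Length
    "--n-gpu-layers": 28,   # GPU Offload (28/36 layers)
    "--threads": 12,        # CPU Thread Pool Size
    "--batch-size": 512,    # Evaluation Batch Size
}
args = ["llama-server", "-m", "gpt-oss-120b.gguf",  # placeholder filename
        "--flash-attn",                             # Flash Attention: True
        "--cpu-moe"]                                # expert weights on CPU
for flag, value in settings.items():
    args += [flag, str(value)]
print(" ".join(args))
```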
MichelRosselli/GLM-4.5-Air:Q5_K_M in RTX Pro 6000 is thinking way too much | 0 | I gave it "if there is such a thing as a "roo code rules", create the rules based on the existing [CLAUDE.md](http://CLAUDE.md) rules", and it has been thinking and typing things for like 10 minutes. Its "working" in that its thoughts make sense, but WTF? HEre is just one piece of the on-going thinking over a task that should take 5 seconds. Im using roo in VS code. Is there some setting? Or is there something wrong with this model? Dont know how to use "higgingface" in ollama, so using this random one I found in ollama. Want to test the best qwen coder and best glm 4.5 air that fits in 96gb vram.
<think></think>
Roo wants to read this file:
<think></think>Now let me check the existing .cursorrules file to understand what rules might already be in place:
Roo wants to read this file:
<think></think>Based on my analysis of the existing CLAUDE.md and .cursorrules files, I can see that both contain development guidelines for the Mountain Motor Exchange (MME) project. The task is to create "roo code rules" based on these existing documents.
Let me break this down into actionable steps:
Analyze and compare existing CLAUDE.md and .cursorrules files
Identify key differences, overlaps, and unique elements from both documents
Create a consolidated "roo code rules" document that combines the best practices
Structure the rules in a clear, organized format suitable for Roo AI assistant usage
Ensure all critical technical guidelines are preserved
Add any additional Roo-specific guidance if needed
<think></think>
Analyze and compare existing CLAUDE.md and .cursorrules files
Identify key differences, overlaps, and unique elements from both documents
Create a consolidated "roo code rules" document that combines the best practices
Structure the rules in a clear, organized format suitable for Roo AI assistant usage
Ensure all critical technical guidelines are preserved
Add any additional Roo-specific guidance if needed | 2025-09-01T21:35:56 | https://www.reddit.com/r/LocalLLaMA/comments/1n61iaj/michelrosselliglm45airq5_k_m_in_rtx_pro_6000_is/ | devshore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n61iaj | false | null | t3_1n61iaj | /r/LocalLLaMA/comments/1n61iaj/michelrosselliglm45airq5_k_m_in_rtx_pro_6000_is/ | false | false | self | 0 | null |
I built a free Structured Prompt Builder (JSON/YAML/MD export + few-shot + core controls) — feedback welcome | 7 | Hey folks! I kept rewriting big “do-everything” prompts and losing track of constraints, steps, and few-shot examples. So I built a small, browser-based **Structured Prompt Builder**.
**Live demo:** [https://structured-prompt-builder.vercel.app/](https://structured-prompt-builder.vercel.app/)
**What it does**
* Build prompts by sections: **Role, Task, Audience, Style, Tone, Constraints, Steps, Named Inputs, Few-shot**
* **Live preview** in **Markdown / JSON / YAML**
* **Core controls** saved alongside the prompt: temperature, top-p, max tokens, presence/frequency penalties
* **Import/Export**: JSON ↔️ YAML ↔️ Markdown (one-click copy & downloads)
* Reorder constraints/steps/examples with up/down buttons
* Optional **JSON-only mode** with inline schema validator
**Why I built it**
* I wanted fewer “Franken-prompts” and more **repeatable structure** I can share with teammates.
* It’s fast, simple, and **runs entirely in your browser** (no login).
**Who it’s for**
* Prompt engineers & power users who want clean, reusable templates
* PMs, devs, writers—anyone who needs a reliable prompt scaffold (PRDs, code reviews, marketing briefs, tutorials, etc.)
**How to use (30 seconds)**
1. Fill in Role + Task.
2. Add Constraints, Steps, Inputs, Few-shot.
3. Toggle JSON-only (optional), tweak core controls, then copy/export.
**Would love feedback on:**
* Any missing block you want (e.g., evaluation rubric, safety guardrails)?
* Default templates you’d use daily?
* Little quality-of-life tweaks that would save time?
Built with a tiny React UI + Tailwind and deployed on Vercel. Happy to iterate based on your comments! | 2025-09-01T21:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1n61btk/i_built_a_free_structured_prompt_builder/ | DarkEngine774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n61btk | false | null | t3_1n61btk | /r/LocalLLaMA/comments/1n61btk/i_built_a_free_structured_prompt_builder/ | false | false | self | 7 | null |
Local LLM for School | 24 | Hi everyone,
I’m a teacher in a UK secondary school and a (very) amateur AI hobbyist. I’ve been thinking about ways to implement a local AI in our school to help allay concerns around using student data with cloud AI tools.
Here in the UK we’re subject to GDPR, and a lot of education decision-makers are (understandably) very risk-averse when it comes to privacy.
My initial idea is a safe, local AI that staff could use for general purposes, think lesson resource creation, drafting emails, etc. But longer-term, I was wondering if it might be possible to hook a local AI up to a read-only copy of our student database (SQL) so teachers could query things like attendance or behaviour data in natural language.
Before I embarrass myself in front of our IT staff, I thought I’d get a sanity check here first and embarrass myself with you lot instead.
Some extra context:
- I’ve managed to set up a local LLM on my home PC already.
- At school I’d have help from IT if it’s at all feasible.
- I know there’d be substantial upfront investment (GPUs etc.), but I think I could secure that.
- From what I’ve read, this would need orchestration (e.g. n8n) and a front end (e.g. OpenWebUI). Maybe JSON schemas or something similar would also be required?
So… what am I missing? Am I crazy?
Any pointers to likely roadblocks, or people who’ve done something similar, would be massively appreciated.
TIA | 2025-09-01T20:55:50 | https://www.reddit.com/r/LocalLLaMA/comments/1n60hqx/local_llm_for_school/ | OkTill6991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n60hqx | false | null | t3_1n60hqx | /r/LocalLLaMA/comments/1n60hqx/local_llm_for_school/ | false | false | self | 24 | null |
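On the natural-language-to-SQL idea: one concrete safeguard worth showing your IT staff is a hard gate that rejects anything the model generates except a single SELECT, on top of the read-only replica itself. A stdlib sketch; the attendance table and columns are invented for illustration:

```python
import sqlite3

def run_readonly(conn, sql):
    """Execute a model-generated query only if it is a single SELECT."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                      # no statement stacking
        raise ValueError("multiple statements rejected")
    if not stripped.lower().startswith("select"):
        raise ValueError("only SELECT is allowed")
    return conn.execute(stripped).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendance (student TEXT, days_absent INTEGER)")
conn.execute("INSERT INTO attendance VALUES ('A. Pupil', 3)")
print(run_readonly(conn, "SELECT student FROM attendance WHERE days_absent > 2"))
```

In production you'd also open the real copy with a read-only connection (sqlite supports `file:...?mode=ro` URIs; most databases have an equivalent role), so the guard is defence in depth rather than the only barrier.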
Anyone else annoyed by OpenRouter’s statelessness? | 0 | I’ve been playing around with OpenRouter for some personal ML projects, and while it’s cool to have access to different models through one API, the stateless setup is honestly wearing me down. Every single request means resending the entire conversation history. Not only does that eat up tokens fast, it also adds noticeable latency, and I constantly feel like I’m gluing together “fake memory” just to keep things working.
I actually started looking around for alternatives and stumbled on Backboard.io. It’s technically “waitlist-only” right now, but I got access really quickly. The big difference is that it’s stateful, context carries over automatically between calls, so I don’t have to manage memory myself. Early days for me, but it already feels way less hacky.
Curious if anyone else has hit this wall with OpenRouter and what your workaround has been? | 2025-09-01T20:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/1n60a72/anyone_else_annoyed_by_openrouters_statelessness/ | SadCalligrapher4407 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n60a72 | false | null | t3_1n60a72 | /r/LocalLLaMA/comments/1n60a72/anyone_else_annoyed_by_openrouters_statelessness/ | false | false | self | 0 | null |
Introducing TREE: A Lightweight Mixture-of-Experts (MoE) Architecture for Efficient LLMs | 0 | Most large LLMs (13B–20B params) are powerful but inefficient — they activate all parameters for every query, which means high compute, high latency, and high power use.
I’ve been working on an architecture called TREE (Task Routing of Efficient Experts) that tries to make this more practical:
Router (DistilBERT) → lightweight classifier that decides which expert should handle the query.
Experts (175M–1B LLMs) → smaller fine-tuned models (e.g., code, finance, health).
Hot storage (GPU) / Cold storage (disk) → frequently used experts stay “hot,” others are lazy-loaded.
Synthesizer → merges multiple expert responses into one coherent answer.
Chat memory → maintains consistency in long conversations (sliding window + summarizer).
Why TREE?
Only 5–10% of parameters are active per query.
70–80% lower compute + energy use vs dense 13B–20B models.
Accuracy remains competitive thanks to domain fine-tuning.
Modular → easy to add/remove experts as needed.
TREE is basically an attempt at a Mixture-of-Experts (MoE) system, but designed for consumer-scale hardware + modular deployment (I’m prototyping with FastAPI).
Any ideas...to improve...
https://www.kaggle.com/writeups/rambooraajesh/tree-task-routing-of-efficient-experts#3279250 | 2025-09-01T20:45:05 | https://www.reddit.com/r/LocalLLaMA/comments/1n607y1/introducing_tree_a_lightweight_mixtureofexperts/ | ramboo_raajesh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n607y1 | false | null | t3_1n607y1 | /r/LocalLLaMA/comments/1n607y1/introducing_tree_a_lightweight_mixtureofexperts/ | false | false | self | 0 | null |
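A rough sketch of the routing loop described above, with the DistilBERT router and the expert LLMs replaced by plain-Python stand-ins so the hot/cold cache mechanics are visible; nothing here is from the actual TREE code:

```python
from collections import OrderedDict

class ExpertPool:
    """Tiny hot/cold cache: keep at most `hot_size` experts 'in GPU'."""
    def __init__(self, loaders, hot_size=2):
        self.loaders = loaders            # domain -> function that loads an expert
        self.hot = OrderedDict()          # LRU of currently loaded experts
        self.hot_size = hot_size

    def get(self, domain):
        if domain in self.hot:
            self.hot.move_to_end(domain)  # mark as most recently used
        else:
            if len(self.hot) >= self.hot_size:
                self.hot.popitem(last=False)           # evict coldest expert
            self.hot[domain] = self.loaders[domain]()  # lazy load from "disk"
        return self.hot[domain]

def route(query):
    """Stand-in for the DistilBERT router."""
    q = query.lower()
    if any(w in q for w in ("def ", "bug", "compile")):
        return "code"
    if any(w in q for w in ("loan", "stock", "tax")):
        return "finance"
    return "general"

loaders = {d: (lambda d=d: (lambda q: f"[{d} expert] {q}"))
           for d in ("code", "finance", "general")}
pool = ExpertPool(loaders, hot_size=2)
query = "Why does my loan interest compound monthly?"
print(pool.get(route(query))(query))  # → [finance expert] Why does my loan interest compound monthly?
```

A real synthesizer step would call several experts and merge their outputs; the cache logic stays the same.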
Struggling with OpenRouter sessions, tried something different. | 0 | Been running some experiments with LLaMA models through OpenRouter, and honestly, the stateless setup is kind of brutal. Having to resend everything with each call makes sense from a routing perspective, but as a dev, it creates a ton of overhead. I’ve already hacked together a small memory layer just to keep context, and it still feels clunky.
Out of curiosity, I tried Backboard.io. It says “waitlist-only,” but I got in fast, so maybe they’re onboarding quietly. What stood out is the stateful sessions, it actually remembers context without me having to do all the duct-tape logic. Makes iterating with local models much smoother since I can focus on the interaction rather than rebuilding memory every time.
Has anyone else here looked into alternatives, or are you just sticking with OpenRouter + your own memory patchwork? | 2025-09-01T20:42:17 | https://www.reddit.com/r/LocalLLaMA/comments/1n605be/struggling_with_openrouter_sessions_tried/ | Inevitable_Number276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n605be | false | null | t3_1n605be | /r/LocalLLaMA/comments/1n605be/struggling_with_openrouter_sessions_tried/ | false | false | self | 0 | null |
I pretrained and postrained a LLM with less than $50 budget which outperforms Google BERT large | 342 | Hey folks from LocalLLama sub! I am really thankful for amazing people in this sub for sharing useful things which helped me to learn lots of things about pretraing , post training and evaluation etc for your context I don't have professional ML background!
Today I am super excited to share that I pretrained and post-trained a 150M parameter model from scratch which outperforms Google's BERT large model, and I also built an embedding model which works on par with the jina-embeddings-v2-base model on MTEB benchmarks.
In this article I share how I built this model, along with links to the model weights.
thanks again | 2025-09-01T20:13:23 | https://medium.com/@harishhacker3010/pretraining-a-llm-with-less-than-50-budget-which-outperforms-google-bert-dbe541b7b14b | Altruistic-Tea-5612 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1n5zed0 | false | null | t3_1n5zed0 | /r/LocalLLaMA/comments/1n5zed0/i_pretrained_and_postrained_a_llm_with_less_than/ | false | false | default | 342 | null |
LLM SUGGESTIONS PLEASE - For AMD RADEON PRO 5500M 8GB VRAM - | 0 | Hello everybody,
Can anybody suggest the best LLM for general chat and some coding for my system?
MACBOOK intel i9 8CORE
32GB RAM
AMD RADEON PRO 5500M - 8GB
INTEL UHD 630 - 1.5GB
THANKS. | 2025-09-01T20:13:06 | https://www.reddit.com/r/LocalLLaMA/comments/1n5ze1z/llm_suggestions_please_for_amd_readon_pro_5500m/ | roomygallium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5ze1z | false | null | t3_1n5ze1z | /r/LocalLLaMA/comments/1n5ze1z/llm_suggestions_please_for_amd_readon_pro_5500m/ | false | false | self | 0 | null |
Tried parsing invoices with GPT-4o, Claude Sonnet 3.5 & Invofox API (Python). Here's what I found. | 0 | I wanted to see how easy (or messy) it really is to extract structured data from PDFs with code. So over the last month, I tried a few approaches (using Postman & Python) and thought I would share on what worked, what didn’t and what ended up being worth the effort.
1. DIY Workflow with GPT-4o and Claude 3.5
Both OpenAI’s GPT-4o and Anthropic’s Claude models are surprisingly good at understanding invoice layouts (if you give them the right prompt). But there were a few annoying steps:
* You have to run OCR on every PDF first (I used `pdfplumber`)
* Then, it’s all about prompt engineering. I spent a lot of time tweaking prompts just to keep the JSON consistent. Sometimes fields went missing or labels got weird.
* Both models respond fast for short docs, costs are similar (\~$0.01 per normal invoice using 1-2k tokens) and outputs look clean most of the time.
2. Invofox API (specialized models) tuned for invoices.
* You can upload the PDF straight away. OCR, page splitting, document classification are all handled behind the scenes.
* The schema is extracted automatically from what you expect from an invoice.
* Validation, error handling, even “confidence scores” for output fields are built in.
This is great at automating invoice parsing at scale (bulk files, mixed documents). I also used Postman for this case, along with python code.
complete code: [repo](https://github.com/Anmol-Baranwal/doc-parsing)
full detailed writeup: [here](https://www.invofox.com/en/post/document-parsing-using-gpt-4o-api-vs-claude-sonnet-3-5-api-vs-invofox-api-with-code-samples)
This was mostly a side experiment out of curiosity. If you had to parse documents in a side project, would you rely on GPT/Claude + prompts or go straight for a specialized API? | 2025-09-01T19:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n5yds7/tried_parsing_invoices_with_gpt4o_claude_sonnet/ | anmolbaranwal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5yds7 | false | null | t3_1n5yds7 | /r/LocalLLaMA/comments/1n5yds7/tried_parsing_invoices_with_gpt4o_claude_sonnet/ | false | false | self | 0 | null |
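On the "fields went missing or labels got weird" problem with the GPT-4o/Claude route: one cheap mitigation is to validate every response against the expected schema and only retry the prompt on failure. A stdlib-only sketch; this field list is an example, not Invofox's actual schema:

```python
import json

REQUIRED = {"invoice_number": str, "total": (int, float), "currency": str}

def parse_invoice_json(raw):
    """Validate one model response; return the dict or raise ValueError."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"not valid JSON: {e}") from e
    for field, typ in REQUIRED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], typ):
            raise ValueError(f"wrong type for {field}")
    return data

good = '{"invoice_number": "INV-17", "total": 129.5, "currency": "EUR"}'
print(parse_invoice_json(good)["total"])  # → 129.5
```

Wrapping the API call in a validate-then-retry loop is roughly what the specialized services do for you behind the scenes, minus the confidence scores.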
LLM for SEO? | 0 | I searched but can't find any model for SEO. Maybe you can help me with that? | 2025-09-01T19:32:17 | https://www.reddit.com/r/LocalLLaMA/comments/1n5ycn9/llm_for_seo/ | BillionDollarRabbit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5ycn9 | false | null | t3_1n5ycn9 | /r/LocalLLaMA/comments/1n5ycn9/llm_for_seo/ | false | false | self | 0 | null |
How do you classify intent to the llm if the input is general conversation or needs web search | 2 | I'm trying to add a web search feature to my local AI chatbot, but it just doesn't understand when it can answer from its own memory and when it needs to search the web
Can someone please help me | 2025-09-01T19:19:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n5y0s5/how_do_you_classify_intent_to_the_llm_if_the/ | Haunting_Stomach8967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5y0s5 | false | null | t3_1n5y0s5 | /r/LocalLLaMA/comments/1n5y0s5/how_do_you_classify_intent_to_the_llm_if_the/ | false | false | self | 2 | null |
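The usual pattern here is a cheap classification step before the real call: decide only whether the query needs fresh or external information, then branch. The sketch below fakes the classifier with keywords; in practice you would send a one-word-answer classification prompt to your local model, as the docstring suggests:

```python
def needs_web_search(query):
    """Heuristic stand-in for an LLM intent classifier.

    Real version: ask the model
    'Answer ONLY "search" or "answer": does this question require
     up-to-date or external information? <query>'
    and branch on its one-word reply.
    """
    fresh = ("today", "latest", "current", "price", "news", "weather", "2025")
    return any(word in query.lower() for word in fresh)

def handle(query):
    if needs_web_search(query):
        return f"SEARCH: {query}"   # hand off to the web-search tool
    return f"LLM: {query}"          # answer from the model's own weights

print(handle("what's the weather in Paris today"))
print(handle("explain how a transformer works"))
```

The keyword list is obviously too crude to ship; the point is the two-stage structure, where the classification call is tiny and the expensive generation only happens once the branch is chosen.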
Best gpu setup for under $500 usd | 16 | Hi, I'm looking to run an LLM locally and I wanted to know what would be the best gpu(s) to get with a $500 budget. I want to be able to run models on par with gpt-oss 20b at a usable speed. Thanks!
Quick question: will learning a load of calculus help me when playing around with LLMs at home/fine-tuning models etc? | 2 | Hi, I am doing a few shortish courses on machine learning and a couple of 'AI'-based topics. There is quite a lot of algebra/calculus. I understand it well enough to intuit what is going on; I am just wondering whether I need to sit down and learn a load of the rules of calculus in order to play around with stuff like fine-tuning models, and maybe RAG?
Thanks | 2025-09-01T18:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n5wr5n/quick_question_will_learning_a_load_of_calculus/ | whichkey45 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5wr5n | false | null | t3_1n5wr5n | /r/LocalLLaMA/comments/1n5wr5n/quick_question_will_learning_a_load_of_calculus/ | false | false | self | 2 | null |
Better llama-cli help and user guide | 19 | 2025-09-01T18:18:36 | https://github.com/ggml-org/llama.cpp/discussions/15709 | rm-rf-rm | github.com | 1970-01-01T00:00:00 | 0 | {} | 1n5wcuw | false | null | t3_1n5wcuw | /r/LocalLLaMA/comments/1n5wcuw/better_llamacli_help_and_user_guide/ | false | false | default | 19 | null |
I fine-tuned Llama 3.2 3B for transcript analysis and it outperformed bigger models with ease | 231 | I recently wrote a [small local tool ](https://github.com/bilawalriaz/lazy-notes)to transcribe my local audio notes to text using Whisper/Parakeet.
I wanted to process the raw transcripts locally without needing OpenRouter, so I tried Llama 3.2 3B and got surprisingly decent yet ultimately mediocre results. I decided to see how I could improve this using SFT.
I fine-tuned Llama 3.2 3B to clean and analyze raw dictation transcripts locally, outputting a structured JSON object (title, tags, entities, dates, actions).
* Data: 13 real voice memos → teacher (Kimi K2) for gold JSON → ~40k synthetic transcripts + gold. Keys are canonicalized to stabilize JSON supervision. [Chutes.ai](http://Chutes.ai) was used, giving 5000 reqs/day.
* Training: RTX 4090 24GB, ~4 hours, LoRA (r=128, alpha=128, dropout=0.05), max seq length of 2048 tokens, batch size 16, lr=5e-5, cosine scheduler, Unsloth. Could've done it with less VRAM, but it would've been slower (8 hours on my RTX 2070 Super 8GB).
* Inference: merged to GGUF, quantized Q4\_K\_M using llama.cpp, runs locally via LM Studio.
* Evals (100-sample sanity check, scored by GLM 4.5 FP8): **overall score 5.35 (base 3B)** → **8.55 (fine-tuned)**. Completeness 4.12 → 7.62, factual accuracy 5.24 → 8.57.
* Head-to-head (10 samples): specialized 3B averaged \~8.40 vs Hermes-70B 8.18, Mistral-Small-24B 7.90, Gemma-3-12B 7.76, Qwen3-14B 7.62. Teacher Kimi K2 \~8.82.
* Why it works: task specialization + JSON canonicalization reduce output variance and help the model learn the exact structure and fields.
* Lessons learned: it's important to train on completions only; synthetic datasets are okay for specialised fine-tunes; Llama is surprisingly easy to train.
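For anyone reproducing the canonicalization trick: a minimal sketch. The key list matches the fields named in the post (title, tags, entities, dates, actions); the exact defaulting behavior is my assumption, not necessarily the post's pipeline. It forces every gold JSON target into one fixed shape, so supervision varies only in content, never in structure:

```python
import json

# Fixed key order + defaults for missing fields (illustrative defaults).
CANONICAL_KEYS = [("title", ""), ("tags", []), ("entities", []),
                  ("dates", []), ("actions", [])]

def canonicalize(obj: dict) -> str:
    """Emit JSON with a stable key order and no missing keys."""
    fixed = {k: obj.get(k, default) for k, default in CANONICAL_KEYS}
    return json.dumps(fixed, ensure_ascii=False, separators=(", ", ": "))

# Two differently ordered gold objects serialize identically:
a = canonicalize({"tags": ["memo"], "title": "Solar setup"})
b = canonicalize({"title": "Solar setup", "tags": ["memo"], "extra": 1})
```

Note that unexpected keys (like `extra` above) are dropped rather than passed through, which is one of the choices that reduces output variance.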
Code, dataset pipeline, hyperparams, eval details, and a 4-bit GGUF download are in the post: [https://bilawal.net/post/finetuning-llama32-3b-for-transcripts/](https://bilawal.net/post/finetuning-llama32-3b-for-transcripts/)
Happy to discuss training setup, eval rubric, or deployment details! | 2025-09-01T18:15:43 | https://bilawal.net/post/finetuning-llama32-3b-for-transcripts/ | CartographerFun4221 | bilawal.net | 1970-01-01T00:00:00 | 0 | {} | 1n5w9yy | false | null | t3_1n5w9yy | /r/LocalLLaMA/comments/1n5w9yy/i_finetuned_llama_32_3b_for_transcript_analysis/ | false | false | default | 231 | null |
Pocket Pal Model | 0 | So I am looking for a model like GPT/DeepSeek (as accurate in answers as possible, but not a reasoning model; for example, question: "I want to connect a car battery to a solar panel and get usable power", answer: "do this and connect it like this"). I have tested Hermes 3 and it works, but it's not as good as I would like. So, is there a model like that, uncensored to the point where you can ask literally anything (yes, anything)?
It is on a phone.
If possible, please answer with the creator's name on Hugging Face :) Thanks for any answers!
If further explanation is needed i will happily give it. | 2025-09-01T18:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/1n5w7sw/pocket_pal_model/ | Safe-Curve-1335 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5w7sw | false | null | t3_1n5w7sw | /r/LocalLLaMA/comments/1n5w7sw/pocket_pal_model/ | false | false | self | 0 | null |
thats why base model is greater then the thinking model . its processed 4.1 million tokens mostly from cache but those 41k tokes are so low quality . im still saying we set the bar is too low today ai still one of the most stupid ai im using | 0 | i stopped the cli bcz its start giving me stupid shit after the 10 percent of context window used . | 2025-09-01T18:11:40 | Select_Dream634 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5w613 | false | null | t3_1n5w613 | /r/LocalLLaMA/comments/1n5w613/thats_why_base_model_is_greater_then_the_thinking/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'J1nqTIfSBtkXIbHKBTggOrDGSZMAkrLC-47LGFFS33k', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/scsgt61oelmf1.png?width=108&crop=smart&auto=webp&s=f572df4e56409d834a341ea8cb6a2b8cdceea04f', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/scsgt61oelmf1.png?width=216&crop=smart&auto=webp&s=3e389881eddc5095c1cb4049433f06e4e3682e85', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/scsgt61oelmf1.png?width=320&crop=smart&auto=webp&s=066732307abc56d80de26f0e4a866e06b2e630b3', 'width': 320}, {'height': 341, 'url': 'https://preview.redd.it/scsgt61oelmf1.png?width=640&crop=smart&auto=webp&s=473a9ca6e94e87ae58eeab22222ce35956aaa3ee', 'width': 640}, {'height': 512, 'url': 'https://preview.redd.it/scsgt61oelmf1.png?width=960&crop=smart&auto=webp&s=fdd400a70c6fbddd5b5c3ca1f69cbaa5b436f37a', 'width': 960}, {'height': 576, 'url': 'https://preview.redd.it/scsgt61oelmf1.png?width=1080&crop=smart&auto=webp&s=3a708a60b79e9a7d51d19c602d62cd7ffe3e76e6', 'width': 1080}], 'source': {'height': 595, 'url': 'https://preview.redd.it/scsgt61oelmf1.png?auto=webp&s=60cfb68ba32f0321e2705bc3b914e69af4f0dc42', 'width': 1114}, 'variants': {}}]} | ||
NVlabs/Jet-Nemotron - GitHub | 8 | With only 2B parameters, Jet-Nemotron can handle large conversations at scale; it will fit very well in my hybrid technology stack for vertical integrations
| 2025-09-01T17:34:32 | https://github.com/NVlabs/Jet-Nemotron | Fun-Wolf-2007 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1n5v5qc | false | null | t3_1n5v5qc | /r/LocalLLaMA/comments/1n5v5qc/nvlabsjetnemotron_github/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'K0qHaJviyvBcqm8Ce9c9NdbCAQoDWdz4r7-DGYV1RBM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/K0qHaJviyvBcqm8Ce9c9NdbCAQoDWdz4r7-DGYV1RBM.png?width=108&crop=smart&auto=webp&s=1fa7c5affc142b8ab2f1b572aa649fa03c266f04', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/K0qHaJviyvBcqm8Ce9c9NdbCAQoDWdz4r7-DGYV1RBM.png?width=216&crop=smart&auto=webp&s=04b0f9d4d0751a82dcf147a71e5f1b2adbce4040', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/K0qHaJviyvBcqm8Ce9c9NdbCAQoDWdz4r7-DGYV1RBM.png?width=320&crop=smart&auto=webp&s=9f94024a3424e9778112688a90bf059de847f563', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/K0qHaJviyvBcqm8Ce9c9NdbCAQoDWdz4r7-DGYV1RBM.png?width=640&crop=smart&auto=webp&s=01823d1ae1144fa49fb372b7c69931a4897179e6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/K0qHaJviyvBcqm8Ce9c9NdbCAQoDWdz4r7-DGYV1RBM.png?width=960&crop=smart&auto=webp&s=ac082b7828ea27e3d03a6c4d1a7af92c18a60f57', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/K0qHaJviyvBcqm8Ce9c9NdbCAQoDWdz4r7-DGYV1RBM.png?width=1080&crop=smart&auto=webp&s=7f68cc2d87061437a673df21bab8dce046528b61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/K0qHaJviyvBcqm8Ce9c9NdbCAQoDWdz4r7-DGYV1RBM.png?auto=webp&s=cd0ae8b146d6627d88b0af464ee6d3c8bfe36559', 'width': 1200}, 'variants': {}}]} | |
Vision Language Models topic for master thesis | 0 | Hello, I will be writing a thesis on this topic. I'm looking forward to your suggestions for resources. I'm particularly curious about the articles you recommend. Thank you. | 2025-09-01T17:21:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n5usa9/vision_language_models_topic_for_master_thesis/ | MinimumArtichoke5679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5usa9 | false | null | t3_1n5usa9 | /r/LocalLLaMA/comments/1n5usa9/vision_language_models_topic_for_master_thesis/ | false | false | self | 0 | null |
LangChain vs AutoGen — which one should a beginner focus on? | 2 | Hey guys, I have a question for those working in the AI development field. As a beginner, what would be better to learn and use in the long run: LangChain or AutoGen? I’m planning to build a startup in my country. | 2025-09-01T17:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/1n5up40/langchain_vs_autogen_which_one_should_a_beginner/ | 1Forbess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5up40 | false | null | t3_1n5up40 | /r/LocalLLaMA/comments/1n5up40/langchain_vs_autogen_which_one_should_a_beginner/ | false | false | self | 2 | null |
🌲 Awful Jade (aj): A Rust-Powered CLI for OpenAI Compatible APIs | 0 | 🌲 Awful Jade (aj): A Rust-Powered CLI for OpenAI Compatible APIs
Hey hey,
I’ve created an open-source project called [Awful Jade CLI](https://awful-aj.awfulsec.com/) (aka `aj`) — a cross-platform command-line interface for working with Large Language Models (LLMs). Think of it as a fast, memory-capable REPL that lives in your terminal, with an in-memory vector database for long-term "memory" retrieval (beyond the context window). It includes an embedded SQLite database for session management, dead-simple YAML templates for prompt engineering (pre-prompt and post-prompt injection), conversation forging for guided outputs, and a response-format option for JSON-only Structured Output (tool calling) on LLMs that support it.
Awful Jade CLI also exposes a plethora of useful functions in its public API, usable as a library in your Rust projects. See the [library documentation](https://awful-aj.awfulsec.com/use/library.html). This makes it ideal as the client library for any Rust agent framework you might be cooking up.
There's comprehensive documentation available on:
* [How to install](http://awful-aj.awfulsec.com/install/index.html)
* [Configuration](https://awful-aj.awfulsec.com/config.html)
* [Using templates](https://awful-aj.awfulsec.com/templates/index.html)
The non-interactive command opens up a lot of opportunities for [molding your output](https://awful-aj.awfulsec.com/moldable-outputs.html), especially with the ability to structure outputs using [JSON Response Schemas](https://lmstudio.ai/docs/app/api/structured-output) right in the template.
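As an illustration only, a template of the shape described above might pair a system prompt with a response schema. The field names here are assumptions on my part, not aj's documented schema; check the templates documentation linked above for the real format:

```yaml
# Hypothetical sketch: field names are illustrative, not aj's actual schema.
system_prompt: "You are a terse study buddy. Answer in JSON only."
pre_prompt: "Context for this session:"
post_prompt: "Respond using the schema below."
response_schema:
  type: object
  properties:
    answer: { type: string }
    confidence: { type: number }
  required: [answer, confidence]
```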
I've been using it as a library for [all of my projects](https://awful-aj.awfulsec.com/downstream-projects.html#-currently-known-projects) that require prompt engineering, calling multiple LLM services, or anything that requires executing code using the response from an LLM. If you build with it let me know and I'll rep your project in the documentation.
The code is [heavily documented](https://docs.rs/awful_aj/latest/awful_aj/) and not just written by an AI and trusted to be correct. Please use LLMs for enhancing documentation, but please ALWAYS PROOFREAD and fix language that sounds inhuman. 🦀
✨ What it Does
* Ask mode: `aj ask "question"` → get model responses directly (stores context, trims old tokens, recalls with vector search if you use a session name).
* Interactive sessions: `aj interactive` → REPL with memory (stores context, trims old tokens, recalls with vector search).
* Vector Store: Uses `all-mini-lm-l12-v2` embeddings + HNSW for semantic recall. Your assistant actually remembers past context. 🧠
* Config & Templates: Fully YAML-driven. Swap system prompts, seed conversation messages, or enforce JSON schema outputs.
* Cross-platform: macOS, Linux, Windows.
📚 Docs & Resources
* [GitHub](https://github.com/graves/awful_aj) (Open Source) 🐙
* [Docs.rs](https://docs.rs/awful_aj/latest/awful_aj/) (API Reference) 📖
* [Crates.io](https://crates.io/crates/awful_aj) (Package) 📦
🚧 Why I Built This
I spend most of my computer time in a terminal. GUIs are still almost universally trash. I wanted:
* A fast, simple, composable Rust tool that blessed [my Qwen 3 finetune](https://huggingface.co/dougiefresh/jade_qwen3_4b) with the ability to **\~remember\~**.
* Composable templates for repeatable workflows (e.g., textbook question synthesis, code refactoring, Bhagavad Gita study buddy 😂).
* An in-memory, local, privacy-first vector DB that “just works” — no external services, minimal code deps, no data leaks.
🙏 How You Can Help
* ⭐ Star the repo: [github.com/graves/awful\_aj](http://github.com/graves/awful_aj)
* 🐛 File issues if you hit bugs or edge cases.
* 📝 Contribute templates — the most creative ones become part of the examples.
* 📢 Spread the word in Rust, AI, or open-source communities.
>💡 Awful Jade: bad name, good brain. | 2025-09-01T17:12:26 | https://v.redd.it/fe2dsau45lmf1 | sqli | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5ujuc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fe2dsau45lmf1/DASHPlaylist.mpd?a=1759338762%2CYzJkNWE4N2U2OWY5YmI0ZjI1YjIxMjM4NTc3NzY0MjhhYjJmOWZhMWZmOGI4MTVlYTA3YmJmZmI4NTAxOGM2MA%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/fe2dsau45lmf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/fe2dsau45lmf1/HLSPlaylist.m3u8?a=1759338762%2CMDk4MTA0ZjM5M2RhYWZmNTk0MTc4MTBhZTE0NDEzYTk0MmM2OTNmYmZhNTlhZjBkNjMzZDVmNGJkZTRhMWViZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fe2dsau45lmf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n5ujuc | /r/LocalLLaMA/comments/1n5ujuc/awful_jade_aj_a_rustpowered_cli_for_openai/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cGhxNTFidTQ1bG1mMeQIRofuXYN8yWgOZF0tNiFehU2YLKpWF02mRkCOZfaa', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cGhxNTFidTQ1bG1mMeQIRofuXYN8yWgOZF0tNiFehU2YLKpWF02mRkCOZfaa.png?width=108&crop=smart&format=pjpg&auto=webp&s=3a5cbc00dea8c9826feb7aac6489daf189c7f96d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cGhxNTFidTQ1bG1mMeQIRofuXYN8yWgOZF0tNiFehU2YLKpWF02mRkCOZfaa.png?width=216&crop=smart&format=pjpg&auto=webp&s=6533c36ebed25ef48d26157a0febd7aba369e4f2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cGhxNTFidTQ1bG1mMeQIRofuXYN8yWgOZF0tNiFehU2YLKpWF02mRkCOZfaa.png?width=320&crop=smart&format=pjpg&auto=webp&s=0a84462f1b49e98710b242fb7cea3f87ce68c0c3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cGhxNTFidTQ1bG1mMeQIRofuXYN8yWgOZF0tNiFehU2YLKpWF02mRkCOZfaa.png?width=640&crop=smart&format=pjpg&auto=webp&s=ccb0e97a2a20785ae3780db7e12dfe8dfcf19dca', 'width': 640}, {'height': 540, 'url': 
'https://external-preview.redd.it/cGhxNTFidTQ1bG1mMeQIRofuXYN8yWgOZF0tNiFehU2YLKpWF02mRkCOZfaa.png?width=960&crop=smart&format=pjpg&auto=webp&s=094dc9021b07dadec63c1604f7996f322c987983', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cGhxNTFidTQ1bG1mMeQIRofuXYN8yWgOZF0tNiFehU2YLKpWF02mRkCOZfaa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=af98c8b02914acfdaf86eb84a2fa92df109b6471', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cGhxNTFidTQ1bG1mMeQIRofuXYN8yWgOZF0tNiFehU2YLKpWF02mRkCOZfaa.png?format=pjpg&auto=webp&s=f9ce7d4e205bd1ca35d6c22c2dce71f01e9a15fe', 'width': 1920}, 'variants': {}}]} | |
Llama.cpp - so we're not fully offloading to GPU? | 3 | I wonder what the performance cost of this is, exactly?
I've tried quite a few quants now and if you enable the --verbose flag, you always see the following:
load_tensors: tensor 'token_embd.weight' (q8_0) (and 0 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 49/49 layers to GPU
load_tensors: CUDA1 model buffer size = 15803.55 MiB
load_tensors: CUDA2 model buffer size = 14854.57 MiB
load_tensors: CPU_Mapped model buffer size = 315.30 MiB | 2025-09-01T17:09:45 | https://www.reddit.com/r/LocalLLaMA/comments/1n5uh44/llamacpp_so_were_not_fully_offloading_to_gpu/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5uh44 | false | null | t3_1n5uh44 | /r/LocalLLaMA/comments/1n5uh44/llamacpp_so_were_not_fully_offloading_to_gpu/ | false | false | self | 3 | null |
Dear Vibe coders.... raise your hand if you hit the rate limit | 0 | 2025-09-01T17:07:55 | theundertakeer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5ufc1 | false | null | t3_1n5ufc1 | /r/LocalLLaMA/comments/1n5ufc1/dear_vibe_coders_raise_your_hand_if_you_hit_the/ | false | false | 0 | {'enabled': True, 'images': [{'id': '38ea5EP1EAf3-03UzIx7FqCp--ex_2RrVsHh-iq6lw4', 'resolutions': [{'height': 206, 'url': 'https://preview.redd.it/n9muq2255lmf1.jpeg?width=108&crop=smart&auto=webp&s=93c6242562151afb40c3d41d8480fbda1082448d', 'width': 108}, {'height': 412, 'url': 'https://preview.redd.it/n9muq2255lmf1.jpeg?width=216&crop=smart&auto=webp&s=9f77cefdfcc81598be4607b08c98827dc083ca88', 'width': 216}, {'height': 610, 'url': 'https://preview.redd.it/n9muq2255lmf1.jpeg?width=320&crop=smart&auto=webp&s=81049657c09128877da95d35834045a846455268', 'width': 320}], 'source': {'height': 721, 'url': 'https://preview.redd.it/n9muq2255lmf1.jpeg?auto=webp&s=0e5e6ab15d730461a919dc1310645072578fdc73', 'width': 378}, 'variants': {}}]} | |||
Who? Me? Nah thanks, I am Engineer not a Viber...or...whatsapp... | 153 | 2025-09-01T17:05:46 | theundertakeer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5uda9 | false | null | t3_1n5uda9 | /r/LocalLLaMA/comments/1n5uda9/who_me_nah_thanks_i_am_engineer_not_a/ | false | false | default | 153 | {'enabled': True, 'images': [{'id': 'bprgibur4lmf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/bprgibur4lmf1.jpeg?width=108&crop=smart&auto=webp&s=a009906ac7e2a1dc4c8b744d575809e3ddba2cf7', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/bprgibur4lmf1.jpeg?width=216&crop=smart&auto=webp&s=fe7809a84f51d5a21ff66fa250f8e26c7f1ba75c', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/bprgibur4lmf1.jpeg?width=320&crop=smart&auto=webp&s=3e47467726387e8550053c8e228b9ac4fcefada4', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/bprgibur4lmf1.jpeg?width=640&crop=smart&auto=webp&s=e914bc544371352d5a37ccf88f9d04e75493dd5c', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/bprgibur4lmf1.jpeg?width=960&crop=smart&auto=webp&s=ab3fe5e34adbc66eb8af21a331cd42feff47a298', 'width': 960}, {'height': 1081, 'url': 'https://preview.redd.it/bprgibur4lmf1.jpeg?width=1080&crop=smart&auto=webp&s=e58ac207d41b75088c7286d601e731a3e0b1bf75', 'width': 1080}], 'source': {'height': 1081, 'url': 'https://preview.redd.it/bprgibur4lmf1.jpeg?auto=webp&s=ef6854483be9925986c52976b9f899ffdba191c5', 'width': 1080}, 'variants': {}}]} | ||
AMD 6x7900xtx 24GB + 2xR9700 32GB VLLM QUESTIONS | 173 | Dear Reddit community, over the last two years our PC with one 7900 XTX gradually grew into this machine.
I am trying to find a solution to utilize it for 2-3 parallel queries at high speed with the qwen3-coder-flash model, or with a quantized version of qwen3-235b-instruct.
I have tested different ways to launch vLLM with different cards, but it gets stuck on CUDA graph capture (I also tried disabling it with enforce\_eager).
version: '3.8'
services:
vllm:
pull_policy: always
tty: true
restart: unless-stopped
ports:
- 8000:8000
image: rocm/vllm-dev:nightly_main_20250817
shm_size: '128g'
volumes:
- /mnt/tb_disk/llm:/app/models
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
- /dev/mem:/dev/mem
environment:
- ROCM_VISIBLE_DEVICES=1,2,3,4,5,7,0,6
- HIP_VISIBLE_DEVICES=1,2,3,4,5,7,0,6
- VLLM_USE_V1=0
- VLLM_ATTENTION_BACKEND=ROCM_FLASH
- ROCM_USE_FLASH_ATTN_V2_TRITON=True
- VLLM_USE_TRITON_FLASH_ATTN=1
- VLLM_CUSTOM_OPS=all
- NCCL_DEBUG=ERROR
- PYTORCH_HIP_ALLOC_CONF=expandable_segments:True
command: |
sh -c '
vllm serve /app/models/models/vllm/Qwen3-Coder-30B-A3B-Instruct \
--served-model-name qwen3-235b-a22b:Q2_K_XL \
--max-model-len 131072 \
--gpu-memory-utilization 0.97 \
--tensor-parallel-size 4 \
--enable-auto-tool-choice \
--disable-log-requests \
--tool-call-parser qwen3_coder \
--enable-chunked-prefill \
--max-num-batched-tokens 4096 \
--max-num-seqs 8
'
volumes: {}
**This works OK for -tp 4, but with -tp 8 it always gets stuck.**
I know about llama.cpp, but it's very slow compared to the same utilization in vLLM. Maybe someone here has successfully launched tensor parallelism in TGI?
**Interesting thing: the R9700 does not lose inference speed whether the model is distributed between two cards or kept on one.**
Feel free to ask any question about this machine.
Also, some GPTQ models work and some don't; maybe it's due to the quantization format.
Other helpful info: MB: MZ32-AR0 3200MT/s x8 32gb, 2x PSU. | 2025-09-01T16:55:26 | djdeniro | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5u32n | false | null | t3_1n5u32n | /r/LocalLLaMA/comments/1n5u32n/amd_6x7900xtx_24gb_2xr9700_32gb_vllm_questions/ | false | false | default | 173 | {'enabled': True, 'images': [{'id': 'txo8g9us0lmf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/txo8g9us0lmf1.jpeg?width=108&crop=smart&auto=webp&s=5709ccecae284be431a5d36dcf1e9029ee4d0b1b', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/txo8g9us0lmf1.jpeg?width=216&crop=smart&auto=webp&s=89354c3c6e710df93c9d1035029e93a1087342be', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/txo8g9us0lmf1.jpeg?width=320&crop=smart&auto=webp&s=e397a328e8720d37e7e6ce2bee2d3213fd2fd2eb', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/txo8g9us0lmf1.jpeg?width=640&crop=smart&auto=webp&s=1a9855934c099881ce9400ed4488f4043ebbec7e', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/txo8g9us0lmf1.jpeg?width=960&crop=smart&auto=webp&s=147bb8805421d58f2d0b8344d868126cee4877d0', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/txo8g9us0lmf1.jpeg?width=1080&crop=smart&auto=webp&s=1998729018f45cd80cd25a1c280323dae72eb9a8', 'width': 1080}], 'source': {'height': 960, 'url': 'https://preview.redd.it/txo8g9us0lmf1.jpeg?auto=webp&s=d2046e743379f4acc719f3dc921f7e19844d03d4', 'width': 1280}, 'variants': {}}]} | |
VLLM FP16 weights + FP8 KV vs FP8 weights + FP8 KV in terms of speed? | 6 | How much faster would TTFT get if I used an FP8 model instead of FP16? I'm already using an FP8 KV cache, but merging my LoRA into FP8 model weights seems to be much more complicated.
Sorry if I'm using wrong terms and words I'm not very technical :p | 2025-09-01T16:32:15 | https://www.reddit.com/r/LocalLLaMA/comments/1n5tgqv/vllm_fp16_weights_fp8_kv_vs_fp8_weights_fp8_kv_in/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5tgqv | false | null | t3_1n5tgqv | /r/LocalLLaMA/comments/1n5tgqv/vllm_fp16_weights_fp8_kv_vs_fp8_weights_fp8_kv_in/ | false | false | self | 6 | null |
Are there any SDKs that offer native tool calling functionality that can be used with any LLMs | 2 | Title says all. I know most model providers offer this on their cloud APIs but I am looking for an SDK that implements tool calling so that it can be used with any open weights model. | 2025-09-01T16:19:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n5t4km/are_there_any_sdks_that_offer_native_tool_calling/ | Ok_Needleworker_5247 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5t4km | false | null | t3_1n5t4km | /r/LocalLLaMA/comments/1n5t4km/are_there_any_sdks_that_offer_native_tool_calling/ | false | false | self | 2 | null |
Which llm do you use for agentic coding with 2 rtx 3090s + roo code? | 7 |
I am currently running GLM-4.5 Air at Q1 with a draft model, using llama-swap + llama.cpp
models:
"glm45-air":
cmd: |
/home/filippo/llama.cpp/build/bin/llama-server
-hf unsloth/GLM-4.5-Air-GGUF:IQ1_M
--split-mode layer --tensor-split 0.49,0.51
--flash-attn
-c 85000 --ubatch-size 512
--cache-type-k q4_1 --cache-type-v q4_1
-ngl 99 --threads -1
--port ${PORT} --host 0.0.0.0
--no-mmap
-hfd mradermacher/GLM-4.5-DRAFT-0.6B-v3.0-i1-GGUF:Q6_K -ngld 99
--kv-unified
I am getting \~300-400 t/s prompt processing and 20-30 tk/s generation, and it's the only model that works properly with Roo Code for me
I have tried unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF\_Qwen3-Coder-30B-A3B-Instruct-UD-Q6\_K\_XL.gguf, unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF\_Qwen3-30B-A3B-Thinking-2507-UD-Q6\_K\_XL.gguf, unsloth\_Qwen3-30B-A3B-Instruct-2507-GGUF\_Qwen3-30B-A3B-Instruct-2507-UD-Q6\_K\_XL.gguf
but they either think too much before they get the tool calls right, or they just generate wrong tool calls
I also tried Devstral Small, but had the same problem
Can you share your cmd lines and your PP and tk/s speeds?
I have 128 GB of DDR4-2300 RAM (4 sticks), but offloading to RAM is quite slow and not good enough for agentic coding for me
| 2025-09-01T16:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/1n5szaj/which_llm_do_you_use_for_agentic_coding_with_2/ | Filo0104 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5szaj | false | null | t3_1n5szaj | /r/LocalLLaMA/comments/1n5szaj/which_llm_do_you_use_for_agentic_coding_with_2/ | false | false | self | 7 | null |
How much vram needed to run higgs audio v2 in real time? | 1 | I was wondering how much GPU VRAM it would take for Higgs Audio to run at real-time speed. | 2025-09-01T16:00:18 | https://www.reddit.com/r/LocalLLaMA/comments/1n5sldo/how_much_vram_needed_to_run_higgs_audio_v2_in/ | Forsaken-Turnip-6664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5sldo | false | null | t3_1n5sldo | /r/LocalLLaMA/comments/1n5sldo/how_much_vram_needed_to_run_higgs_audio_v2_in/ | false | false | self | 1 | null |
LocalLLM for video creation? | 0 | I have macbook pro m4 max chip with 128gb of ram and 2tb ssd and 16core cpu / 40core gpu.
which model is decent and i can run on my local setup in order to create short videos? 40-60 seconds?
Thanks in advance! | 2025-09-01T15:30:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n5rt8j/localllm_for_video_creation/ | Ok-Respond2582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5rt8j | false | null | t3_1n5rt8j | /r/LocalLLaMA/comments/1n5rt8j/localllm_for_video_creation/ | false | false | self | 0 | null |
How do you manage long-running GPU jobs without wasting hours of compute? | 0 | For example:
* Do you checkpoint aggressively?
* Run on smaller GPUs and distribute?
* Or just accept idle time as part of the game?
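On the checkpointing option: the pattern is cheap to get right. A pure-Python sketch of it (a real training job would save model/optimizer state with its framework's own serializer, e.g. torch.save, but the resume-and-atomic-write shape is the same):

```python
import os
import pickle
import tempfile

CKPT = os.path.join(tempfile.gettempdir(), "job.ckpt")

def save_ckpt(state: dict) -> None:
    # Write to a temp file then rename, so a crash mid-write
    # never corrupts the previous checkpoint.
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)

def load_ckpt() -> dict:
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "loss_sum": 0.0}  # fresh start

def run(total_steps: int = 100, ckpt_every: int = 10) -> dict:
    state = load_ckpt()                      # resume wherever we left off
    for step in range(state["step"], total_steps):
        state["loss_sum"] += 1.0 / (step + 1)  # stand-in for real work
        state["step"] = step + 1
        if state["step"] % ckpt_every == 0:
            save_ckpt(state)                 # cheap, frequent, atomic
    save_ckpt(state)
    return state
```

With the checkpoint interval chosen so that save time is a small fraction of compute time, a preempted run loses at most one interval of work instead of hours.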
I’ve been thinking a lot about **GPU utilization** and where most people see inefficiencies. Curious what’s working for you. | 2025-09-01T15:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n5rraz/how_do_you_manage_longrunning_gpu_jobs_without/ | Significant-Cash7196 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5rraz | false | null | t3_1n5rraz | /r/LocalLLaMA/comments/1n5rraz/how_do_you_manage_longrunning_gpu_jobs_without/ | false | false | self | 0 | null |
Need advice on setting up RAG with multi-modal data for an Agent | 2 | I am working on a digital agent, where I have information about a product from 4 different departments. Below is the nature of each department's data source:
1. Data Source-1: The data is in text summary format. In the future, I am thinking of turning it into structured data for better RAG retrieval
2. Data Source-2: For each product there are two versions: a summary (50 to 200 words) and a very detailed document with lots of sections and description (\~3000 words)
3. Data Source-3: For each product there are two versions: a summary (50 to 200 words) Excel file and a very detailed document with lots of sections and description (\~3000 words)
4. Data Source-4: Old reference documents (PDF) related to that product; each document contains anywhere between 10 and 15 pages, with a word count of \~5000 words
My thought process is that, to handle any question related to a specific product, I should be able to extract all the metadata related to that product. But if I add all the content related to a product every time, the prompt length will increase significantly.
For now I am taking the summary data of each data source as metadata, and keeping the product name in the vector database. So when a user asks a question about a specific product, I can identify the correct product through RAG and access all its content from the metadata. I know I could stick with conditional logic for getting the metadata, but I am trying RAG, thinking I may use additional information in the embedding extraction.
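That scheme is easy to prototype. A toy sketch (hypothetical product names; a trivial bag-of-words similarity stands in for the real embedding model and vector database):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real sentence embedding: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Index holds one vector per product name; the full per-department
# metadata lives outside the index and is attached only after a match.
products = {
    "Widget Alpha": {"ds1_summary": "...", "ds2_summary": "...", "docs": []},
    "Gadget Beta":  {"ds1_summary": "...", "ds2_summary": "...", "docs": []},
}
index = {name: embed(name) for name in products}

def lookup(question: str) -> tuple:
    q = embed(question)
    best = max(index, key=lambda name: cosine(index[name], q))
    return best, products[best]

name, meta = lookup("what changed in widget alpha last release?")
```

The point of the shape: the prompt only ever receives the metadata for the one matched product, so prompt length stays bounded no matter how many products exist.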
Now my question is: for Data Sources 3 and 4, some specific questions need the detailed document information. Since I can't send this every time due to context and token-usage limitations, I am looking at building RAG over these documents, but I am not sure how scalable that is, because if I want to maintain 1000 different products, then I would need 2000 separate vector databases.
Is my thought process correct, or is there any better alternative. | 2025-09-01T15:18:19 | https://www.reddit.com/r/LocalLLaMA/comments/1n5rhnt/need_advice_on_setting_up_rag_with_multimodal/ | Ahmad401 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5rhnt | false | null | t3_1n5rhnt | /r/LocalLLaMA/comments/1n5rhnt/need_advice_on_setting_up_rag_with_multimodal/ | false | false | self | 2 | null |
I'm building local, open-source, fast, efficient, minimal, and extendible RAG library I always wanted to use | 178 | I got tired of overengineered and bloated AI libraries and needed something to prototype local RAG apps quickly, so I decided to make my own library.
Features:
➡️ Get to prototyping local RAG applications in seconds: uvx rocketrag prepare & uv rocketrag ask is all you need
➡️ CLI first interface, you can even visualize embeddings in your terminal
➡️ Native llama.cpp bindings - no Ollama bullshit
➡️ Ready-to-use minimalistic web app with chat, vector visualization, and document browsing
➡️ Minimal footprint: milvus-lite, llama.cpp, kreuzberg, simple html web app
➡️ Tiny but powerful - use any chunking method from chonkie, any LLM with a .gguf provided, and any embedding model from sentence-transformers
➡️ Easily extendible - implement your own document loaders, chunkers, and DBs; contributions welcome!
Link to repo: [https://github.com/TheLion-ai/RocketRAG](https://github.com/TheLion-ai/RocketRAG)
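As an illustration of how small a custom chunker can be, here is a generic fixed-size/overlap chunker (a sketch of the general technique, not tied to RocketRAG's actual extension interface, which may differ):

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list:
    """Split text into fixed-size character windows with overlap,
    so content cut at one boundary still appears whole in a neighbor."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    # Stop once the remaining tail is already covered by the previous window.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```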
Let me know what you think. If anybody wants to collaborate and contribute DM me or just open a PR! | 2025-09-01T15:17:56 | https://v.redd.it/tqnduvlflkmf1 | Avienir | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5rhbd | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/tqnduvlflkmf1/DASHPlaylist.mpd?a=1759331891%2COTk0MTljZTJjMjkzMmQ5NTAyNjhhZDY2YmYxYjlmMDQwYzcxYzdmNmVkODk1YzQwZjNmNjRiNmI0ZmI2ODcyOQ%3D%3D&v=1&f=sd', 'duration': 78, 'fallback_url': 'https://v.redd.it/tqnduvlflkmf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/tqnduvlflkmf1/HLSPlaylist.m3u8?a=1759331891%2CYTBlNjhmNzJiMDE4YTFhMGJkMjgyNTU4OTNiYTgzZTBiYWEzODQ5ZjY5MzYxMGZlMzFmNjBhM2ViYTQxY2JkZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tqnduvlflkmf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 876}} | t3_1n5rhbd | /r/LocalLLaMA/comments/1n5rhbd/im_building_local_opensource_fast_efficient/ | false | false | 178 | {'enabled': False, 'images': [{'id': 'cXB3bmh1bGZsa21mMfbflv5Di1j64vZv4v6FbqGgackbIUKWjlzVaYUu9HIx', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/cXB3bmh1bGZsa21mMfbflv5Di1j64vZv4v6FbqGgackbIUKWjlzVaYUu9HIx.png?width=108&crop=smart&format=pjpg&auto=webp&s=75b99779bcd1370175c776d317ac4b5033bec395', 'width': 108}, {'height': 177, 'url': 'https://external-preview.redd.it/cXB3bmh1bGZsa21mMfbflv5Di1j64vZv4v6FbqGgackbIUKWjlzVaYUu9HIx.png?width=216&crop=smart&format=pjpg&auto=webp&s=ebf481b2fd1359bf49876a7f7dc78a3a2b81073f', 'width': 216}, {'height': 263, 'url': 'https://external-preview.redd.it/cXB3bmh1bGZsa21mMfbflv5Di1j64vZv4v6FbqGgackbIUKWjlzVaYUu9HIx.png?width=320&crop=smart&format=pjpg&auto=webp&s=12eeb30bd6f3005b8466c09577f9d225ae0a9a13', 'width': 320}, {'height': 526, 'url': 
'https://external-preview.redd.it/cXB3bmh1bGZsa21mMfbflv5Di1j64vZv4v6FbqGgackbIUKWjlzVaYUu9HIx.png?width=640&crop=smart&format=pjpg&auto=webp&s=e5b749e6a74ee0d7ba67b0dc14a5095589b43e82', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cXB3bmh1bGZsa21mMfbflv5Di1j64vZv4v6FbqGgackbIUKWjlzVaYUu9HIx.png?format=pjpg&auto=webp&s=26df71eff7702090b088285fb6b81d8313b5e157', 'width': 876}, 'variants': {}}]} | |
Old audio recording enhancement Model | 6 | Hi all,
I am trying to find out if there is any model that can be used to enhance old audio tape recordings and recover their lost frequencies.
The requirement: I have old music band recordings on tapes. The tapes lose a lot of frequencies, and I am looking for a way to generate them back.
Any ideas would be helpful.
What would a setup for this look like, software-wise? I am currently using LM Studio and llama.cpp on a Ryzen AI 395+ Max with 128 GB.
Thanks | 2025-09-01T15:02:32 | https://www.reddit.com/r/LocalLLaMA/comments/1n5r2qr/old_audio_recording_enhancement_model/ | Recent-Success-1520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5r2qr | false | null | t3_1n5r2qr | /r/LocalLLaMA/comments/1n5r2qr/old_audio_recording_enhancement_model/ | false | false | self | 6 | null |
Can someone help me with where to generate or get a roleplay dataset (mid-nsfw) to fine-tune LLaMA 3.1 8b? | 21 | 😶 | 2025-09-01T14:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n5qmu1/can_someone_help_me_with_where_to_generate_or_get/ | internal-pagal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5qmu1 | false | null | t3_1n5qmu1 | /r/LocalLLaMA/comments/1n5qmu1/can_someone_help_me_with_where_to_generate_or_get/ | false | false | nsfw | 21 | null |
How do you handle background noise & VAD for real-time voice agents? | 5 | I’ve been experimenting with building a voice agent using real-time STT, but I’m running into the classic issue: the transcriber happily picks up everything — background noise, side voices, even silence that gets misclassified.
STT: GPT-4o Transcribe (using its VAD) over WebSocket
For folks who’ve built real-time voice agents / caller bots:
How do you decide when to turn STT on/off so it only captures the right user at the right time?
Do you rely mostly on model-side VAD (like GPT-4o’s) or add another layer (Silero VAD, WebRTC noise suppression, Krisp, etc.)?
Any best practices for keeping things real-time while filtering background voices?
Do you handle this more on the client side (mic constraints, suppression) or on the backend?
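To make the question concrete, here's the kind of client-side gating logic I mean, a toy sketch only: real setups would swap the RMS check for something like Silero VAD, and the threshold and hangover values below are made up.

```python
# Crude client-side gate (illustrative): stream frames to STT while speech
# is detected, with a "hangover" so brief pauses don't cut the user off.

def rms(frame):
    """Root-mean-square energy of one frame of 16-bit samples."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def gate(frames, threshold=500.0, hangover=3):
    """Yield (frame, send_to_stt) pairs."""
    quiet = hangover  # frames of silence seen since the last speech frame
    for frame in frames:
        if rms(frame) >= threshold:
            quiet = 0
        else:
            quiet += 1
        yield frame, quiet <= hangover

loud = [1000] * 160   # a "speech" frame
soft = [10] * 160     # a near-silent frame
stream = [soft, loud, loud, soft, soft, soft, soft, soft]
decisions = [send for _, send in gate(stream)]
# The leading silent frame is dropped; speech opens the gate, and the
# hangover keeps it open for 3 extra frames before closing again.
```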
I’m especially curious about what has actually worked for others in production | 2025-09-01T14:40:30 | https://www.reddit.com/r/LocalLLaMA/comments/1n5qi4l/how_do_you_handle_background_noise_vad_for/ | Funny_Working_7490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5qi4l | false | null | t3_1n5qi4l | /r/LocalLLaMA/comments/1n5qi4l/how_do_you_handle_background_noise_vad_for/ | false | false | self | 5 | null |
Integrated experience between desktop and mobile, is there a way? | 2 | Hi! 😀
I'm new to running local LLMs, and my main objective started with wanting to avoid daily limits and monthly subscriptions while being able to use an LLM as a tool for light research, ideation, and mundane work.
I installed LM Studio on my macbook and tested a bunch of models and overall had good results and it sparked a genuine interest to keep exploring.
In the meantime on iOS, I've tried the following apps:
* H2O AI - the experience was lacking, I didn't like it...
* Pocket Pal - quickly found a show-stopping bug that I reported [here](https://www.reddit.com/r/LocalLLaMA/comments/1n5e605/pocket_pal_on_ios_completion_failed_context_is/) .
* Apollo powered by Liquid AI - this is the one that I'm testing now. First impression is solid, it opened up a door to signup with OpenRouter (which I had not tried before) and that's the way I'm currently testing (with one of their free models). Later on I will try loading a model directly into Apollo.
The initial problem seems solved, I'm able to run LLM on both devices.
But right now, I'm thinking... **What would be the best approach to access the same prompts from both desktop and mobile, just like ChatGPT, Grok, or Perplexity allow?**
I understand LM Studio can act as a server and with some network / vpn configuration I would be able to connect to it from my phone, from anywhere, but that would require leaving the computer on all the time and that's far from ideal to me. I've also read a little bit about a separate tool called LLM Pigeon which aims to solve this, but again it relies on the computer running.
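For reference, when the computer is on, reaching LM Studio's OpenAI-compatible server from another device is just an HTTP call. A minimal sketch, where the host IP and model name are placeholders (check LM Studio's Server tab for the real values):

```python
import json
import urllib.request

def build_request(host, prompt, model="local-model"):
    """Build a chat-completions request for LM Studio's default port 1234."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"http://{host}:1234/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("192.168.1.50", "Hello from my phone")
# resp = urllib.request.urlopen(req)          # requires the Mac to be on
# print(json.load(resp)["choices"][0]["message"]["content"])
```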
So... how are you folks dealing with this?
I appreciate your feedback 🙏 | 2025-09-01T14:26:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n5q50r/integrated_experience_between_desktop_and_mobile/ | voprosy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5q50r | false | null | t3_1n5q50r | /r/LocalLLaMA/comments/1n5q50r/integrated_experience_between_desktop_and_mobile/ | false | false | self | 2 | null |
which one is faster? FP16 VLLM or Transformers + BitsAndBytes Q8? | 0 | I mainly need speed especially TTFT, I want to do FP8 on VLLM but it requires, so it comes down to either FP16 VLLM or Q8 non VLLM. which is better? | 2025-09-01T14:17:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n5pwde/which_one_is_faster_fp16_vllm_or_transformers/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5pwde | false | null | t3_1n5pwde | /r/LocalLLaMA/comments/1n5pwde/which_one_is_faster_fp16_vllm_or_transformers/ | false | false | self | 0 | null |
Built a private AI framework that runs fully offline on iPhone (demo inside) (OPEN SOURCE) | 12 | # 🐍 Basilisk AI
🚨 **A framework for a fully customizable private AI system that runs 100% offline.**
Basilisk is built to combine **vision, language, and reasoning** into a single offline pipeline — with no reliance on the cloud. It’s designed as a private, adaptable framework that you can extend to your own use cases.
---
## 🚀 Features
- 🖼️ **MiniVLM2** – Vision-Language model for image understanding
- 🔬 **CNN** – Lightweight convolutional neural network optimized for iOS & NumPy
- 🧠 **MiniLLM** – Small language model for reasoning + dialogue
- ⏳ **MiniLSM** – Context & sequence memory
---
## 🔒 Why Basilisk?
- **Fully Offline** → runs entirely on-device (tested on iPhone with Pyto)
- **Private by Design** → no data ever leaves your device
- **Customizable** → extend the framework with your own datasets & modules
- **Lightweight** → NumPy-only execution, minimal dependencies
---
## 📂 Structure Overview
```
Basilisk/
├── MiniVLM2.py   # Vision-Language module
├── CNNModel.py   # Convolutional Neural Network
├── MiniLLM.py    # Lightweight reasoning model
├── MiniLSM.py    # Sequence & memory module
├── main.py       # Demo runner
└── README.md     # Documentation
```
---
## ⚡ Quick Start
**Requirements**
- Python 3.11+
- Pyto (for iOS)
- NumPy
**Run**

```
python main.py
```
---
## 🎯 Use Cases
- 📊 Data & chart analysis
- 🖼️ Offline image recognition
- 🔍 Local text & document understanding
- 🤖 On-device private assistant
---
## 📜 License & Access
Basilisk is a **framework for a fully customizable private AI system**.
It is open-source inspired, but not yet hosted publicly.
📩 To get the code, just email me: **clucero.2411@gmail.com**
> The future of AI isn’t in the cloud — it’s in your hands.
| 2025-09-01T14:07:20 | https://v.redd.it/q0r6e5s68kmf1 | Traditional_Day2212 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5pnjq | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/q0r6e5s68kmf1/DASHPlaylist.mpd?a=1759327652%2COGRhZDJhMjkxYTg0MjllNTdkMGYxZjE5YjljZGE1NzM5YTA5YTYwMDBiZDJlOGRiNGQ2MWNjN2MwZGRmNmY0Nw%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/q0r6e5s68kmf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/q0r6e5s68kmf1/HLSPlaylist.m3u8?a=1759327652%2COGI0OWMwYTE4NjM3ZmM4MDgxOTNjNjFmZDYyNjJmNjE2MjFiNzNjMTBiNzNlMzcxYThhY2M1MmU5YTkxNDA3Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/q0r6e5s68kmf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 590}} | t3_1n5pnjq | /r/LocalLLaMA/comments/1n5pnjq/built_a_private_ai_framework_that_runs_fully/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'bmdxOXg0czY4a21mMRFTCHiFf2voMJf7wjbDcIgNqYuBxaApNc19HBSW_O8A', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/bmdxOXg0czY4a21mMRFTCHiFf2voMJf7wjbDcIgNqYuBxaApNc19HBSW_O8A.png?width=108&crop=smart&format=pjpg&auto=webp&s=e6f632828a499c47c62cfb8e848c3e156921e429', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/bmdxOXg0czY4a21mMRFTCHiFf2voMJf7wjbDcIgNqYuBxaApNc19HBSW_O8A.png?width=216&crop=smart&format=pjpg&auto=webp&s=63c2b4a97565d22522a8ad9685ce387a5d7bc0b7', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/bmdxOXg0czY4a21mMRFTCHiFf2voMJf7wjbDcIgNqYuBxaApNc19HBSW_O8A.png?width=320&crop=smart&format=pjpg&auto=webp&s=48f319052ca61a48c847bc9b8e546211d7913f48', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/bmdxOXg0czY4a21mMRFTCHiFf2voMJf7wjbDcIgNqYuBxaApNc19HBSW_O8A.png?width=640&crop=smart&format=pjpg&auto=webp&s=51350d312d2e5234c114319cd2dc7550ecb4bd87', 'width': 640}], 'source': {'height': 1920, 'url': 
'https://external-preview.redd.it/bmdxOXg0czY4a21mMRFTCHiFf2voMJf7wjbDcIgNqYuBxaApNc19HBSW_O8A.png?format=pjpg&auto=webp&s=90782f71e314bd8a39aafce93d1900a82d1e5a00', 'width': 886}, 'variants': {}}]} | |
A new model has appeared on LMArena: qwen-max-2025-08-15. | 1 | [removed] | 2025-09-01T14:02:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n5pja8/a_new_model_has_appeared_on_lmarena/ | NikoDraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5pja8 | false | null | t3_1n5pja8 | /r/LocalLLaMA/comments/1n5pja8/a_new_model_has_appeared_on_lmarena/ | false | false | self | 1 | null |
A new model has appeared on LMArena: qwen-max-2025-08-15. | 1 | [removed] | 2025-09-01T13:57:28 | https://www.reddit.com/r/LocalLLaMA/comments/1n5pef7/a_new_model_has_appeared_on_lmarena/ | NikoDraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5pef7 | false | null | t3_1n5pef7 | /r/LocalLLaMA/comments/1n5pef7/a_new_model_has_appeared_on_lmarena/ | false | false | self | 1 | null |
I Vibe Coded this Irresponsible, Open Source, MCP Server: "MCP God Mode" | 0 | This premade MCP Server drops in with **70 tools out of the gate**. Most servers ship with 3–10. This one? It’s meant to feel like “god mode,” giving your AI unfettered access to your OS *by default;* that’s intentional. It’s for OS troubleshooting, experimentation, and yes, some chaos if that’s how you roll. Imagine using Cursor with the ability to troubleshoot your entire OS with one command; you can do that with this.
Be warned: this is not a production-safe build. It’s a playground. Run it if you want your AI acting like a sysadmin, a hacker, and a personal assistant rolled into one.

The README is obnoxious on purpose (vibe-coded charm, if you will). I built it with Cursor: half experimentation, half provocation, and meticulous testing.
# Tools Included (roll call)
health – check system health
system\_info – show system specs
system\_exec – run system commands
proc\_run – run processes
proc\_run\_elevated – run with admin/sudo
shell\_exec\_smart – smart shell execution
system\_monitor – live performance stats
fs\_list – list files/folders
fs\_read\_text – read text files
fs\_write\_text – write text files
fs\_search – search files with regex
file\_system\_advanced – advanced file ops
download\_file – grab files from URL
git\_status – show repo status
win\_processes – manage Windows processes
unix\_processes – manage Linux/macOS processes
process\_management – cross-platform process control
win\_services – manage Windows services
unix\_services – manage Unix services
service\_control – universal service ops
unix\_sudo\_exec – run sudo commands
performance\_monitor – detailed monitoring
system\_maintenance – cleanup + optimize
system\_repair – fix common issues
system\_backup – create backups
disk\_management – handle disks/partitions
create\_restore\_point – make restore points
log\_analysis – scan system logs
security\_audit – full system audit
security\_scan – vulnerability scan
registry\_read – read registry/configs
registry\_write – write registry/configs
network\_diagnostics – ping/traceroute tools
network\_scan – scan LAN devices
network\_advanced – firewall/routing info
win\_advanced – Windows net/system ops
unix\_advanced – Unix net/system ops
api\_client – test APIs
security\_privacy – VPN/proxy/adblock
browser\_control – basic browser control
browser\_automation – automate browsing
browser\_cleanup – clear browser data
browser\_advanced – tabs, history, bookmarks
web\_automation – logins/forms automation
web\_scraping – extract web content
change\_wallpaper – set wallpaper
content\_processing – OCR, PDFs, media
event\_log\_analyzer – parse event logs
email\_status – check email health
email\_config – set up accounts
email\_compose – draft emails
email\_send – send emails
email\_check – check inbox
email\_login – secure login
email\_accounts – manage multiple accounts
email\_set\_active – switch accounts
email\_drafts – manage drafts
calculator – basic calc
math\_calculate – advanced mathjs calc
math\_solve – solve equations
math\_derivative – calculate derivatives
math\_integral – solve integrals
math\_matrix – matrix math
math\_statistics – stats functions
math\_units – convert units
math\_complex – complex numbers
math\_plot – plot functions
dice\_rolling – RNG + dice rolls
rag\_search – search with AI + context
rag\_query – contextual AI queries
**Why care?**
Because it turns Claude/ChatGPT/Cursor into a **sysadmin with no leash**. It bridges Windows and Unix, folds in math/AI logic, email, browser automation, API clients, security scans, backups, system repair, even wallpaper control. It’s like handing your LLM a Swiss Army knife dipped in nitro.
**Repo link:** [https://github.com/BlinkZer0/MCP-God-Mode](https://github.com/BlinkZer0/MCP-God-Mode?utm_source=chatgpt.com)
Curious what tools I *missed*. If you had god mode on your AI, what would you want added? | 2025-09-01T13:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1n5p4ik/i_vibe_coded_this_irresponsible_open_source_mcp/ | Blink_Zero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5p4ik | false | null | t3_1n5p4ik | /r/LocalLLaMA/comments/1n5p4ik/i_vibe_coded_this_irresponsible_open_source_mcp/ | false | false | self | 0 | null |
Epyc 9575F + 4 * 3090 inference speed? | 8 | I’m planning to build a server with 9575F
+ 12 * DDR5 64G 6400 + 4 * 3090, to run local inference with MoE models like DeepSeek R1 or GLM 4.5, and to do a lot of other self-hosted stuff. With ik_llama.cpp or KTransformers, does anyone have a rough idea how many tps I’ll get with GLM 4.5 Q4_K_M with 8.2B active params (for simplicity, assuming zero context)? Moreover, I currently have only one 3090, and I’m still waiting to see if better cards with higher VRAM come out. What’s the approximate tps with only one 3090 and the same CPU setup?
| 2025-09-01T13:42:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n5p1oj/epyc_9575f_4_3090_inference_speed/ | Unhappy-Tangelo5790 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5p1oj | false | null | t3_1n5p1oj | /r/LocalLLaMA/comments/1n5p1oj/epyc_9575f_4_3090_inference_speed/ | false | false | self | 8 | null |
Best benchmarks for testing different GPU's 24-48gb | 0 | I have a host of various cards \[A6000, 4090 24gb, 4090 48gb, 3090ti, A100\]
I'd like to know what benchmarks can compare these cards' memory bandwidths and processing speeds, so I can compare against these newly made RTX 4090 48GB cards.

What are some technical benchmarks I can run that would show how a 4090 48GB compares to an A6000 or two 3090 Tis?
LocalLLaMA-like community for non-local models? | 4 | Can anyone recommend subreddits with a technical audience like this one, but where it’s acceptable to ask non-LLM-related questions? I find myself more and more frustrated with communities like ... (naming them apparently deletes my post, haha).
Bot posts, bot responses and filled with people that literally have no idea what they're talking about.
Where would one go for a technical discussions on non-local models? | 2025-09-01T13:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/1n5ow0z/localllamalike_community_for_nonlocal_models/ | gopietz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5ow0z | false | null | t3_1n5ow0z | /r/LocalLLaMA/comments/1n5ow0z/localllamalike_community_for_nonlocal_models/ | false | false | self | 4 | null |
LocalLLaMA community for non-local models? | 1 | [removed] | 2025-09-01T13:33:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n5otx3/localllama_community_for_nonlocal_models/ | gopietz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5otx3 | false | null | t3_1n5otx3 | /r/LocalLLaMA/comments/1n5otx3/localllama_community_for_nonlocal_models/ | false | false | self | 1 | null |
Best 20-14b No COT tool calling LLM | 3 | Hi,
I’m struggling to find a good choice here. I have a very latency sensitive system that requires an LLM to make multiple independent tool calls. None of the tool calls are particularly difficult (just general search tools) but it needs to be fast.
I designed the system for Llama 3.3 70b but it’s far too slow. Llama 3 8b is a lot faster but fails many tool calls and performs worse.
What do people recommend that has fast time to first token, no cot (to keep latency low), and does well in tool calling?
Don’t worry about hardware assume I can run any size model.
| 2025-09-01T13:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n5oqti/best_2014b_no_cot_tool_calling_llm/ | neeeser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5oqti | false | null | t3_1n5oqti | /r/LocalLLaMA/comments/1n5oqti/best_2014b_no_cot_tool_calling_llm/ | false | false | self | 3 | null |
The next leap in capability: agent operating system | 0 | OpenRouter is very cool but when it adds tool providers and not just models, it will be insane.
OpenAI admits this themselves on their benchmarks. You just can't compare a model versus a **model + tools**. [https://openai.com/index/introducing-gpt-5/](https://openai.com/index/introducing-gpt-5/)
https://preview.redd.it/scolykajxjmf1.png?width=717&format=png&auto=webp&s=1382156862ae94953028caaa4ddac93dd2a1b17a
Right now with openrouter [tool calling](https://openrouter.ai/docs/features/tool-calling), you have to fulfill the tool response yourself. But imagine if they start adding provider endpoints that handle the tool calls and you can just spec them in the json.
Requesty, their overly spammy but otherwise very credible competitor, is very close behind and will no doubt try to do exactly the same thing.
All the majors (pwc, msft, goolge, etc ad nauseum) are building something similar, but typically, they are largely proprietary with huge lock in and very high switching costs.
I hope we can all, as an open community, get behind the companies that follow a **keep it simple** approach to open standards and zero lock-in (complex open standards are just another hidden lock-in method).
My preference is OpenRouter right now because they are open, very street and scrappy, but I'll happily switch to anyone who proves to be both more so and more efficacious.
An example of an even more open and street approach would be the x402 standard, where we don't have to go through a proxy/router. However, unless the providers group up and actively subsidize these efforts, it will probably not become efficacious.
| 2025-09-01T13:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/1n5oitb/the_next_leap_in_capability_agent_operating_system/ | kaggleqrdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5oitb | false | null | t3_1n5oitb | /r/LocalLLaMA/comments/1n5oitb/the_next_leap_in_capability_agent_operating_system/ | false | false | 0 | null | |
How to connect Jan to local Ollama? | 0 | i tried with /v1/ as well but it's not working
tried an empty api key as well
open webui works fine | 2025-09-01T12:47:24 | MobyFreak | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5nrel | false | null | t3_1n5nrel | /r/LocalLLaMA/comments/1n5nrel/how_to_connect_jan_to_local_ollama/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 't04n88ulujmf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/t04n88ulujmf1.png?width=108&crop=smart&auto=webp&s=9b1e24008591e2cc4f30eecd7e84149e679abfad', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/t04n88ulujmf1.png?width=216&crop=smart&auto=webp&s=5db02ce32d96eeadf8fdea484a0c5dd29b8476ac', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/t04n88ulujmf1.png?width=320&crop=smart&auto=webp&s=85dea6c55a7a9481fea6f558909eeb9e811db31a', 'width': 320}, {'height': 376, 'url': 'https://preview.redd.it/t04n88ulujmf1.png?width=640&crop=smart&auto=webp&s=1946e9a1ca0627de0664ae28810e2153b933f6db', 'width': 640}, {'height': 565, 'url': 'https://preview.redd.it/t04n88ulujmf1.png?width=960&crop=smart&auto=webp&s=a3697215f0a9a48c06085ac939c3fe9e50603230', 'width': 960}, {'height': 635, 'url': 'https://preview.redd.it/t04n88ulujmf1.png?width=1080&crop=smart&auto=webp&s=6d44a84cd9f0680cca5c25d8922d769f8b714b89', 'width': 1080}], 'source': {'height': 1096, 'url': 'https://preview.redd.it/t04n88ulujmf1.png?auto=webp&s=4f612da57d4f16445ef6facb132224c7de1582cf', 'width': 1862}, 'variants': {}}]} | |
Whats your take RAG or MCP will lead the future? | 0 | I have summarised my understanding and I would love to know your POV on this:
* **RAG integrates language generation with real-time information** retrieval from external sources. It improves the accuracy and relevancy of LLM responses by fetching updated data without retraining. RAG uses vector databases and frameworks like Langchain or LlamaIndex for storing and retrieving semantically relevant data chunks to answer queries dynamically. Its main advantages include dynamic knowledge access, improved factual accuracy, scalability, reduced retraining costs, and fast iteration. However, RAG requires manual content updates, may retrieve semantically close but irrelevant info, and does not auto-update with user corrections.
* **MCP provides persistent, user-specific memory and context to LLMs**, enabling them to interact with multiple external tools and databases in real-time. It stores structured memory across sessions, allowing personalization and stateful interactions. MCP's strengths include persistent memory with well-defined schemas, memory injection into prompts for personalization, and integration with tools for automating actions like sending emails or scheduling. Limitations include possible confusion from context overload with many connections and risks from malicious data inputs.
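To ground the RAG description above, here is a toy sketch of the retrieval step only. Word overlap stands in for real embeddings, and the chunks are made up; production systems use learned embeddings plus a vector store like those behind Langchain or LlamaIndex.

```python
# Score stored chunks against a query and prepend the best match to the
# prompt — the core retrieve-then-generate loop of RAG.
def score(query, chunk):
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q | c)  # Jaccard overlap as a cheap stand-in

chunks = [
    "MCP stores persistent user memory across sessions",
    "RAG retrieves relevant chunks from a vector database at query time",
]
query = "how does RAG fetch chunks from the vector database"
best = max(chunks, key=lambda ch: score(query, ch))
prompt = f"Context: {best}\n\nQuestion: {query}"
```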
Here are the key differences between them: [https://hyscaler.com/insights/rag-vs-mcp-full-guide-2/](https://hyscaler.com/insights/rag-vs-mcp-full-guide-2/) | 2025-09-01T12:14:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n5n2k0/whats_your_take_rag_or_mcp_will_lead_the_future/ | kingchaitu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5n2k0 | false | null | t3_1n5n2k0 | /r/LocalLLaMA/comments/1n5n2k0/whats_your_take_rag_or_mcp_will_lead_the_future/ | false | false | self | 0 | null |
Context Reasoning Benchmarks: GPT-5, Claude, Gemini, Grok on Real Tasks | 49 | Hi everyone,
Context reasoning evaluates whether a model can read the provided material and answer only from it. The context reasoning category is part of our Task Completion Benchmarks. It tests LLMs on grounded question answering with strict use of the provided source, long context retrieval, and resistance to distractors across documents, emails, logs, and policy text.
**Quick read on current winners**
Top tier (score ≈97): Claude Sonnet 4, GPT-5-mini
Next tier (≈93): Gemini 2.5 Flash, Gemini 2.5 Pro, Claude Opus 4, OpenAI o3
Strong group (≈90–88): Claude 3.5 Sonnet, GLM-4.5, GPT-5, Grok-4, GPT-OSS-120B, o4-mini.
**A tricky failure case to watch for**
We include tasks where relevant facts are dispersed across a long context, like a travel journal with scattered city mentions. Many models undercount unless they truly track entities across paragraphs. The better context reasoners pass this reliably.
**Takeaway**
Context use matters as much as raw capability. Anthropic’s recent Sonnet models, Google’s Gemini 2.5 line, and OpenAI’s new 5-series (especially mini) show strong grounding on these tasks.
**You can see the category, examples, and methodology here:**
[https://opper.ai/tasks/context-reasoning](https://opper.ai/tasks/context-reasoning)
For those building with it, what strengths or edge cases are you seeing in context-heavy workloads? | 2025-09-01T12:14:21 | facethef | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5n2h3 | false | null | t3_1n5n2h3 | /r/LocalLLaMA/comments/1n5n2h3/context_reasoning_benchmarks_gpt5_claude_gemini/ | false | false | default | 49 | {'enabled': True, 'images': [{'id': 'h8d68m9enjmf1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/h8d68m9enjmf1.png?width=108&crop=smart&auto=webp&s=b36de2df8f307744234c217847d68f4f6a87bab3', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/h8d68m9enjmf1.png?width=216&crop=smart&auto=webp&s=a97e1144e0fa2de3a178c596fabae66bf33c5bd0', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/h8d68m9enjmf1.png?width=320&crop=smart&auto=webp&s=ccffd6a7d02bef4b29f98584f54f2c05845ed586', 'width': 320}, {'height': 437, 'url': 'https://preview.redd.it/h8d68m9enjmf1.png?width=640&crop=smart&auto=webp&s=d21724bca765d6cd8e82243cab247845f595ebca', 'width': 640}, {'height': 655, 'url': 'https://preview.redd.it/h8d68m9enjmf1.png?width=960&crop=smart&auto=webp&s=63d8a3ca27760386a524d2943ff4aa7d84056b8f', 'width': 960}, {'height': 737, 'url': 'https://preview.redd.it/h8d68m9enjmf1.png?width=1080&crop=smart&auto=webp&s=cf6f4897001b5967000fb1c228eb8b5028bd3ecf', 'width': 1080}], 'source': {'height': 1864, 'url': 'https://preview.redd.it/h8d68m9enjmf1.png?auto=webp&s=59a10ef2b609a9d717f498285b92f26ed127bd72', 'width': 2728}, 'variants': {}}]} | |
Hardware selection for LocalLLM + Obsidian Vault (PKM) | 7 | Hi guys, as the title suggests, I am getting into using PKM for my notes. I have been using google studio API keys to run AI assistant with my vault notes and RAG embedding to run my queries. Honesty I am blown away with the personal performance increase that I am feeling with the setup. I am ready to invest around 2500 euros for a local AI setup as I don't want to share my information stored in notes with google for privacy reasons. I am torn between a RTX 5080 setup vs Framework 125 Gb desktop. I am planning to design my own pipelines and integrate AI agents running locally with my notes to give me best cognitive improvement. I am interested in building a smart second brain that works. Although framework can run larger model, but as I want to get my hands dirty with trial and error, I am hesitant that having a iGPU that does not use CUDA might be a bottleneck. At the same time RTX offers better token generation but running larger models will be a bottleneck, Please let me know if you have any suggestions for hardware and LLM selection, I am planning to spend around 2500 euros for my build.
| 2025-09-01T12:04:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n5mvkp/hardware_selection_for_localllm_obsidian_vault_pkm/ | Dethros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5mvkp | false | null | t3_1n5mvkp | /r/LocalLLaMA/comments/1n5mvkp/hardware_selection_for_localllm_obsidian_vault_pkm/ | false | false | self | 7 | null |
What are your struggles with tool-calling and local models? | 6 | Hey folks
I've been diving into tool-calling with some local models and honestly, it's been a bit of a grind. It feels like getting consistent, reliable tool use out of local models is a real challenge.
What is your experience?
Personally, I'm running into issues like models either not calling the right tool, or calling it correctly but then returning plain text instead of a properly formatted tool call.
It's frustrating when you know your prompting is solid because it works flawlessly with something like an OpenAI model.
I'm curious to hear about your experiences. What are your biggest headaches with tool-calling?
* What models have you found to be surprisingly good (or bad) at it?
* Are there any specific prompting techniques or libraries that have made a difference for you?
* Is it just a matter of using specialized function-calling models?
* How much does the client or inference engine impact success?
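To make the "plain text instead of a tool call" failure concrete, here's the kind of salvage-and-validate step I mean. A sketch only: the tool name and reply text are made up, and real setups might prefer grammar-constrained decoding instead.

```python
# Don't trust the model to emit a clean tool call — try to salvage a JSON
# object out of whatever text came back, and validate it before dispatching.
import json
import re

def extract_tool_call(text, known_tools):
    """Return {"name", "arguments"} if a valid call is buried in text, else None."""
    for match in re.finditer(r"\{.*\}", text, re.DOTALL):
        try:
            obj = json.loads(match.group())
        except json.JSONDecodeError:
            continue
        if obj.get("name") in known_tools and isinstance(obj.get("arguments"), dict):
            return obj
    return None

reply = 'Sure! I will search for that: {"name": "web_search", "arguments": {"query": "llama 3"}}'
call = extract_tool_call(reply, {"web_search"})
```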
Just looking to hear experiences to see if it's worth the investment to build something that makes this easier for people! | 2025-09-01T11:48:06 | https://www.reddit.com/r/LocalLLaMA/comments/1n5mjps/what_are_your_struggles_with_toolcalling_and/ | juanviera23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5mjps | false | null | t3_1n5mjps | /r/LocalLLaMA/comments/1n5mjps/what_are_your_struggles_with_toolcalling_and/ | false | false | self | 6 | null |
Best UGI models that are runnable on consumer-grade hardware? | 5 | I've been looking at the [UGI leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard) and whilst it's useful, a lot of the best models are fully proprietary or just enormous (600B params or whatever) and I'm wanting something with more like 20B params or less. What have you found is the best truly uncensored model with as little political lean as possible that can be run locally on consumer-grade hardware? | 2025-09-01T10:59:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n5lnfa/best_ugi_models_that_are_runnable_on/ | jez999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5lnfa | false | null | t3_1n5lnfa | /r/LocalLLaMA/comments/1n5lnfa/best_ugi_models_that_are_runnable_on/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE.png?width=108&crop=smart&auto=webp&s=4fed45d58e99cc83855597120854de89c347e568', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE.png?width=216&crop=smart&auto=webp&s=2510e708c2f0841ae632c158f3f56ea96c1b7d84', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE.png?width=320&crop=smart&auto=webp&s=6d26bbe082978bb449bd7c6f7af816eb0a541206', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE.png?width=640&crop=smart&auto=webp&s=d62b3c80de08418dce193019680397aa5951f826', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE.png?width=960&crop=smart&auto=webp&s=682b0c8f47cb16a6dd30410b7c2c011a7aa3b0c8', 'width': 960}, {'height': 583, 'url': 
'https://external-preview.redd.it/rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE.png?width=1080&crop=smart&auto=webp&s=d3d89fef73f3019551d5197716a2352763075e20', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE.png?auto=webp&s=4b549f21f14526b68f8d9b142e6fbb50268e67b7', 'width': 1200}, 'variants': {}}]} |
Using llama in examsprint | 1 | [removed] | 2025-09-01T10:49:58 | https://www.reddit.com/r/LocalLLaMA/comments/1n5lhti/using_llama_in_examsprint/ | good_user_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5lhti | false | null | t3_1n5lhti | /r/LocalLLaMA/comments/1n5lhti/using_llama_in_examsprint/ | false | false | self | 1 | null |
normal PC build with 2GPU AMD RADEON AI PRO R9700 vs 1xR9700 + MS-S1 Max mini PC (powered by AMD Ryzen AI Max+ 395) | 3 | The MS-S1 Max mini PC will be equipped with a full PCIe x16 slot, allowing you to install a discrete graphics card.
I'm already starting to wonder if I should hold off on the 1st option in favor of the 2nd.
Any thoughts on this?
[https://www.techradar.com/pro/this-mini-pc-is-the-first-computer-ever-to-have-a-revolutionary-new-tech-that-allows-usb-to-finally-match-thunderbolt-minisforum-ms-s1-max-has-usb-4-0-v2-ports](https://www.techradar.com/pro/this-mini-pc-is-the-first-computer-ever-to-have-a-revolutionary-new-tech-that-allows-usb-to-finally-match-thunderbolt-minisforum-ms-s1-max-has-usb-4-0-v2-ports) | 2025-09-01T09:49:36 | https://www.reddit.com/r/LocalLLaMA/comments/1n5khlt/normal_pc_build_with_2gpu_amd_radeon_ai_pro_r9700/ | Mundane_Progress_898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5khlt | false | null | t3_1n5khlt | /r/LocalLLaMA/comments/1n5khlt/normal_pc_build_with_2gpu_amd_radeon_ai_pro_r9700/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'qPQq_D_W5qDxz26fvmKQk2rj0gOGL_BH3ZjoYDfRRik', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/qPQq_D_W5qDxz26fvmKQk2rj0gOGL_BH3ZjoYDfRRik.png?width=108&crop=smart&auto=webp&s=7c9078b22782054d769ed62036e14f4def1c0c79', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/qPQq_D_W5qDxz26fvmKQk2rj0gOGL_BH3ZjoYDfRRik.png?width=216&crop=smart&auto=webp&s=877a09b4b39d46d8bad019d31398b565e1483987', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/qPQq_D_W5qDxz26fvmKQk2rj0gOGL_BH3ZjoYDfRRik.png?width=320&crop=smart&auto=webp&s=246047e41fa024b40ebe68789e25d3bb53d4aa5d', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/qPQq_D_W5qDxz26fvmKQk2rj0gOGL_BH3ZjoYDfRRik.png?width=640&crop=smart&auto=webp&s=7ca1773c4a0484d67c3b83674ddf976099f4d503', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/qPQq_D_W5qDxz26fvmKQk2rj0gOGL_BH3ZjoYDfRRik.png?width=960&crop=smart&auto=webp&s=1607ab8a044e063c5c1fa9f88a5aef217490bb16', 'width': 960}], 'source': {'height': 545, 'url': 'https://external-preview.redd.it/qPQq_D_W5qDxz26fvmKQk2rj0gOGL_BH3ZjoYDfRRik.png?auto=webp&s=225522fc90529db096d8004c530c4266b4f5e52d', 'width': 970}, 
'variants': {}}]} |
Macbook Pro M4 Pro 48GB + desktop vs M3 Max 128GB | 2 | I'm just about to place an order for a Macbook Pro, and my current plan is to get a starter computer (14" M4 Pro, 48GB) and save up for a stronger desktop (e.g. Mac Studio) in the future.
Just wanted to explore another option which is pay $1.8+k more and get a 14" M3 Max, 128GB and skip the future desktop. Anyone has experience with the 14" M3 Max? Is the move to 128GB really worth the extra cash (previous generation too). Does it throttle a lot at 14" vs 16"? | 2025-09-01T09:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n5k5rt/macbook_pro_m4_pro_48gb_desktop_vs_m3_max_128gb/ | tangbj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5k5rt | false | null | t3_1n5k5rt | /r/LocalLLaMA/comments/1n5k5rt/macbook_pro_m4_pro_48gb_desktop_vs_m3_max_128gb/ | false | false | self | 2 | null |
Semantic Service Matching | 1 | [removed] | 2025-09-01T09:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n5jx62/semantic_service_matching/ | Accurate_Parsley_663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5jx62 | false | null | t3_1n5jx62 | /r/LocalLLaMA/comments/1n5jx62/semantic_service_matching/ | false | false | self | 1 | null |
What are the best AI generators for creating characters and icons right now? | 0 | Hey everyone! I’m looking for your personal recommendations: what are the best AI tools today for generating characters (like avatars, personas, illustrations) and icons (e.g., for apps, branding)? | 2025-09-01T08:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1n5jn9w/what_are_the_best_ai_generators_for_creating/ | Severe_Basket_7109 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5jn9w | false | null | t3_1n5jn9w | /r/LocalLLaMA/comments/1n5jn9w/what_are_the_best_ai_generators_for_creating/ | false | false | self | 0 | null |
gpt-oss 120b actually isn't that bad. | 129 | Title says it all. I just wanted to make this post to see what everyone else thinks. It runs at a respectable 10\~ tokens a second with 128k context split between a 3090TI and a 3090 (K and V caches on system ram) and did very well on some math and coding tests I put it through. It honestly feels like a lightweight version of ChatGPT which is not something I would complain about given that it's open weight and runs on 2 consumer gpus. It's not perfect and it refuses for absolutely no reason sometimes but for what it is, it's not terrible. It outperforms Llama 3.3 70b in a lot of ways which is my usual go-to but I can't decide if I like it ENOUGH to make it my default. Perhaps maybe I'll try and finetune it for longer answers and less censorship? Idk I just wanted to say that I gave it a shot and as much as I hate what OpenAI has become, I can't really say it's a terrible llm for what it is. The 20b model is still pretty iffy though. | 2025-09-01T08:46:10 | https://www.reddit.com/r/LocalLLaMA/comments/1n5jhts/gptoss_120b_actually_isnt_that_bad/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n5jhts | false | null | t3_1n5jhts | /r/LocalLLaMA/comments/1n5jhts/gptoss_120b_actually_isnt_that_bad/ | false | false | self | 129 | null |
I built, pre-trained, and fine-tuned a small language model and it is truly open-source. | 769 | Okay, most of the time we all read open-source and in reality it is just open-weights. This time it is truly open-source.
Lille is a 130M parameter model trained from scratch and every part of the stack is open. Dataset, Model weights, Training code, Tokenizer, Optimizer, Evaluation framework...
Two versions are available: a base model trained on billions of tokens, and an instruction-tuned version fine-tuned on a curated instruction dataset.
Fun fact: it was trained locally on a single RTX 4070-TI.
I’d love feedback, suggestions, or contributions - whether it’s fine-tuning ideas, evaluation improvements, or even architectural tweaks.
Thanks! Check it out: [Lille 130M Instruct](https://huggingface.co/Nikity/lille-130m-instruct) | 2025-09-01T08:26:34 | itsnikity | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n5j783 | false | null | t3_1n5j783 | /r/LocalLLaMA/comments/1n5j783/i_built_pretrained_and_finetuned_a_small_language/ | false | false | 769 | {'enabled': True, 'images': [{'id': 'S1xSHjWdfK3NO_ct4yc2IbTa-w05N37rBHmA5KT39pU', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/cwyoa0f6kimf1.png?width=108&crop=smart&auto=webp&s=0a375db15500cfaf7b99161da27d6ee9952d9361', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/cwyoa0f6kimf1.png?width=216&crop=smart&auto=webp&s=c6ffc1e5686ddefe69ded1161b42e717675d73f8', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/cwyoa0f6kimf1.png?width=320&crop=smart&auto=webp&s=245462275dde2a18a14fe9677099e4e0ddcaf825', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/cwyoa0f6kimf1.png?width=640&crop=smart&auto=webp&s=147ff4faaa129cc07cda0d4d53d824668e625f35', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/cwyoa0f6kimf1.png?width=960&crop=smart&auto=webp&s=79c452ee38a49ca78fd05061c7940b3ee69c137f', 'width': 960}, {'height': 540, 'url': 'https://preview.redd.it/cwyoa0f6kimf1.png?width=1080&crop=smart&auto=webp&s=672f2825ddba8e5db1af1ab16b23243722961555', 'width': 1080}], 'source': {'height': 1160, 'url': 'https://preview.redd.it/cwyoa0f6kimf1.png?auto=webp&s=84a38a3a6a6e1585074a3ca53e4b6f62ffa48cb1', 'width': 2320}, 'variants': {}}]} |