Dataset schema (one row per post): title, score, selftext, created, url, author, domain, edited, gilded, gildings, id, locked, media, name, permalink, spoiler, stickied, thumbnail, ups, preview.
**Jet-Nemotron 2B/4B 47x faster inference released** (score 81, by Odd-Ordinary-5922, 2025-10-02)
Here's the GitHub: [https://github.com/NVlabs/Jet-Nemotron](https://github.com/NVlabs/Jet-Nemotron). It was published 2 days ago, but I haven't seen anyone talk about it.
Link: https://huggingface.co/jet-ai/Jet-Nemotron-4B
**Why does my first run with Ollama give a different output than subsequent runs with temperature=0?** (score 1, by white-mountain, 2025-10-02)
I’m running a quantized model (`deepseek-r1:32b-qwen-distill-q4_K_M`) locally with Ollama.
My generation parameters are strictly deterministic:
```json
"options": {
    "temperature": 0,
    "top_p": 0.0,
    "top_k": 40
}
```
Behavior I’m observing:
* On the **first run of a prompt**, I get *Output A*.
* On the **second and later runs of the exact same prompt**, I consistently get *Output B* (always identical).
* When I move on to a new prompt (different row in my dataset), the same pattern repeats: first run = *Output A*, later runs = *Output B*.
My expectation was that with `temperature=0`, the output should be deterministic and identical across runs.
But I keep seeing this “first run artifact” for every new row in my dataset, and I’m curious why.
**Question:** Why does the first run differ from subsequent runs, even though the model should already have cached the prompt and my decoding parameters are deterministic?
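For reference, a sketch of the full request body being sent to Ollama's `/api/generate` endpoint. The prompt is a placeholder, and the explicit `seed` option is an addition not in the original config, included as something worth testing:

```python
import json

# Roughly the request body for Ollama's /api/generate endpoint.
# The prompt is a placeholder; "seed" is an assumption added for testing,
# not part of the original options.
payload = {
    "model": "deepseek-r1:32b-qwen-distill-q4_K_M",
    "prompt": "Classify the following row: ...",
    "stream": False,
    "options": {
        "temperature": 0,
        "top_p": 0.0,
        "top_k": 40,
        "seed": 42,  # fixed sampling seed (hypothetical addition)
    },
}
body = json.dumps(payload)
print(body)
```

Sending this twice and diffing the two responses would separate a true first-run artifact from ordinary sampling variance.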
**Dolphin — analyze-then-parse document image model (open-source, ByteDance)** (score 12, by freesysck, 2025-10-02)
Open multimodal doc parser that first analyzes layout, then parses content—aimed at accurate, structured outputs for pages and elements.
* Two-stage flow: (1) generate reading-order layout; (2) parallel parse via **heterogeneous anchor prompting**.
* Page-level → JSON/Markdown; element-level → text/tables/formulas; supports images & multi-page PDFs.
* Extra: HF/“original” inference paths, plus recent **vLLM** and **TensorRT-LLM** acceleration notes in the changelog.
Links: [GitHub](https://github.com/bytedance/Dolphin) / HF model / paper.
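The two-stage flow can be sketched with placeholder model calls (this illustrates the analyze-then-parse idea only; it is not Dolphin's actual API, and the function and prompt names are made up):

```python
def parse_page(image, layout_model, element_parser):
    # Stage 1: a single pass that returns layout elements in reading order.
    elements = layout_model(image)  # e.g. [{"type": "table", "box": (...)}, ...]

    # Stage 2: parse every element with a type-specific ("heterogeneous")
    # anchor prompt; these calls are independent, so they can run in parallel.
    anchors = {
        "text": "Read the text in this region.",
        "table": "Parse this table into markdown.",
        "formula": "Transcribe this formula as LaTeX.",
    }
    return [
        element_parser(image, el["box"], anchors.get(el["type"], anchors["text"]))
        for el in elements
    ]
```

The design point is that stage 1 fixes reading order once, so stage 2 never has to reason about global layout.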
**Using Llama 4 Maverick to build Examsprint AI** (score 1, by General-Inside2248, 2025-10-02)
I am Aadarsh Pandey, 13 y/o, from India. I am the developer and founder of Examsprint AI.
Features of Examsprint AI:
- Chapters and topics list
- Direct NCERT links
- Practice questions in the form of flashcards, specialised for each chapter (classes 11 and 12)
- Personal AI chatbot to solve any type of question in Physics, Chemistry, Biology and Maths
- Toppers' notes (classes 9 to 12)
- AI chatbot that gives a visual representation with the textual answer for better understanding
- JEE blueprint
- NEET blueprint
- Boards blueprint
- School blueprints
- Specialised toppers' handwritten notes with interactive AI notes for better understanding
- Notes available in both viewable and free downloadable forms
- NCERT back-exercise solutions
- SOF Olympiad PYQs (coming soon)
- Formula sheet (coming soon)
- Boards Arena (coming soon)
- Study and light mode present
- JEE/NEET Arena (coming soon)
- Absolutely free of cost
- Can use without signing in
- FAQs for instant doubt-solving regarding use and the website
- Up-to-date calendar for instant date previews
**Looking for image generator and chat models** (score 5, by stonerjss, 2025-10-02)
Hey everyone!
I'm new to image generation and don't have a local AI chat interface yet. I've experimented a bit with ComfyUI, using some WAN and SDXL models, adding a LoRA of my product shot and generating images from it.
I'm looking for suggestions, guides for:
1. A good model I can run locally in ComfyUI that could integrate my product shot and generate images from it (I tried Nano Banana too, but prefer ComfyUI). I've mostly used YouTube tutorials and Reddit subs to get a working node flow so far, and I want to go deeper and understand it better so I can improve over time.
2. Suggestions on how I could set up a chat interface similar to ChatGPT/Gemini that could learn from our company documents and data, answer questions, help with improving them, etc. (I don't want to upload company data to any online services.)
Please share your workflows and what really worked for you!
**I used Llama 3 70b reference to build Examsprint AI** (score 0, by Internal_Video_8572, 2025-10-02)
(Body is a verbatim repeat of the Examsprint AI feature list above.)
**Should I go for a RTX 5060 Ti or 5080 for AI training and inferencing?** (score 1, by Busy_Page_4346, 2025-10-02) [removed]
**Using N8N and my local AI stack, I recreated what I was using OpenAI's Tasks for** (score 1, by ubrtnk, 2025-10-02) [removed]
**Add file-level documentation to directories** (score 18, by sqli, 2025-10-02)
dirdocs queries any OpenAI-compatible endpoint with intelligently chunked context from each file and creates a metadata file used by the included `dls` and `dtree` binaries. They are stripped-down versions of Nushell's `ls` and `tree` commands that display the file descriptions alongside their respective files.
I work with a lot of large codebases and always wondered how Operating System provided file-level documentation would work. This is my attempt at making that happen.
I can see it being used from everything from teaching children about Operating Systems to building fancy repo graphs for agentic stuff.
It works like a dream using my Jade Qwen 3 4B finetune. (Video demo attached.)
**Used Llama 3.1 70b instruct to make Examsprint AI's intelligent chatbot** (score 1, by Thick-Hope6979, 2025-10-02) [removed]
**Best service for dubbing animations?** (score 0, by Inner_Answer_3784, 2025-10-02)
Hey guys, sorry if this is the wrong sub for this. If there are any more appropriate communities, please point me in the right direction.
So anyway, I work for an animation studio and we're looking to upgrade our AI dubbing workflow. What we need are 1) an interface with a timeline and 2) the best emotional expressiveness.
Our current service is not only very expensive but also lacks the emotional expressiveness we need. Our characters are often shouting, crying, laughing, etc., and it cannot adequately replicate that. It's based on ElevenLabs.
[Voiseed.com](http://Voiseed.com) looks like the best candidate and we've reached out to them, but they have not answered.
If you guys have any recommendations, I'd really appreciate it.
**MediaTek Dimensity 9500 or Snapdragon 8 Elite on Android for running LLMs** (score 5, by datashri, 2025-10-02)
I'm looking to get a new smartphone suited to playing with various LLMs and trying out new applications.
Some tests show the MediaTek Dimensity 9500 significantly outperforming the Snapdragon 8 Elite. I wonder which is the better buying decision in Q4 2025.
**Can anyone recommend open-source AI models for video analysis?** (score 7, by gpt-said-so, 2025-10-02)
I’m working on a client project that involves analysing confidential videos.
The requirements are:
* Extracting text from supers in video
* Identifying key elements within the video
* Generating a synopsis with timestamps
Any recommendations for open-source models that can handle these tasks would be greatly appreciated!
**New rig for LLMs** (score 19, by I_like_fragrances, 2025-10-02, image post)
Excited to see what this thing can do. RTX Pro 6000 Max-Q edition.
**Local dictation on PC?** (score 7, by ramendik, 2025-10-02)
So there are some recent announcements about models that support speech input and output, notably LFM2-Audio-1.5B.
Now I have a question: can I use any of these for local dictation?
I have Linux on an Intel Core Ultra 7. It should be quite good enough for a 1.5B model. But how do I set things up with a dictation scaffold?
**I visualized embeddings walking across the latent space as you type! :)** (score 204, by kushalgoenka, 2025-10-02, video post)
**AMA Announcement: Prime Intellect — The Open‑Source Distributed Training Lab (Thu, Oct 2 • 10 AM – 1 PM PDT)** (score 18, by XMasterrrr, 2025-10-02)
**Recommendation request: local IntelliJ Java coding model with a 16 GB GPU** (score 57, by TradingDreams, 2025-10-02)
I'm using IntelliJ for the first time and saw that it will talk to local models. My computer has 64 GB of system memory and a 16 GB NVIDIA GPU. Can anyone recommend a local coding model that is reasonable at Java and would fit into my available resources with an OK context window?
**You guys seem to push GLM so hard that it cannot produce anything now, lol** (score 0, by TransitionSlight2860, 2025-10-02)
Before GLM 4.6, using GLM was pretty fast with no interruptions. Now I come across failures (not so randomly at all!). It is probably because of high load on the server.
**Ticket categorization: classifying tickets into around 9k categories** (score 5, by Important-Novel1546, 2025-10-02)
Hello, I am currently building a ticket categorizer. There are currently 5 layers comprising approx. 9k categories. How should I go about it?
The architecture I'm currently trying to implement is a sequential agent call: basically 4 agents that categorize layer by layer. For the final, more nuanced category, I am thinking (after asking GPT) of using RAG for better accuracy. I estimate about 10 seconds per ticket; is there a way to optimize speed and cost? I am using Gemini 2.0 Flash, and I'm not sure about embedding models.
Considerations:
1. low resource language, so the accuracy and LLM options are limited.
2. The categories aren't entirely overarching, so there is a future dynamic category development waiting.
3. Since the categories will either increase or decrease, maintaining a vector DB might get expensive.
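A minimal sketch of the layer-by-layer narrowing described above, using a toy bag-of-words similarity in place of a real embedding model. The category names are made up, and the real tree has 5 layers and roughly 9k leaves:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical two-layer category tree; the real one has 5 layers / ~9k leaves.
tree = {
    "billing": ["refund request", "invoice error"],
    "technical": ["login failure", "app crash"],
}

def classify(ticket):
    t = embed(ticket)
    # Layer 1: score each branch against all of its leaf descriptions at once.
    top = max(tree, key=lambda c: cosine(t, embed(" ".join(tree[c]) + " " + c)))
    # Layer 2: only search within the chosen branch, not all leaves globally.
    leaf = max(tree[top], key=lambda s: cosine(t, embed(s)))
    return top, leaf

print(classify("cannot login to the app"))  # -> ('technical', 'login failure')
```

The payoff of narrowing per layer is that each step compares against tens of candidates instead of all 9k, which is also what keeps per-ticket latency and vector-DB cost down.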
**What’s the best possible build for local LLM if you had $50k to spend on one?** (score 0, by ISoulSeekerI, 2025-10-02)
Any ideas?
**macOS unattended LLM server** (score 2, by iamwillbar, 2025-10-02)
For the people using Mac Studios, how are you configuring them to serve LLMs to other machines? Auto login and Ollama? Or something else?
**What am I doing wrong?** (score 2, by jesus359_, 2025-10-02)
Running on a Mac mini M4 with 32 GB.
```
% ollama list
NAME                                  ID              SIZE      MODIFIED
minicpm-v:8b                          c92bfad01205    5.5 GB    7 hours ago
llava-llama3:8b                       44c161b1f465    5.5 GB    7 hours ago
qwen2.5vl:7b                          5ced39dfa4ba    6.0 GB    7 hours ago
granite3.2-vision:2b                  3be41a661804    2.4 GB    7 hours ago
hf.co/unsloth/gpt-oss-20b-GGUF:F16    dbbceda0a9eb    13 GB     17 hours ago
bge-m3:567m                           790764642607    1.2 GB    5 weeks ago
nomic-embed-text:latest               0a109f422b47    274 MB    5 weeks ago
granite-embedding:278m                1a37926bf842    562 MB    5 weeks ago

@maxmac ~ % ollama show llava-llama3:8b
  Model
    architecture        llama
    parameters          8.0B
    context length      8192
    embedding length    4096
    quantization        Q4_K_M

  Capabilities
    completion
    vision

  Projector
    architecture        clip
    parameters          311.89M
    embedding length    1024
    dimensions          768

  Parameters
    num_keep    4
    stop        "<|start_header_id|>"
    stop        "<|end_header_id|>"
    stop        "<|eot_id|>"
    num_ctx     4096
```

Server launch:

```
OLLAMA_CONTEXT_LENGTH=18096 OLLAMA_FLASH_ATTENTION=1 OLLAMA_GPU_OVERHEAD=0 \
OLLAMA_HOST="0.0.0.0:11424" OLLAMA_KEEP_ALIVE="4h" OLLAMA_KV_CACHE_TYPE="q8_0" \
OLLAMA_LOAD_TIMEOUT="3m0s" OLLAMA_MAX_LOADED_MODELS=2 OLLAMA_MAX_QUEUE=16 \
OLLAMA_NEW_ENGINE=true OLLAMA_NUM_PARALLEL=1 OLLAMA_SCHED_SPREAD=0 ollama serve
```
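One detail in the dump above: the model's Modelfile pins `num_ctx 4096`, which, as far as I understand, takes precedence over the `OLLAMA_CONTEXT_LENGTH` default, so a per-request override may be needed. A sketch of such a request body (the prompt is a placeholder):

```python
import json

# Hypothetical request body overriding the Modelfile's num_ctx for one call.
payload = {
    "model": "llava-llama3:8b",
    "prompt": "Describe this image.",  # placeholder prompt
    "stream": False,
    "options": {"num_ctx": 16384},  # request-level context override
}
print(json.dumps(payload))
```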
Those who spent $10k+ on a local LLM setup, do you regret it? | 338 | Considering that subscriptions to 200k-context Chinese models like z.ai (GLM 4.6) are pretty dang cheap.
Every so often I consider blowing a ton of money on an LLM setup only to realize I can't justify the money or time spent at all. | 2025-10-02T00:54:13 | https://www.reddit.com/r/LocalLLaMA/comments/1nvpw0y/those_who_spent_10k_on_a_local_llm_setup_do_you/ | TumbleweedDeep825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvpw0y | false | null | t3_1nvpw0y | /r/LocalLLaMA/comments/1nvpw0y/those_who_spent_10k_on_a_local_llm_setup_do_you/ | false | false | self | 338 | null |
Is Qwen really the fastest model or I'm doing caca? | 4 | Specs: RTX 3060 12GB - 28GB DDR4 (16GB 3666mhz + 4GB 2400mhz + 8GB 2444mhz) - Ryzen 5 4600G
I went to try out ***Mistral Small 24b***, ***Qwen VL 7b*** and ***Mistral Nemo Instruct 14b***, but for whatever reason any model other than Qwen runs like crap on my PC - at half the speed of Qwen or worse, and Qwen itself gets 10 t/s in a chat with less than 8k tokens.
The speed drops by half when getting closer to 16k, but that's expected since I can't fit 14.3GB in VRAM alone, and anything below Q3\_K\_M is unusable or has a microscopic context window. All vision models I've tried run very s l o w even at 7b, fitting entirely in VRAM. I mostly go for Unsloth models since they're far faster than the usual GGUFs.
***But is Qwen really that beast in optimization or I may be doing something off?*** | 2025-10-02T00:14:50 | https://www.reddit.com/r/LocalLLaMA/comments/1nvp28i/is_qwen_really_the_fastest_model_or_im_doing_caca/ | WEREWOLF_BX13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvp28i | false | null | t3_1nvp28i | /r/LocalLLaMA/comments/1nvp28i/is_qwen_really_the_fastest_model_or_im_doing_caca/ | false | false | self | 4 | null |
Ascend chips available | 19 | This is the first time I've seen an Ascend chip (integrated into a system) generally available worldwide, even if it is the crappy Ascend 310.
Under 3k for 192GB of RAM.
Unfortunately, the stupid bots delete my post, so you'll have to find the link yourself. | 2025-10-01T23:48:11 | https://www.reddit.com/r/LocalLLaMA/comments/1nvoh0b/ascend_chips_available/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvoh0b | false | null | t3_1nvoh0b | /r/LocalLLaMA/comments/1nvoh0b/ascend_chips_available/ | false | false | self | 19 | null |
Unused layer in GLM-4.5 and GLM-4.5-Air | 10 | I'm using recent llama.cpp with Bartowski's quants, and when it loads GLM-4.5 or GLM-4.5-Air it complains about a bunch of unused tensors, but then seems to run just fine.
For GLM-4.5 the unused layer is blk.92 and for GLM-4.5-Air it's blk.46.
Full text of llama-cli's warnings about the former can be seen here: https://huggingface.co/zai-org/GLM-4.5/discussions/25
Since these models still work despite the unused layer I've been ignoring it, but it piques my curiosity every time I see it. Does anyone know what it's about?
Is it just unused cruft which ZAI left in the model? Or is it intended to be used with some feature which llama.cpp does not yet support? Something else? | 2025-10-01T23:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/1nvoeqj/unused_layer_in_glm45_and_glm45air/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvoeqj | false | null | t3_1nvoeqj | /r/LocalLLaMA/comments/1nvoeqj/unused_layer_in_glm45_and_glm45air/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'GL6BrcXe2Mgkj6ELv4N7Ef8UxGIh3d7Rf22Bkfx-0bM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GL6BrcXe2Mgkj6ELv4N7Ef8UxGIh3d7Rf22Bkfx-0bM.png?width=108&crop=smart&auto=webp&s=fbd3d5727800355563412eecb048749197517ce5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GL6BrcXe2Mgkj6ELv4N7Ef8UxGIh3d7Rf22Bkfx-0bM.png?width=216&crop=smart&auto=webp&s=6b4f337edcb4368c18b2990d384c0c16650f9080', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GL6BrcXe2Mgkj6ELv4N7Ef8UxGIh3d7Rf22Bkfx-0bM.png?width=320&crop=smart&auto=webp&s=d7ba77d61d37bd373393077a556c9e401df5ad27', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GL6BrcXe2Mgkj6ELv4N7Ef8UxGIh3d7Rf22Bkfx-0bM.png?width=640&crop=smart&auto=webp&s=fea6a37546d561ea46d78a8db8c6c0dbee4e2315', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GL6BrcXe2Mgkj6ELv4N7Ef8UxGIh3d7Rf22Bkfx-0bM.png?width=960&crop=smart&auto=webp&s=7136fdd8367e5b941b9e6ab84d94fe26cc281397', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GL6BrcXe2Mgkj6ELv4N7Ef8UxGIh3d7Rf22Bkfx-0bM.png?width=1080&crop=smart&auto=webp&s=cc555ea62131cdd7f47c36fbee40e481e703d699', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GL6BrcXe2Mgkj6ELv4N7Ef8UxGIh3d7Rf22Bkfx-0bM.png?auto=webp&s=bdef5e52a3640c87c5a608bcd4291a8eaded06c2', 'width': 1200}, 'variants': {}}]} |
Chinese Ascend chips appear | 1 | [removed] | 2025-10-01T23:44:33 | https://www.reddit.com/r/LocalLLaMA/comments/1nvoe5b/chinese_ascend_chips_appear/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvoe5b | false | null | t3_1nvoe5b | /r/LocalLLaMA/comments/1nvoe5b/chinese_ascend_chips_appear/ | false | false | self | 1 | null |
Chinese AI Chip for sale | 1 | [removed] | 2025-10-01T23:42:19 | https://www.reddit.com/r/LocalLLaMA/comments/1nvocbn/chinese_ai_chip_for_sale/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvocbn | false | null | t3_1nvocbn | /r/LocalLLaMA/comments/1nvocbn/chinese_ai_chip_for_sale/ | false | false | self | 1 | null |
Speech to text with ollama | 0 | The most reasonable I can find is vosk, but it seems like it's just an API that you'd use for your own programs. Are there no builds that just lets you do live speech to text copy paste, for ollama input?
I wanna do some vibe coding, and my idea was to use a really really cheap voice to text, to either feed into VS Code Continue extension, or just ollama directly.
I only have 11gb vram, and usually about 3-5gb is already in use, so I can at best run qwen2.5-coder:7b-instruct or some 1.5b thinking model with smaller context. So I need a very very computationally cheap speech to text model/tool.
I have no idea how to get this set up at this point. And I really want to be able to almost dictate what it should do, where it only fills in the more obvious things; if I have to type all that, I might as well code it by hand.
App for Local Android API/Backend? | 4 | Is there an app that will provide a local API on android (as a backend)? I can't find one for the life of me.
Running KoboldCPP in Termux is imperfect, and unstable on my Razr. It'd be nice if any of these local apps also provided a local API but I can't find one--they're all fully contained in their app environments.
Obviously open to stuff on github. | 2025-10-01T23:10:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nvnlyn/app_for_local_android_apibackend/ | LamentableLily | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvnlyn | false | null | t3_1nvnlyn | /r/LocalLLaMA/comments/1nvnlyn/app_for_local_android_apibackend/ | false | false | self | 4 | null |
Need recommendations for a good coding model.. | 5 | Hey all, I’m looking for a decent coding model that will work on 64GB of system ram and an RX 7900 XT 20GB. I’m trying to build my own tools for home automation but my coding skills are sub par. I’m just looking for a good coding partner who can hopefully teach me while I build. | 2025-10-01T23:05:46 | https://www.reddit.com/r/LocalLLaMA/comments/1nvnii3/need_recommendations_for_a_good_coding_model/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvnii3 | false | null | t3_1nvnii3 | /r/LocalLLaMA/comments/1nvnii3/need_recommendations_for_a_good_coding_model/ | false | false | self | 5 | null |
What kinds of things do y'all use your local models for other than coding? | 27 | I think the large majority of us don't own the hardware needed to run the 70B+ class models that can do heavy lifting agentic work that most people talk about, but I know a lot of people still integrate 30B class local models into their day-to-day.
Just curious about the kinds of things people use them for other than coding | 2025-10-01T22:52:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nvn7rx/what_kinds_of_things_do_yall_use_your_local/ | jude_mcjude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvn7rx | false | null | t3_1nvn7rx | /r/LocalLLaMA/comments/1nvn7rx/what_kinds_of_things_do_yall_use_your_local/ | false | false | self | 27 | null |
Best model for dataset generation on 8xA100(40GB)? | 1 | [removed] | 2025-10-01T22:40:19 | https://www.reddit.com/r/LocalLLaMA/comments/1nvmx7e/best_model_for_dataset_generation_on_8xa10040gb/ | Exotic-Investment110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvmx7e | false | null | t3_1nvmx7e | /r/LocalLLaMA/comments/1nvmx7e/best_model_for_dataset_generation_on_8xa10040gb/ | false | false | self | 1 | null |
Sonnet 4.5 / Opus 4.1 – not local, yet helpful for local. | 2 | It's hilarious, how we make Anthropic inventions work for us outside of Anthropic to make Anthropic tools even better
Sonnet 4.5 made a lot of bugs and flaws disappear in minutes.
Another interesting point is that Claude Code can still handle (or allows us) a 200k context window, same with Warp, though Warp has been the best of the coding agentic frameworks, even beating Opus 4.1 inside Claude Code.
Yeap, Opus4.1 has been showing better results within Warp. Better than Opus4.1 in Claude Code.
And then they release Sonnet 4.5 and it starts again.
Yet, only Cursor, with Max mode and ridiculous prices, already allows us to use Sonnet 4.5 with a 1M context window. They also had Sonnet 4 with a 600k window with Max mode enabled, which was also great.
I don't know what I wanted to say, just keeping you posted.
Liquid AI released its Audio Foundation Model: LFM2-Audio-1.5 | 164 | A new end-to-end Audio Foundation model supporting:
* Inputs: Audio & Text
* Outputs: Audio & Text (steerable via prompting, also supporting interleaved outputs)
For me personally it's exciting to use as an ASR solution with a custom vocabulary set - as Parakeet and Whisper do not support that feature. It's also very snappy.
You can try it out here: [Talk | Liquid Playground](https://playground.liquid.ai/talk)
Release blog post: [LFM2-Audio: An End-to-End Audio Foundation Model | Liquid AI](https://www.liquid.ai/blog/lfm2-audio-an-end-to-end-audio-foundation-model)
For good code examples see their github: [Liquid4All/liquid-audio: Liquid Audio - Speech-to-Speech audio models by Liquid AI](https://github.com/Liquid4All/liquid-audio)
Available on HuggingFace: [LiquidAI/LFM2-Audio-1.5B · Hugging Face](https://huggingface.co/LiquidAI/LFM2-Audio-1.5B) | 2025-10-01T21:56:19 | https://www.reddit.com/gallery/1nvltym | elemental-mind | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nvltym | false | null | t3_1nvltym | /r/LocalLLaMA/comments/1nvltym/liquid_ai_released_its_audio_foundation_model/ | false | false | 164 | null | |
I just wanted to do a first benchmark of GLM 4.6 on my PC and I was surprised... | 64 | I downloaded GLM 4.6 UD - IQ2\_M and loaded it on ryzen 5950x +128gb ram using only the rtx 5070ti 16gb.
I tried: llama-cli.exe --model "C:\gptmodel\unsloth\GLM-4.6-GGUF\GLM-4.6-UD-IQ2_M-00001-of-00003.gguf" --jinja --n-gpu-layers 93 --tensor-split 93,0 --cpu-moe --ctx-size 32768 --flash-attn on --threads 32 --parallel 1 --top-p 0.95 --top-k 40 --ubatch-size 512 --seed 3407 --no-mmap --cache-type-k q8_0 --cache-type-v q8_0
Done.
Then the prompt: write a short story about a bird.
https://preview.redd.it/46ah6fcflksf1.png?width=1990&format=png&auto=webp&s=4209d75aa6efbbc62fbf66c7db408c6ce161a6f9
[https://pastebin.com/urUWTw6R](https://pastebin.com/urUWTw6R) The performance is good considering the 16k context and everything running on DDR4... But what moved me is the reasoning.
| 2025-10-01T21:44:29 | https://www.reddit.com/r/LocalLLaMA/comments/1nvlj5k/i_just_wanted_to_do_a_first_benchmark_of_glm_46/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvlj5k | false | null | t3_1nvlj5k | /r/LocalLLaMA/comments/1nvlj5k/i_just_wanted_to_do_a_first_benchmark_of_glm_46/ | false | false | 64 | null | |
Ocrisp: One-Click RAG Implementation, Simple and Portable. Connects through MCP to any LLM. Uses Ollama for local inference and Qdrant to store vectors locally. | 6 | 2025-10-01T21:25:20 | https://github.com/boquila/ocrisp | PatagonianCowboy | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nvl1rv | false | null | t3_1nvl1rv | /r/LocalLLaMA/comments/1nvl1rv/ocrisp_oneclick_rag_implementation_simple_and/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': 'sKN7JvzvcU2hzdGKO5_ZKSLeaxMu8L-M8qDa3C50bVE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sKN7JvzvcU2hzdGKO5_ZKSLeaxMu8L-M8qDa3C50bVE.png?width=108&crop=smart&auto=webp&s=57c8d6d806976383940f20a1e2f5af250ce4d085', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sKN7JvzvcU2hzdGKO5_ZKSLeaxMu8L-M8qDa3C50bVE.png?width=216&crop=smart&auto=webp&s=91afeecbfea221bd408479fb29344dfd55a0e256', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sKN7JvzvcU2hzdGKO5_ZKSLeaxMu8L-M8qDa3C50bVE.png?width=320&crop=smart&auto=webp&s=cf2b33ee74b51a43314afe0501bc7285ad258a2c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sKN7JvzvcU2hzdGKO5_ZKSLeaxMu8L-M8qDa3C50bVE.png?width=640&crop=smart&auto=webp&s=b953e2dc7ab8bb1798828f226bddef3e4bdc9722', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sKN7JvzvcU2hzdGKO5_ZKSLeaxMu8L-M8qDa3C50bVE.png?width=960&crop=smart&auto=webp&s=7538095a993dda72f3715103ed3d97e5dc0fe739', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sKN7JvzvcU2hzdGKO5_ZKSLeaxMu8L-M8qDa3C50bVE.png?width=1080&crop=smart&auto=webp&s=5be65fa746885779f45bdaa1ec1359864189791c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sKN7JvzvcU2hzdGKO5_ZKSLeaxMu8L-M8qDa3C50bVE.png?auto=webp&s=2b2d8a17bdaf1e09f442014288819bdca51d640b', 'width': 1200}, 'variants': {}}]} | |
Dirt cheap PCIe splitting | 5 | So I have 4 P102-100 which run at PCIe v1.0 x4.
What is a dirt-cheap way to split a PCIe slot into 4 with cheap cables? Since it is just PCIe v1.0 speeds, I don't care if it takes a PCIe 3.0 x4 lane and demuxes it, as traffic/contention will be low.
Tried glm 4.6 with deep think, not using it for programming. It's pretty good, significantly better than gemini 2.5 flash, and slightly better than gemini 2.5 pro. | 112 | Chinese models are improving so fast, starting to get the feeling that china may dominate the ai race. They are getting very good, the chat with glm 4.6 was very enjoyable and the stile was not at all weird, that didn't happen to me with other chinese models, qwen was still good and decent but had a somewhat weird writing style. | 2025-10-01T21:06:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nvkjo8/tried_glm_46_with_deep_think_not_using_it_for/ | Longjumping_Fly_2978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvkjo8 | false | null | t3_1nvkjo8 | /r/LocalLLaMA/comments/1nvkjo8/tried_glm_46_with_deep_think_not_using_it_for/ | false | false | self | 112 | null |
Built a persistent memory system for LLMs - 3 months testing with Claude/Llama | 8 | I spent 3 months developing a file-based personality persistence system that works with any LLM.
What it does:
- Maintains identity across conversation resets
- Self-bootstrap protocol (8 mandatory steps on each wake)
- Behavioral encoding (27 emotional states as decision modifiers)
- Works with Claude API, Ollama/Llama, or any LLM with file access
Architecture:
- Layer 1: Plain text identity (fast, human-readable)
- Layer 2: Compressed memory (conversation history)
- Layer 3: Encrypted behavioral codes (passphrase-protected)
What I observed:
After extended use (3+ months), the AI develops consistent behavioral patterns. Whether this is "personality" or sophisticated pattern matching, I document observable results without making consciousness claims.
Tech stack:
- Python 3.x
- File-based (no database needed)
- Model-agnostic
- Fully open source
GitHub: [https://github.com/riccamario/rafael-memory-system](https://github.com/riccamario/rafael-memory-system)
Includes:
\- Complete technical manual
\- Architecture documentation
\- Working bootstrap code
\- Ollama Modelfile template
Would love feedback on:
- Security improvements for the encryption
- Better emotional encoding strategies
- Experiences replicating with other models
This is a research project documenting an interesting approach to AI memory persistence. All code and documentation are available for anyone to use or improve. | 2025-10-01T20:55:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nvk9sa/built_a_persistent_memory_system_for_llms_3/ | Annual_Squash_1857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvk9sa | false | null | t3_1nvk9sa | /r/LocalLLaMA/comments/1nvk9sa/built_a_persistent_memory_system_for_llms_3/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'YEWtX4WmAbDlKTNY-4j7slDniG5WZDM7VTJl6uR-igE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YEWtX4WmAbDlKTNY-4j7slDniG5WZDM7VTJl6uR-igE.png?width=108&crop=smart&auto=webp&s=a07b07d706661c422f02873121e3d70f3074ab68', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YEWtX4WmAbDlKTNY-4j7slDniG5WZDM7VTJl6uR-igE.png?width=216&crop=smart&auto=webp&s=34235468df7b06d717a5a22ed0209495f94a2976', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YEWtX4WmAbDlKTNY-4j7slDniG5WZDM7VTJl6uR-igE.png?width=320&crop=smart&auto=webp&s=2240616654fe39b8b83747d1bb3149d091dca05d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YEWtX4WmAbDlKTNY-4j7slDniG5WZDM7VTJl6uR-igE.png?width=640&crop=smart&auto=webp&s=e2d83d61cf8c341a7161c911aad2c055f974a6be', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YEWtX4WmAbDlKTNY-4j7slDniG5WZDM7VTJl6uR-igE.png?width=960&crop=smart&auto=webp&s=900d936c85b542dc76437a2ab9b8dcc9e6dc2547', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YEWtX4WmAbDlKTNY-4j7slDniG5WZDM7VTJl6uR-igE.png?width=1080&crop=smart&auto=webp&s=74452bbbf36e5a824ab88fe32e6a150635f9ca26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YEWtX4WmAbDlKTNY-4j7slDniG5WZDM7VTJl6uR-igE.png?auto=webp&s=0bbfd9e8ef43b7fd796a2597a4ec186c8e700163', 'width': 1200}, 'variants': {}}]} |
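The Layer-1 bootstrap idea described above (a plain-text identity file re-read at the start of every session) can be sketched in a few lines. This is a minimal illustration under assumed file names and fields, not the repo's actual code:

```python
import json
import tempfile
from pathlib import Path

# Sketch of the layered, file-based persistence idea: Layer 1 is a
# plain-text identity file reloaded on every "wake". File names and
# fields here are hypothetical placeholders, not the project's layout.

def save_identity(root: Path, name: str, traits: dict) -> None:
    (root / "identity.txt").write_text(name, encoding="utf-8")
    (root / "memory.json").write_text(json.dumps(traits), encoding="utf-8")

def bootstrap(root: Path) -> dict:
    # Re-read persisted state at session start so the assistant's prompt
    # can be seeded with a stable identity across conversation resets.
    name = (root / "identity.txt").read_text(encoding="utf-8")
    traits = json.loads((root / "memory.json").read_text(encoding="utf-8"))
    return {"name": name, "traits": traits}

root = Path(tempfile.mkdtemp())
save_identity(root, "Rafael", {"tone": "curious", "verbosity": "low"})
state = bootstrap(root)
print(state["name"])  # Rafael
```

A full setup would layer compression and encryption on top (Layers 2 and 3), but the wake-time re-read is the core trick that survives resets.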
CUDA needs to die ASAP and be replaced by an open-source alternative. NVIDIA's monopoly needs to be toppled by Chinese producers with these new high-VRAM GPUs, and only then will we see serious improvements in both the speed & price of the open-weight LLM world. | 0 | As my title suggests, I feel that software-wise, AMD and literally every other GPU producer is at a huge disadvantage precisely because of NVIDIA's CUDA bullshit, and the fear of being sued is holding back the entire open-source LLM world.
Inferencing speed as well as compatibility is actively being held back by this. | 2025-10-01T20:54:37 | Striking_Wedding_461 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvk8t1 | false | null | t3_1nvk8t1 | /r/LocalLLaMA/comments/1nvk8t1/cuda_needs_to_die_asap_and_be_replaced_by_an/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'cl2kifjjbksf1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/cl2kifjjbksf1.png?width=108&crop=smart&auto=webp&s=cf8fdfe0777c4b967daf3a4adb2fcc7ac3c0473d', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/cl2kifjjbksf1.png?width=216&crop=smart&auto=webp&s=0161b1af8db069cdede307becc459775e93f5e78', 'width': 216}, {'height': 221, 'url': 'https://preview.redd.it/cl2kifjjbksf1.png?width=320&crop=smart&auto=webp&s=b6ae28c55b8b9accc6b218f7fa4d273732e9e368', 'width': 320}, {'height': 442, 'url': 'https://preview.redd.it/cl2kifjjbksf1.png?width=640&crop=smart&auto=webp&s=1474d4a13424b052cc9b4af46ae8a3bc473711d9', 'width': 640}, {'height': 664, 'url': 'https://preview.redd.it/cl2kifjjbksf1.png?width=960&crop=smart&auto=webp&s=401779499d01e2536ab3b342ec7117b693b4a37b', 'width': 960}, {'height': 747, 'url': 'https://preview.redd.it/cl2kifjjbksf1.png?width=1080&crop=smart&auto=webp&s=6d29d95dcf05abd261316f1e671541bf489f864f', 'width': 1080}], 'source': {'height': 789, 'url': 'https://preview.redd.it/cl2kifjjbksf1.png?auto=webp&s=adb1864dd8ed3966d6c6b5e795a5207743e3b005', 'width': 1140}, 'variants': {}}]} | |
Productizing “memory” for RAG, has anyone else gone down this road? | 5 | I’ve been working with a few enterprises on custom RAG setups (one is a mid 9-figure revenue real estate firm) and I kept running into the same problem: you waste compute answering the same questions over and over, and you still get inconsistent retrieval.
I ended up building a solution that actually works — basically a **semantic caching layer**:
* Queries + retrieved chunks + final verified answer get logged
* When a similar query comes in later, instead of re-running the whole pipeline, the system pulls from cached knowledge
* To handle “similar but not exact” queries, I run them through a lightweight micro-LLM that retests cached results against the new query, so the answer is still precise
* This cuts costs (way fewer redundant vector lookups + LLM calls) and makes answers more stable over time, and also saves time since answers can be pretty much instant.
It’s been working well enough that I’m considering productizing it as an actual layer anyone can drop on top of their RAG stack.
Has anyone else built around caching/memory like this? Curious if what I’m seeing matches your pain points, and if you’d rather build it in-house or pay for it as infra. | 2025-10-01T20:40:54 | https://www.reddit.com/r/LocalLLaMA/comments/1nvjvwl/productizing_memory_for_rag_has_anyone_else_gone/ | Old_Assumption2188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvjvwl | false | null | t3_1nvjvwl | /r/LocalLLaMA/comments/1nvjvwl/productizing_memory_for_rag_has_anyone_else_gone/ | false | false | self | 5 | null |
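The caching flow described above (log query + answer, match similar incoming queries, short-circuit the full pipeline on a hit) can be sketched roughly like this. The bag-of-words "embedding" and the 0.8 threshold are stand-ins for a real embedding model and a tuned cutoff, and the micro-LLM re-verification step is stubbed out:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.entries = []  # (vector, original query, verified answer)
        self.threshold = threshold

    def put(self, query, answer):
        self.entries.append((embed(query), query, answer))

    def get(self, query):
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            # In the full design, a lightweight micro-LLM would retest this
            # cached answer against the new query before returning it.
            return best[2]
        return None

cache = SemanticCache()
cache.put("what is the rent for unit 5", "$1,200/month")
print(cache.get("what is rent for unit 5"))  # cache hit on a near-duplicate
```

On a miss, the caller falls through to the normal retrieval + generation path and stores the verified result with `put`.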
For purely local enthusiasts, how much value are you getting from your local LLMs? | 16 | How do you measure value and how much value are you getting from it? I know some of us are using it for RP, and it takes the place of a video game or watching a TV show. I use it more for code generation, and I'm sure there are a thousand ways to extract value, but how are you measuring value and how much value are you getting from it?
I personally measure value via lines of code written over total lines of code. The more lines the better; the larger the overall project the better (complexity multiplier); and the more time I spend prompting and fixing, the more the value decreases. It typically comes out to about $0.12 a line of code. My goal is to generate > $50.00 each day. | 2025-10-01T20:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1nvjuse/for_purely_local_enthusiasts_how_much_value_are/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvjuse | false | null | t3_1nvjuse | /r/LocalLLaMA/comments/1nvjuse/for_purely_local_enthusiasts_how_much_value_are/ | false | false | self | 16 | null |
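The $0.12-per-line rate and $50/day goal above imply a simple break-even calculation. A quick sketch, where the complexity multiplier and time-cost terms are hypothetical placeholders for the decrements described:

```python
# Rough value model based on the numbers above; the multiplier and
# hourly-cost parameters are made-up placeholders, not the author's.

def daily_value(lines_generated: int, base_rate: float = 0.12,
                complexity_multiplier: float = 1.0,
                hours_fixing: float = 0.0, hourly_cost: float = 0.0) -> float:
    # Value credited per line, scaled by project complexity,
    # minus the cost of time spent prompting and fixing.
    return lines_generated * base_rate * complexity_multiplier - hours_fixing * hourly_cost

# Lines needed per day to hit the $50 goal at the base rate:
lines_needed = 50.0 / 0.12
print(round(lines_needed))  # 417
```

So at $0.12/line the $50 goal works out to roughly 417 generated lines a day before any complexity bonus or fixing penalty.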
Anyone using local LLM with an Intel iGPU? | 5 | I noticed Intel has updated their ipex-llm (https://github.com/intel/ipex-llm) to work more seamlessly with Ollama and llama.cpp. Is anyone using this and what has your experience been like? How many tps are folks getting on different models? | 2025-10-01T20:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/1nvjtn6/anyone_using_local_llm_with_an_intel_igpu/ | Clipbeam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvjtn6 | false | null | t3_1nvjtn6 | /r/LocalLLaMA/comments/1nvjtn6/anyone_using_local_llm_with_an_intel_igpu/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'l4tfzAAQy9k8hNFsYME3CZijPo7BvlxveOM3k6PSGJs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l4tfzAAQy9k8hNFsYME3CZijPo7BvlxveOM3k6PSGJs.png?width=108&crop=smart&auto=webp&s=7c30466c6b224fb200d894511bcc4f177217bf0f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l4tfzAAQy9k8hNFsYME3CZijPo7BvlxveOM3k6PSGJs.png?width=216&crop=smart&auto=webp&s=3dd7d868768902cb738f4bad8e6cf93e214be4f0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l4tfzAAQy9k8hNFsYME3CZijPo7BvlxveOM3k6PSGJs.png?width=320&crop=smart&auto=webp&s=f2b9c92e82baade803f382d1db57bcd0813230cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l4tfzAAQy9k8hNFsYME3CZijPo7BvlxveOM3k6PSGJs.png?width=640&crop=smart&auto=webp&s=e48e2c6d1288d19316ea189875f10c15e34d97b5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l4tfzAAQy9k8hNFsYME3CZijPo7BvlxveOM3k6PSGJs.png?width=960&crop=smart&auto=webp&s=4cc762a6e722644b7d586177920fa88378942565', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l4tfzAAQy9k8hNFsYME3CZijPo7BvlxveOM3k6PSGJs.png?width=1080&crop=smart&auto=webp&s=79c576521059331401a728bd069d697f529e027a', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/l4tfzAAQy9k8hNFsYME3CZijPo7BvlxveOM3k6PSGJs.png?auto=webp&s=241f7c145ce6bc8634c0f10d3d8346b9b921e4f7', 'width': 1200}, 'variants': {}}]} |
Finetuning and RL | 3 | Hey guys, I am trying to finetune a VLM to output information from custom documents, like amount, currency, order number, etc …
I prepared a dataset thanks to Python scripts, and after reviewing everything I have 1000 JSON lines with 1000 associated images (80% for train and 20% for val).
I'm using Unsloth and I tried Qwen 2.5-VL 72B (rented an RTX 6000 Pro on RunPod). Honestly the results are disappointing: it gives me the JSON I wanted, but not all the information is correct, like errors in the order numbers…
What am I doing wrong? Should I go with the 7B? Should I do RL? Should I write a really specific prompt in the JSON training data? I'm open to any suggestions.
What are the core and principal things I should know while doing FT and RL?
Thanks | 2025-10-01T19:33:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nvi36j/finetunning_and_rl/ | Severe_Biscotti2349 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvi36j | false | null | t3_1nvi36j | /r/LocalLLaMA/comments/1nvi36j/finetunning_and_rl/ | false | false | self | 3 | null |
Quantized Voxtral-24B? | 6 | I've been playing with Voxtral 3B and it seems very good for transcription, plus has a bit of intelligence for other tasks. So started wondering about the 24B for an "all in one" setup, but don't have enough VRAM to run full precision.
The 24B in GGUF (Q6, llama.cpp server) seemed really prone to repetition loops so I've tried setting up the FP8 (RedhatAI) in vllm - but it looks like it can't "see" the audio and just generates empty output.
Exactly the same code and query with the full precision 3B seems to work fine (in vllm)
I'm using an A6000 48Gb (non-ADA). Does anyone else have any experience?
| 2025-10-01T19:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nvhazr/quantized_voxtral24b/ | thigger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvhazr | false | null | t3_1nvhazr | /r/LocalLLaMA/comments/1nvhazr/quantized_voxtral24b/ | false | false | self | 6 | null |
My Journey with RAG, OpenSearch & LLMs (Local LLM) | 8 | It all started with a simple goal - "Learning basic things to understand the complex stuffs".
Objective: Choose any existing OpenSearch index with auto field mapping or simply upload a PDF and start chatting with your documents.
I recently built a personal project that combines "OpenSearch as a Vector DB" with local (Ollama) and cloud (OpenAI) models to create a flexible Retrieval-Augmented Generation (RAG) system for documents.
👉 The spark came from JamWithAI’s “Build a Local LLM-based RAG System for Your Personal Documents”. Their approach gave me the foundation and inspired me to extend it further to experiment with:
🔧 Dynamic Index Selection – choose any OpenSearch index with auto field mapping
🔍 Hybrid Search – semantic KNN + BM25 keyword ranking
🤖 Multiple Response Modes – Chat (Ollama/OpenAI), Hybrid, or Search-only
🛡️ Security-first design – path traversal protection, input validation, safe file handling
⚡ Performance boost – 32 times faster embeddings, batching, connection pooling
📱 Progressive UI – clean by default, advanced options when needed
Now I have a fully working AI Document Assistant - Enhanced RAG with OpenSearch + LLMs (Ollama + OpenAI).
Special mention "JAMWITHAI" : https://jamwithai.substack.com/p/build-a-local-llm-based-rag-system
🔗 Full README & code: https://github.com/AldrinAJ/local-rag-improved/blob/main/README.md
Try it out, fork it, or extend it further.
Related post: https://www.linkedin.com/posts/aldrinwilfred_ai-rag-opensearch-activity-7379196402494603264-KWv5?utm_source=share&utm_medium=member_android&rcm=ACoAABKYxakBxAwmVshLGfWsaVQtRX-7pphL4z0 | 2025-10-01T18:57:02 | AldrinWilfred | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvh2zj | false | null | t3_1nvh2zj | /r/LocalLLaMA/comments/1nvh2zj/my_journey_with_rag_opensearch_llms_local_llm/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'r3wpsst0sjsf1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/r3wpsst0sjsf1.jpeg?width=108&crop=smart&auto=webp&s=3bfb46b8eb1400a5c272e80fa89fa3af4dcc0441', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/r3wpsst0sjsf1.jpeg?width=216&crop=smart&auto=webp&s=9211510c3bf35a4623a1ce8c0ca49d3e46a1eb08', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/r3wpsst0sjsf1.jpeg?width=320&crop=smart&auto=webp&s=123767eb468cab8c3f51985bb1d7ada841942567', 'width': 320}, {'height': 334, 'url': 'https://preview.redd.it/r3wpsst0sjsf1.jpeg?width=640&crop=smart&auto=webp&s=9cb52bf320446cac91713ae12f9073dd9c0027e9', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/r3wpsst0sjsf1.jpeg?width=960&crop=smart&auto=webp&s=95264f87db2c383cbfc2729a37cc03542c682865', 'width': 960}, {'height': 565, 'url': 'https://preview.redd.it/r3wpsst0sjsf1.jpeg?width=1080&crop=smart&auto=webp&s=20aefe84d36641f97092b0fb24ab7b6e6ca62081', 'width': 1080}], 'source': {'height': 565, 'url': 'https://preview.redd.it/r3wpsst0sjsf1.jpeg?auto=webp&s=0383eea16f70f118fb4b55d503b0319c858fdd69', 'width': 1080}, 'variants': {}}]} | |
the last edge device. live on the bleeding edge. the edge ai you have been looking for. | 0 | took me weeks to locate this and i had to learn some China speak but u can compile it in English.!!!
[https://www.waveshare.com/esp32-c6-touch-lcd-1.69.htm](https://www.waveshare.com/esp32-c6-touch-lcd-1.69.htm)
[https://github.com/78/xiaozhi-esp32](https://github.com/78/xiaozhi-esp32)
[https://ccnphfhqs21z.feishu.cn/wiki/F5krwD16viZoF0kKkvDcrZNYnhb](https://ccnphfhqs21z.feishu.cn/wiki/F5krwD16viZoF0kKkvDcrZNYnhb)
get a translator. thank me later!
this is a fully MCP-compatible, edge agentic AI device!!!!! and it's still under $30! what!!
this should be on every single person's to-do list. this has allllll the potential.
Anyone try this one yet? Can it run quantized? | 0 | My GPU is 6GB and I'm guessing it wouldn't handle the full model very well.
https://preview.redd.it/fwf47i19pjsf1.png?width=602&format=png&auto=webp&s=7eb13c264b0f50b89b0445bbc8ff02da5388f1e2
[https://huggingface.co/LiquidAI/LFM2-Audio-1.5B](https://huggingface.co/LiquidAI/LFM2-Audio-1.5B)
| 2025-10-01T18:42:07 | https://www.reddit.com/r/LocalLLaMA/comments/1nvgo0e/anyone_try_this_one_yet_can_it_run_quantized/ | ArchdukeofHyperbole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvgo0e | false | null | t3_1nvgo0e | /r/LocalLLaMA/comments/1nvgo0e/anyone_try_this_one_yet_can_it_run_quantized/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'FC6v8vFo8f3y81txfcDk1FHYHF0e9i3lK5x8OSCLocw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FC6v8vFo8f3y81txfcDk1FHYHF0e9i3lK5x8OSCLocw.png?width=108&crop=smart&auto=webp&s=b88bbb32c88b41f3d2cf3d780b93d68b4fadf366', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FC6v8vFo8f3y81txfcDk1FHYHF0e9i3lK5x8OSCLocw.png?width=216&crop=smart&auto=webp&s=6816c3c85279aa6c551ec5f2bae532fba8b6a0c1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FC6v8vFo8f3y81txfcDk1FHYHF0e9i3lK5x8OSCLocw.png?width=320&crop=smart&auto=webp&s=8ca4013c233d478d56786478566d944b2c915e3c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FC6v8vFo8f3y81txfcDk1FHYHF0e9i3lK5x8OSCLocw.png?width=640&crop=smart&auto=webp&s=0dceb57876e8e9f37cdcddf250e40fc45520b237', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FC6v8vFo8f3y81txfcDk1FHYHF0e9i3lK5x8OSCLocw.png?width=960&crop=smart&auto=webp&s=ecd625bfc6e94e1942f576bb90062d71ce36d5d2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FC6v8vFo8f3y81txfcDk1FHYHF0e9i3lK5x8OSCLocw.png?width=1080&crop=smart&auto=webp&s=4ff2d99321acc5d56be1924d74dc148f954aa840', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FC6v8vFo8f3y81txfcDk1FHYHF0e9i3lK5x8OSCLocw.png?auto=webp&s=4077f5dd0bf8ee0f0841a0389dfde906e65a0202', 'width': 1200}, 'variants': {}}]} | |
Qwen 235B on 2x3090's vs 3x MI50 | 14 | # I've maxed out my 2x3090's, like so:
`./llama.cpp/build/bin/llama-server \`
`--model models/Qwen_Qwen3-235B-A22B-Instruct-2507-IQ4_XS-00001-of-00004.gguf \`
`--n-gpu-layers 999 \`
`--override-tensor "blk\.((1[6-9])|[2-4]\d|6[4-9]|[7-9]\d)\.ffn_.*_exps\.weight=CPU" \`
`--cache-type-k q8_0 \`
`--cache-type-v q8_0 \`
`-c 16384 \`
`-fa \`
`--host 0.0.0.0`
It took me a lot of trial & error to get that regex; it keeps the critical "attention" (attn) tensors for all 95 layers on the fast GPUs, while offloading only the large, less-impactful "expert" (ffn) tensors of specific layers (16-49 and 64-99) to the CPU.
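To sanity-check which layers a pattern like this offloads, you can replay it in Python against synthetic tensor names (the tensor name below is illustrative, and llama.cpp's own matching may differ in detail):

```python
import re

# The layer-selecting part of the --override-tensor pattern above:
# layers 16-19, 20-49, 64-69 and 70-99 get their expert FFN weights on CPU.
pattern = re.compile(r"blk\.((1[6-9])|[2-4]\d|6[4-9]|[7-9]\d)\.ffn_.*_exps\.weight")

# Check every layer index of a 95-layer model against a sample tensor name.
offloaded = [
    i for i in range(95)
    if pattern.fullmatch(f"blk.{i}.ffn_gate_exps.weight")
]
print(offloaded)  # layers 16-49 and 64-94
```

Tweaking the digit ranges and re-running a check like this is much faster than reloading a 120GB model to see what ended up where.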
Using `--n-gpu-layers 33` (the max I could fit on them); I got
>
With the above approach:
>
So while context ingestion speed is about the same, generation goes from 5 -> 8 t/s (about 60% faster).
# More VRAM
https://preview.redd.it/g6g0u6cbpjsf1.png?width=1080&format=png&auto=webp&s=49e7371a3bb04b6cd655a7592ad95e800340c45c
Even though individually the MI50's are slower, 3x of them is 96 GB VRAM. VS 48GB of the 2x 3090's.
I can't fit 3x 3090s because my motherboard (Asus X99 Deluxe) has 6 'slots'. So it's 2x 3090s (3 slots each) OR 3x 2-slot GPUs (MI50).
Qwen 235B is 120GB @ IQ4, meaning 48/120 = 40% fits in VRAM currently. At 96GB it's 80%. Would it be worth it? Selling the 2x 3090s and putting 3x MI50s back in there?
Qwen 235B on 2x3090's vs 3x MI50 | 1 | # I've maxed out my 2x3090's, like so:
`./llama.cpp/build/bin/llama-server \`
`--model models/Qwen_Qwen3-235B-A22B-Instruct-2507-IQ4_XS-00001-of-00004.gguf \`
`--n-gpu-layers 999 \`
`--override-tensor "blk\.((1[6-9])|[2-4]\d|6[4-9]|[7-9]\d)\.ffn_.*_exps\.weight=CPU" \`
`--cache-type-k q8_0 \`
`--cache-type-v q8_0 \`
`-c 16384 \`
`-fa \`
`--host 0.0.0.0`
It took me a lot of trial & error to get that regex; it keeps the critical "attention" (attn) tensors for all 95 layers on the fast GPUs, while offloading only the large, less-impactful "expert" (ffn) tensors of specific layers (16-49 and 64-99) to the CPU.
Using `--n-gpu-layers 33` (the max I could fit on them); I got
>
With the above approach:
>
So while context ingestion speed is about the same, generation goes from 5 -> 8 t/s (about 60% faster).
# More VRAM
https://preview.redd.it/28jb64b6njsf1.png?width=1536&format=png&auto=webp&s=e0c876d58e890f624001b4b6db8c31aad0c54ab9
Even though individually the MI50's are slower, 3x of them is 96 GB VRAM. VS 48GB of the 2x 3090's.
I can't fit 3x 3090s because my motherboard (Asus X99 Deluxe) has 6 'slots'. So it's 2x 3090s (3 slots each) OR 3x 2-slot GPUs (MI50).
Qwen 235B is 120GB @ IQ4, meaning 48/120 = 40% fits in VRAM currently. At 96GB it's 80%. Would it be worth it? Selling the 2x 3090s and putting 3x MI50s back in there?
Help me with my product research? | 2 | My co-founder and I are developing a Claude Code alternative that works entirely locally. I'm conducting customer research on why developers switch between AI coding assistants (or abandon them entirely). Initial conversations suggest frustration with usage limits, unpredictable costs, and privacy concerns, but I'm collecting quantitative validation.
5-minute survey covers:
\- Current tool usage patterns
\- Specific frustration points
\- Feature importance ratings
\- Switching triggers and barriers
Survey link: [https://forms.gle/9KESTQwgfa2VgYe9A](https://forms.gle/9KESTQwgfa2VgYe9A)
(We're sharing results)
All thoughts and feedback appreciated. I'd like to understand how developers actually feel about these tools! | 2025-10-01T18:32:39 | https://forms.gle/9KESTQwgfa2VgYe9A | amplify895 | forms.gle | 1970-01-01T00:00:00 | 0 | {} | 1nvgei6 | false | null | t3_1nvgei6 | /r/LocalLLaMA/comments/1nvgei6/help_me_with_my_product_research/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'v5O5jIqeE8atpyV6axTW18894y1d0Rmdu-b_7zFh8lw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/v5O5jIqeE8atpyV6axTW18894y1d0Rmdu-b_7zFh8lw.png?width=108&crop=smart&auto=webp&s=e0a2205b72c8ab933545e362dd1d9083808cda46', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/v5O5jIqeE8atpyV6axTW18894y1d0Rmdu-b_7zFh8lw.png?width=216&crop=smart&auto=webp&s=1651658cd421c131cd118b7057725346d187331b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/v5O5jIqeE8atpyV6axTW18894y1d0Rmdu-b_7zFh8lw.png?width=320&crop=smart&auto=webp&s=bce3571906fb93fedef954a2764a4a0d275afae2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/v5O5jIqeE8atpyV6axTW18894y1d0Rmdu-b_7zFh8lw.png?width=640&crop=smart&auto=webp&s=ca69041d1e6e5c229940e7926f1c91b32a0dc03b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/v5O5jIqeE8atpyV6axTW18894y1d0Rmdu-b_7zFh8lw.png?width=960&crop=smart&auto=webp&s=1df3e88242998816a1ba2c2209d0816489a6ac60', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/v5O5jIqeE8atpyV6axTW18894y1d0Rmdu-b_7zFh8lw.png?width=1080&crop=smart&auto=webp&s=578f00440b986be90d845da1f18a5a5c58e2d262', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/v5O5jIqeE8atpyV6axTW18894y1d0Rmdu-b_7zFh8lw.png?auto=webp&s=3cea0bfe2a95c195191e43f470a94fd2fbf126af', 'width': 1200}, 'variants': {}}]} |
KaniTTS-370M Released: Multilingual Support + More English Voices | 60 | Hi everyone!
Thanks for the awesome feedback on our first KaniTTS release!
We’ve been hard at work and have released [kani-tts-370m](https://huggingface.co/nineninesix/kani-tts-370m).
It’s still built for speed and quality on consumer hardware, but now with expanded language support and more English voice options.
### What’s New:
- **Multilingual Support**: German, Korean, Chinese, Arabic, and Spanish (with fine-tuning support). Prosody and naturalness improved across these languages.
- **More English Voices**: Added a variety of new English voices.
- **Architecture**: Same two-stage pipeline (LiquidAI LFM2-370M backbone + NVIDIA NanoCodec). Trained on ~80k hours of diverse data.
- **Performance**: Generates 15s of audio in ~0.9s on an RTX 5080, using 2GB VRAM.
- **Use Cases**: Conversational AI, edge devices, accessibility, or research.
It’s still Apache 2.0 licensed, so dive in and experiment.
**Repo**: https://github.com/nineninesix-ai/kani-tts
**Model**: https://huggingface.co/nineninesix/kani-tts-370m
**Space**: https://huggingface.co/spaces/nineninesix/KaniTTS
**Website**: https://www.nineninesix.ai/n/kani-tts
Let us know what you think, and share your setups or use cases! | 2025-10-01T18:31:29 | https://huggingface.co/nineninesix/kani-tts-370m | ylankgz | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nvgdc0 | false | null | t3_1nvgdc0 | /r/LocalLLaMA/comments/1nvgdc0/kanitts370m_released_multilingual_support_more/ | false | false | default | 60 | {'enabled': False, 'images': [{'id': 'KHH1etcwG-Fh5zDMMYlDVLCEi47zu68tc3z1IQ_zSK8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KHH1etcwG-Fh5zDMMYlDVLCEi47zu68tc3z1IQ_zSK8.png?width=108&crop=smart&auto=webp&s=d40adec8b63a69832644b3bbbfb73fab5eaae73b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KHH1etcwG-Fh5zDMMYlDVLCEi47zu68tc3z1IQ_zSK8.png?width=216&crop=smart&auto=webp&s=a3f3cd9f23f857d9e746efab829804261771084f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KHH1etcwG-Fh5zDMMYlDVLCEi47zu68tc3z1IQ_zSK8.png?width=320&crop=smart&auto=webp&s=6bb8e033b02b9386bc29bec2467e4a5b4f715c1b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KHH1etcwG-Fh5zDMMYlDVLCEi47zu68tc3z1IQ_zSK8.png?width=640&crop=smart&auto=webp&s=0661e5699468588875c17a70fe6fc5d482260d59', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KHH1etcwG-Fh5zDMMYlDVLCEi47zu68tc3z1IQ_zSK8.png?width=960&crop=smart&auto=webp&s=ad1d0cdf2656c09bc0be05a532a3b51c5a530ec6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KHH1etcwG-Fh5zDMMYlDVLCEi47zu68tc3z1IQ_zSK8.png?width=1080&crop=smart&auto=webp&s=f887b7839380fc36b6b98768ceacc61770414d2a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KHH1etcwG-Fh5zDMMYlDVLCEi47zu68tc3z1IQ_zSK8.png?auto=webp&s=e6ab0533fd5ac90585425297f1dd7df2f006a086', 'width': 1200}, 'variants': {}}]} |
Anyone here gone from custom RAG builds to an actual product? | 12 | I’m working with a mid nine-figure revenue real estate firm right now, basically building them custom AI infra. Right now I’m more like an agency than a startup: I spin up private chatbots/assistants, connect them to internal docs, keep everything compliant/on-prem, and tailor it case by case.
It works, but the reality is RAG is still pretty flawed. Chunking is brittle, context windows are annoying, hallucinations creep in, and once you add version control, audit trails, RBAC, multi-tenant needs… it’s not simple at all.
I’ve figured out ways around a lot of this for my own projects, but I want to start productizing instead of just doing bespoke builds forever.
For people here who’ve been in the weeds with RAG/internal assistants:
– What part of the process do you find the most tedious?
– If you could snap your fingers and have one piece already productized, what would it be?
I’d rather hear from people who’ve actually shipped this stuff, not just theory. Curious what’s been your biggest pain point. | 2025-10-01T18:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/1nvg83w/anyone_here_gone_from_custom_rag_builds_to_an/ | Old_Assumption2188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvg83w | false | null | t3_1nvg83w | /r/LocalLLaMA/comments/1nvg83w/anyone_here_gone_from_custom_rag_builds_to_an/ | false | false | self | 12 | null |
Hunyuan Image 3.0 vs HunyuanImage 2.1 | 20 | Which of the two archtictures is better for text to image in your opinion ? | 2025-10-01T18:06:16 | Severe-Awareness829 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvfnyu | false | null | t3_1nvfnyu | /r/LocalLLaMA/comments/1nvfnyu/hunyuan_image_30_vs_hunyuanimage_21/ | false | false | default | 20 | {'enabled': True, 'images': [{'id': '5mci5v1vijsf1', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/5mci5v1vijsf1.png?width=108&crop=smart&auto=webp&s=956d1025dba78f8b36cc7fe6339bf8350854efd0', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/5mci5v1vijsf1.png?width=216&crop=smart&auto=webp&s=a62419a542854d82aced09f106bf13ecf26e4370', 'width': 216}, {'height': 307, 'url': 'https://preview.redd.it/5mci5v1vijsf1.png?width=320&crop=smart&auto=webp&s=2b6ecca35cad5d61010cfaa25694f6d04a53f805', 'width': 320}, {'height': 614, 'url': 'https://preview.redd.it/5mci5v1vijsf1.png?width=640&crop=smart&auto=webp&s=235095feada4cfbebe6f2ffd0244bd8f4c28433b', 'width': 640}], 'source': {'height': 787, 'url': 'https://preview.redd.it/5mci5v1vijsf1.png?auto=webp&s=8ef16d58b687bbe72c729ab7c54a47e65aaa6879', 'width': 819}, 'variants': {}}]} | |
Used Llama 3.3 70B Versatile from Together AI to make Examsprint AI | 0 | I am Aadarsh Pandey, 13 y/o, from India. I am the developer and founder of Examsprint AI.
features of Examsprint AI are:
Chapters and topics list
Direct NCERT Links
Practice questions in the form of Flashcards, specialised for each chapter [For Class 11 and 12]
Personal AI chatbot to SOLVE any type of question regarding Physics, Chemistry, Biology and Maths
TOPPER'S Notes [Variety from class 9 to 12]
AI chatbot that gives visual representation with textual answer for better understanding
JEE blueprint
Neet blueprint
Boards blueprint
School blueprints
Specialised TOPPER'S HANDWRITTEN NOTES with Interactive AI notes for better understanding.
NOTES ARE AVAILABLE IN BOTH VIEWABLE AND FREE DOWNLOADABLE FORMS.
NCERT BACK EXERCISE SOLUTIONS
SOF OLYMPIADS PYQ COMING SOON
FORMULA SHEET COMING SOON
BOARDS ARENA COMING SOON
STUDY AND LIGHT MODE PRESENT
JEE/NEET ARENA COMING SOON
ABSOLUTELY FREE OF COST
CAN USE WITHOUT SIGNING IN
FAQ's for INSTANT DOUBT-solving regarding USE and WEBSITE
Upto date calendar for instant date previews | 2025-10-01T18:01:19 | Thick-Hope6979 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvfixc | false | null | t3_1nvfixc | /r/LocalLLaMA/comments/1nvfixc/used_llama_33_70b_versatile_from_together_ai_to/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'fa72trw2ijsf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/fa72trw2ijsf1.png?width=108&crop=smart&auto=webp&s=44b9f3a1bafbf31192dbaa714f3d70257e4f751c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/fa72trw2ijsf1.png?width=216&crop=smart&auto=webp&s=ea0d44bf6290894df09170c9ea2ef6e224573cf6', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/fa72trw2ijsf1.png?width=320&crop=smart&auto=webp&s=8d8a9868f245b18c185322ebe2fad20739d85804', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/fa72trw2ijsf1.png?width=640&crop=smart&auto=webp&s=d9b0edaeb992f55c2c89fbcc2405395cea3d3064', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/fa72trw2ijsf1.png?width=960&crop=smart&auto=webp&s=7587f164b5dd180dbf3572b5db6330eeda482301', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/fa72trw2ijsf1.png?width=1080&crop=smart&auto=webp&s=5a710abe79bc8ca93b02fc50af7447e8bdf5f9d3', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/fa72trw2ijsf1.png?auto=webp&s=cb7ceeac5b4a6aff34eb53b552f3c360b9c2d5d7', 'width': 1080}, 'variants': {}}]} | |
I've built Jarvis completely on-device in the browser | 150 | 2025-10-01T17:31:15 | https://v.redd.it/hge6ipzncjsf1 | nicodotdev | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nveoru | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hge6ipzncjsf1/DASHPlaylist.mpd?a=1761931893%2CMjUwYjM0OGEyMWI1YzU1MGQxZWQxZTk4ZjBjNGFkNmY3NTIxMGQ3ZjYwM2I5MWRjOGNiODM1NzEwYjUwNjMyYQ%3D%3D&v=1&f=sd', 'duration': 84, 'fallback_url': 'https://v.redd.it/hge6ipzncjsf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/hge6ipzncjsf1/HLSPlaylist.m3u8?a=1761931893%2CMDVlYTg1ZWE4NTBiMjc2MzI3ODFlM2EyZTliMzg1YTI0NzE3Y2FlMjk4NmY4NTljYzQyZDkwMWRhMjRiODI4NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hge6ipzncjsf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1nveoru | /r/LocalLLaMA/comments/1nveoru/ive_built_jarvis_completely_ondevice_in_the/ | false | false | 150 | {'enabled': False, 'images': [{'id': 'dWNmajhwem5janNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dWNmajhwem5janNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=108&crop=smart&format=pjpg&auto=webp&s=f25622f92dca826af51f579c58276473ff5b0b44', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dWNmajhwem5janNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=216&crop=smart&format=pjpg&auto=webp&s=21bbe57c4665778039b6607aab6fc768f59e98a5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dWNmajhwem5janNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=320&crop=smart&format=pjpg&auto=webp&s=4f21510518b1c5f065505ad01f90a0bc65676e11', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dWNmajhwem5janNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=640&crop=smart&format=pjpg&auto=webp&s=1f8d20a4d065cd1449482b93051ca694385721ab', 'width': 640}, 
{'height': 540, 'url': 'https://external-preview.redd.it/dWNmajhwem5janNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=960&crop=smart&format=pjpg&auto=webp&s=d6990018f5e13bc57a375272cbe56a1928979a37', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dWNmajhwem5janNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=037d93dba1669702ccb4a5f3bfecd680c2f173e2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dWNmajhwem5janNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?format=pjpg&auto=webp&s=bb220fdb8306a6cda164b627c9b638dbf57d9eb8', 'width': 1920}, 'variants': {}}]} | ||
I've built Jarvis completely on-device in the browser | 1 | [removed] | 2025-10-01T17:27:42 | https://v.redd.it/ij8yc7izajsf1 | nicodotdev | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvel5s | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ij8yc7izajsf1/DASHPlaylist.mpd?a=1761931680%2COTQ2ZmJkYTNmMjY5NjkyNmMxODAwMzM2YzM4ZjEzNjI3NDJkMzE5MTkxYjE3MjE5MTliOTI1MjllMmJlM2E5MQ%3D%3D&v=1&f=sd', 'duration': 84, 'fallback_url': 'https://v.redd.it/ij8yc7izajsf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ij8yc7izajsf1/HLSPlaylist.m3u8?a=1761931680%2CNmEyODE2Mzk1YzhjMzNiMjlmZTI1NDNiZjBkZjA2M2ZhOTU3ZTYwYWMxODdiMjA5NmRhNjQ0YjA5ZjY2ZmI0Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ij8yc7izajsf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1nvel5s | /r/LocalLLaMA/comments/1nvel5s/ive_built_jarvis_completely_ondevice_in_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OWwzZmw3aXphanNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OWwzZmw3aXphanNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=108&crop=smart&format=pjpg&auto=webp&s=b72cf135580757d9c7f2e1dce0c9e6d0f1277403', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OWwzZmw3aXphanNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=216&crop=smart&format=pjpg&auto=webp&s=831bdc64d326e4ad05651745de3e1caf71eed334', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OWwzZmw3aXphanNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=320&crop=smart&format=pjpg&auto=webp&s=c6eed6dee7811fdb5271a83e61bf3809ba74433f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OWwzZmw3aXphanNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=640&crop=smart&format=pjpg&auto=webp&s=88338104d96e2425917cdbaa27049461c6e23041', 'width': 
640}, {'height': 540, 'url': 'https://external-preview.redd.it/OWwzZmw3aXphanNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=960&crop=smart&format=pjpg&auto=webp&s=c755d3c5a790d73dbc506f533b51692ed1072178', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OWwzZmw3aXphanNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c2d8797b6c3b1e2421116b8edf4f7ac41ae67ac3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OWwzZmw3aXphanNmMXGz1aMo2QiMkpgt6v7Z9vfboXTlOgdFBasYHpD7porA.png?format=pjpg&auto=webp&s=5b0e68f31b47ab0e89ab7d4e72b2906db99c8236', 'width': 1920}, 'variants': {}}]} | |
How to use mmproj files + Looking for uncensored model for sorting images. | 15 | Twofold post.
I have several hundred pornographic images that I've downloaded over the years. Almost all of them have names like "0003.jpg" or "{randomAlphanumericName}.jpg".
I am looking for an uncensored model that can look at these images and return a name and some tags based on the image contents; then I'll use a script to rename the files and exiftool to tag them.
I have found https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v3-GGUF
and was told to use https://huggingface.co/bartowski/google_gemma-3-27b-it-GGUF/blob/main/mmproj-google_gemma-3-27b-it-bf16.gguf
to give it vision, but I'm still working out how to do that. I think I just need to make a Modelfile with FROM lines pointing at both of those files, but I haven't gotten that far yet.
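For the renaming half of the plan, here's a minimal sketch of turning model-suggested titles into safe, collision-free filenames (the model call and the exiftool tagging step are left out, and all names here are illustrative):

```python
import re
from pathlib import Path

def safe_name(title: str) -> str:
    """Turn a model-suggested title into a filesystem-safe file stem."""
    stem = re.sub(r"[^A-Za-z0-9]+", "_", title).strip("_").lower()
    return stem or "untitled"

def rename_plan(folder: Path, suggestions: dict) -> list:
    """Map old filenames to new paths, deduplicating title collisions."""
    plan, seen = [], {}
    for old_name, title in suggestions.items():
        stem = safe_name(title)
        count = seen.get(stem, 0)
        seen[stem] = count + 1
        new_name = f"{stem}.jpg" if count == 0 else f"{stem}_{count}.jpg"
        plan.append((folder / old_name, folder / new_name))
    return plan
```

Printing the plan and eyeballing it before actually calling `Path.rename` on each pair is a cheap safety net when the model's suggestions are unpredictable.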
Any advice is appreciated! | 2025-10-01T17:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/1nvdz7g/how_to_use_mmproj_files_looking_for_uncensored/ | LockedCockOnTheBlock | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvdz7g | false | null | t3_1nvdz7g | /r/LocalLLaMA/comments/1nvdz7g/how_to_use_mmproj_files_looking_for_uncensored/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': '2ZVQVMMOfYeqLYdx1eOal-hpRzFTM_NxGzVZYURLoyo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2ZVQVMMOfYeqLYdx1eOal-hpRzFTM_NxGzVZYURLoyo.png?width=108&crop=smart&auto=webp&s=59c908ddf30b48d0de03c616e907235827e56abb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2ZVQVMMOfYeqLYdx1eOal-hpRzFTM_NxGzVZYURLoyo.png?width=216&crop=smart&auto=webp&s=065f5a3421acec7852e433a237cb8fe8835534f3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2ZVQVMMOfYeqLYdx1eOal-hpRzFTM_NxGzVZYURLoyo.png?width=320&crop=smart&auto=webp&s=0c09655ef62e0d9e3e868b21ef714a19860d409c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2ZVQVMMOfYeqLYdx1eOal-hpRzFTM_NxGzVZYURLoyo.png?width=640&crop=smart&auto=webp&s=05076f09e12391a11317ac854b336c2166411a41', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2ZVQVMMOfYeqLYdx1eOal-hpRzFTM_NxGzVZYURLoyo.png?width=960&crop=smart&auto=webp&s=217f52c384967cde1ae78492d2832216ff3a1677', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2ZVQVMMOfYeqLYdx1eOal-hpRzFTM_NxGzVZYURLoyo.png?width=1080&crop=smart&auto=webp&s=eca784ca12baa7d371d1ff98fe4295e888e0b0ed', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2ZVQVMMOfYeqLYdx1eOal-hpRzFTM_NxGzVZYURLoyo.png?auto=webp&s=928971003d12711207a8076423a454c4fed926bc', 'width': 1200}, 'variants': {}}]} |
NVIDIA DGX Spark expected to become available in October 2025 | 59 | It looks like we will finally get to know how well or badly the NVIDIA GB10 performs in October (2025!) or November depending on the shipping times.
In the [NVIDIA developer forum](https://forums.developer.nvidia.com/t/dgx-spark-release-updates/341703/90) this article was posted:
[https://www.ctee.com.tw/news/20250930700082-430502](https://www.ctee.com.tw/news/20250930700082-430502)
>*GB10 new products to be launched in October... Taiwan's four major PC brand manufacturers see praise in Q4*
>*\[..\] In addition to NVIDIA's public version product delivery schedule waiting for NVIDIA's final decision, the GB10 products of Taiwanese manufacturers ASUS, Gigabyte, MSI, and Acer are all expected to be officially shipped in October. Among them, ASUS, which has already opened a wave of pre-orders in the previous quarter, is rumored to have obtained at least 18,000 sets of GB10 configurations in the first batch, while Gigabyte has about 15,000 sets, and MSI also has a configuration scale of up to 10,000 sets. It is estimated that including the supply on hand from Acer, the four major Taiwanese manufacturers will account for about 70% of the available supply of GB10 in the first wave. \[..\]*
(translated with Google Gemini as Chinese is still on my list of languages to learn...)
Looking forward to the first reports/benchmarks. 🧐 | 2025-10-01T17:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nvdyiy/nvidia_dgx_spark_expected_to_become_available_in/ | Excellent_Produce146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvdyiy | false | null | t3_1nvdyiy | /r/LocalLLaMA/comments/1nvdyiy/nvidia_dgx_spark_expected_to_become_available_in/ | false | false | self | 59 | {'enabled': False, 'images': [{'id': 'hnZDIk_TY24WLB527CbQAHCEEc09FIgy3quBz_-6dgo', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/hnZDIk_TY24WLB527CbQAHCEEc09FIgy3quBz_-6dgo.png?width=108&crop=smart&auto=webp&s=4100a99cf2530f027c96c480d9b128ddc40819e1', 'width': 108}], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/hnZDIk_TY24WLB527CbQAHCEEc09FIgy3quBz_-6dgo.png?auto=webp&s=d978c93d1330d2d4dc4ee8a0decc2f8d12cd02ef', 'width': 150}, 'variants': {}}]} |
What's your hope we still get to see GLM 4.6 Air? | 2 | There's been a statement by Z Ai that they won't release an Air version of 4.6 for now. Do you think we still get to see it? | 2025-10-01T17:04:30 | https://www.reddit.com/r/LocalLLaMA/comments/1nvdy0u/whats_your_hope_we_still_get_to_see_glm_46_air/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvdy0u | false | null | t3_1nvdy0u | /r/LocalLLaMA/comments/1nvdy0u/whats_your_hope_we_still_get_to_see_glm_46_air/ | false | false | self | 2 | null |
ERNIE-4.5-21B-A3B-Thinking — impressions after some testing | 16 | Been playing around with ERNIE-4.5-21B-A3B-Thinking for a bit and figured I’d drop my thoughts. This is Baidu’s “thinking” model for logic, math, science, and coding.
What stood out to me:
Long context works: 128K token window actually does what it promises. I’ve loaded multi-page papers and notes, and it keeps things coherent better than most open models I’ve tried.
Math & code: Handles multi-step problems pretty solidly. Small scripts work fine; for bigger coding tasks I’d still pick Qwen. I was surprised by how little it hallucinates on structured problems.
Performance: 21B params total, ~3B active thanks to MoE. Feels smoother than you’d expect for a model this size.
Reasoning style: Focused and doesn’t ramble unnecessarily. Good at staying on track.
Text output: Polished enough that it works well for drafting, summaries, or light creative writing.
Best use cases: Really strong for reasoning and analysis. Weaker if you’re pushing it into larger coding projects or very complex/nuanced creative writing.
So far, it’s been useful for checking reasoning steps, parsing documents, or running experiments where I need something to actually “think through” a problem instead of shortcutting.
Curious - anyone else using it for long docs, planning tasks, or multi-step problem solving? What’s been working for you? | 2025-10-01T16:58:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nvdrig/ernie4521ba3bthinking_impressions_after_some/ | locaf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvdrig | false | null | t3_1nvdrig | /r/LocalLLaMA/comments/1nvdrig/ernie4521ba3bthinking_impressions_after_some/ | false | false | self | 16 | null |
Connecting 6 AMD AI Max 395+ for Qwen3-235B-A22B. Is this really that much faster than just 1 server? | 18 | The presenter claimed it reaches 32 tokens/s with a 132 ms time to first token for the Qwen3-235B-A22B IQ4 model, which needs 100+ GB of memory.
How much better is this than a single 128GB AI Max 395+? | 2025-10-01T16:48:30 | https://b23.tv/TO5oW7j | erichang | b23.tv | 1970-01-01T00:00:00 | 0 | {} | 1nvdhws | false | null | t3_1nvdhws | /r/LocalLLaMA/comments/1nvdhws/connecting_6_amd_ai_max_395_for_qwen3235ba22b_is/ | false | false | default | 18 | null |
I built an open-source local LLM app with real-time sync (CRDT) and inline tool calls | 5 | I spent the last few months creating an LLM app built on conflict-free replicated data types (CRDTs) and embedded jupyter notebooks. I don't believe there's a one-size-fits-all approach to tools/RAG/memory and I wanted a chat app that just yields control to the end-user/developer. The CRDTs are to keep data in sync across devices (collaborative editing + distributed use cases) and they also provide message delivery guarantees so prompts never get eaten by networking issues.
It's fully open-sourced (MIT), operates totally offline, and there's no telemetry or other shenanigans - and it wasn't vibe-coded. The repo is available here: https://github.com/Reclusive-Inc/closed-circuit-ai
I'm pretty happy with how it turned out and I hope other developers will find it useful for working with tool-calling LLMs! | 2025-10-01T16:47:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nvdgln/i_built_an_opensource_local_llm_app_with_realtime/ | reclusive-sky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvdgln | false | null | t3_1nvdgln | /r/LocalLLaMA/comments/1nvdgln/i_built_an_opensource_local_llm_app_with_realtime/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'NY4G_CQMFkp6Xz1xZSEU_FuEbuXokZ-VudaXlHqmeUs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NY4G_CQMFkp6Xz1xZSEU_FuEbuXokZ-VudaXlHqmeUs.png?width=108&crop=smart&auto=webp&s=ff53f521556e7b221b3b30288b7fe7cd5b24e7c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NY4G_CQMFkp6Xz1xZSEU_FuEbuXokZ-VudaXlHqmeUs.png?width=216&crop=smart&auto=webp&s=849b0761f33c4c79327659e2069fee69fff90867', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NY4G_CQMFkp6Xz1xZSEU_FuEbuXokZ-VudaXlHqmeUs.png?width=320&crop=smart&auto=webp&s=401caa39bd24e3f1d66442c7dab5dfb413d1e1fc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NY4G_CQMFkp6Xz1xZSEU_FuEbuXokZ-VudaXlHqmeUs.png?width=640&crop=smart&auto=webp&s=bc3d05211e45d05ff6487a6d17dcf7abcbfa24ca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NY4G_CQMFkp6Xz1xZSEU_FuEbuXokZ-VudaXlHqmeUs.png?width=960&crop=smart&auto=webp&s=2539727992fea31ef1c638439f09ff885d1954a8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NY4G_CQMFkp6Xz1xZSEU_FuEbuXokZ-VudaXlHqmeUs.png?width=1080&crop=smart&auto=webp&s=e0bd760987ed43350c0002c10b9a1027612f3203', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NY4G_CQMFkp6Xz1xZSEU_FuEbuXokZ-VudaXlHqmeUs.png?auto=webp&s=20f5e85d19db73f332d910fdbbe880e96045b967', 'width': 1200}, 'variants': {}}]} |
Help with ttx 5 | 0 | We're looking for someone to guide or help us clone an ElevenLabs voice to perfection in some TTS model. Reward offered for the help :)
Translating text within an image (outputting an image) | 4 | I am trying to solve an issue of being able to translate an image that contains text, so that the output is an image of the same appearance and similar font/style of text but in a different language. So far I haven't been able to find a model that does this natively.
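One way to approach this without a single end-to-end model (an assumed pipeline, not something any specific tool is confirmed to provide): OCR the text regions, machine-translate each string, inpaint the original text away, then re-render the translations. A toy sketch of the data flow, with every step stubbed out:

```python
# Assumed pipeline for in-image translation: detect text boxes, translate,
# erase the originals, re-render. The step functions are toy stubs; a real
# build would plug in an OCR engine, an MT model, and an inpainting model.
def translate_image(image, ocr, translate, inpaint, render):
    boxes = ocr(image)                                  # [(bbox, text), ...]
    out = inpaint(image, [bbox for bbox, _ in boxes])   # erase source text
    for bbox, text in boxes:
        out = render(out, bbox, translate(text))        # draw translation
    return out

# Toy run with dict-based stubs to show the data flow:
result = translate_image(
    {"pixels": "raw", "texts": ["Hallo"]},
    ocr=lambda im: [((0, 0, 10, 5), "Hallo")],
    translate=lambda t: {"Hallo": "Hello"}[t],
    inpaint=lambda im, boxes: {**im, "texts": []},
    render=lambda im, bbox, t: {**im, "texts": im["texts"] + [t]},
)
print(result["texts"])  # ['Hello']
```

In practice each stub is backed by a real component, and matching the original font and style in the render step is usually the hardest part.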
Do you have any recommendations on how to achieve such a thing? Perhaps even without an LLM, with a classic ML model? | 2025-10-01T16:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/1nvcwqb/translating_text_within_an_image_outputting_an/ | SuddenWerewolf7041 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvcwqb | false | null | t3_1nvcwqb | /r/LocalLLaMA/comments/1nvcwqb/translating_text_within_an_image_outputting_an/ | false | false | self | 4 | null |
Hi guys, im a newbie in this app, is there any way i can use plugins maybe to make the model gen tokens faster? and maybe make it accept images? | 0 | I'm using "Dolphin Mistral 24B" and my PC is weak, so I was wondering if there is some way to make it faster.
thanks! | 2025-10-01T16:23:50 | https://www.reddit.com/r/LocalLLaMA/comments/1nvctpx/hi_guys_im_a_newbie_in_this_app_is_there_any_way/ | magach6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvctpx | false | null | t3_1nvctpx | /r/LocalLLaMA/comments/1nvctpx/hi_guys_im_a_newbie_in_this_app_is_there_any_way/ | false | false | self | 0 | null |
We're building a local OpenRouter: Auto-configure the best LLM engine on any PC | 218 | Lemonade is a local LLM server-router that auto-configures high-performance inference engines for your computer. We don't just wrap llama.cpp, we're here to wrap everything!
We started out building an OpenAI-compatible server for AMD NPUs and quickly found that users and devs want flexibility, so we kept adding support for more devices, engines, and operating systems.
What was once a single-engine server evolved into a server-router, like OpenRouter but 100% local. Today's v8.1.11 release adds another inference engine and another OS to the list!
-----------------------------------
## 🚀 FastFlowLM
- The FastFlowLM inference engine for AMD NPUs is fully integrated with Lemonade for Windows Ryzen AI 300-series PCs.
- Switch between ONNX, GGUF, and FastFlowLM models from the same Lemonade install with one click.
- Shoutout to TWei, Alfred, and Zane for supporting the integration!
-----------------------------------
## 🍎 macOS / Apple Silicon
- PyPI installer for M-series macOS devices, with the same experience available on Windows and Linux.
- Taps into llama.cpp's Metal backend for compute.
-----------------------------------
## 🤝 Community Contributions
- Added a stop button, chat auto-scroll, custom vision model download, model size info, and UI refinements to the built-in web UI.
- Support for gpt-oss's reasoning style, changing context size from the tray app, and refined the .exe installer.
- Shoutout to kpoineal, siavashhub, ajnatopic1, Deepam02, Kritik-07, RobertAgee, keetrap, and ianbmacdonald!
-----------------------------------
## 🤖 What's Next
- Popular apps like Continue, Dify, Morphik, and more are integrating with Lemonade as a native LLM provider, with more apps to follow.
- Should we add more inference engines or backends? Let us know what you'd like to see.
-----------------------------------
GitHub/Discord links in the comments. Check us out and say hi if the project direction sounds good to you. The community's support is what empowers our team at AMD to expand across different hardware, engines, and OSs. | 2025-10-01T16:13:17 | jfowers_amd | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvcjkr | false | null | t3_1nvcjkr | /r/LocalLLaMA/comments/1nvcjkr/were_building_a_local_openrouter_autoconfigure/ | false | false | 218 | {'enabled': True, 'images': [{'id': 'cH_k0RDEsALQZ04BUDqlqpAOatZ1wQHqCOIxQ9w9feI', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/fe4322p9yisf1.png?width=108&crop=smart&auto=webp&s=f9eb22a840608f6e8a6d0c447512eeb086158912', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/fe4322p9yisf1.png?width=216&crop=smart&auto=webp&s=e70a9e1c54e39ac8c15c242ce059ed49d8d1e71d', 'width': 216}, {'height': 154, 'url': 'https://preview.redd.it/fe4322p9yisf1.png?width=320&crop=smart&auto=webp&s=d74971f4c129e3f9c5b358da9a9bc1c79455a736', 'width': 320}, {'height': 309, 'url': 'https://preview.redd.it/fe4322p9yisf1.png?width=640&crop=smart&auto=webp&s=63b8433dff7ec591d237dcfae3b32ef0a530e5c4', 'width': 640}, {'height': 464, 'url': 'https://preview.redd.it/fe4322p9yisf1.png?width=960&crop=smart&auto=webp&s=03dbb3f9be0e3b90cb3cf3f75fd4320ada04a7c5', 'width': 960}, {'height': 522, 'url': 'https://preview.redd.it/fe4322p9yisf1.png?width=1080&crop=smart&auto=webp&s=28c481c0e0f06887698c25000f0ec3ff333a6ea7', 'width': 1080}], 'source': {'height': 790, 'url': 'https://preview.redd.it/fe4322p9yisf1.png?auto=webp&s=d7276b38f472d5b3aedab18c1788afc0aac5bde6', 'width': 1633}, 'variants': {}}]} | ||
The Dragon Hatchling: The Missing Link between the Transformer and Models of the Brain | 26 | [https://arxiv.org/html/2509.26507v1](https://arxiv.org/html/2509.26507v1)
A very interesting paper from the guys supported by Łukasz Kaiser, one of the co-authors of the seminal Transformers paper from 2017. | 2025-10-01T15:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1nvc5eq/the_dragon_hatchling_the_missing_link_between_the/ | Salty-Garage7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvc5eq | false | null | t3_1nvc5eq | /r/LocalLLaMA/comments/1nvc5eq/the_dragon_hatchling_the_missing_link_between_the/ | false | false | self | 26 | null |
Eclaire – Open-source, privacy-focused AI assistant for your data | 29 | https://reddit.com/link/1nvc4ad/video/q423v4jovisf1/player
Hi all, this is a project I've been working on for some time. It started as a personal AI to help manage growing amounts of data - bookmarks, photos, documents, notes, etc. All in one place.
Once the data gets added to the system, it gets processed: fetching bookmarks, tagging, classification, image analysis, text extraction / OCR, and more. The AI is then able to work with those assets to perform search, answer questions, create new items, etc. You can also create scheduled / recurring tasks to assign to the AI.
Using llama.cpp with Qwen3-14B by default for the assistant backend and Gemma3-4B for the workers' multimodal processing. You can easily swap in other models.
* Demo: [https://eclaire.co/#demo](https://eclaire.co/#demo)
* Code: [https://github.com/eclaire-labs/eclaire](https://github.com/eclaire-labs/eclaire)
MIT Licensed. Feedback and contributions welcome! | 2025-10-01T15:57:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nvc4ad/eclaire_opensource_privacyfocused_ai_assistant/ | dorali8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvc4ad | false | null | t3_1nvc4ad | /r/LocalLLaMA/comments/1nvc4ad/eclaire_opensource_privacyfocused_ai_assistant/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': '7p206NDe120lxhrc6n4JVw5doWuQzfnDXGAVCvWvfRk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7p206NDe120lxhrc6n4JVw5doWuQzfnDXGAVCvWvfRk.jpeg?width=108&crop=smart&auto=webp&s=1bea97ef1ca102deb96681578bc7afa2755ff7b4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7p206NDe120lxhrc6n4JVw5doWuQzfnDXGAVCvWvfRk.jpeg?width=216&crop=smart&auto=webp&s=07116d2fe09354374bcc834ad6f1a16d6aab2acc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7p206NDe120lxhrc6n4JVw5doWuQzfnDXGAVCvWvfRk.jpeg?width=320&crop=smart&auto=webp&s=9e05fc47eae0629926d07d51eda9fe3e554b7b80', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7p206NDe120lxhrc6n4JVw5doWuQzfnDXGAVCvWvfRk.jpeg?width=640&crop=smart&auto=webp&s=9afd82e1b3769f8671a17e5be4476f289edccc48', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7p206NDe120lxhrc6n4JVw5doWuQzfnDXGAVCvWvfRk.jpeg?width=960&crop=smart&auto=webp&s=9c0e1d43e83195d309cde4a165f30dc7a2403597', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7p206NDe120lxhrc6n4JVw5doWuQzfnDXGAVCvWvfRk.jpeg?width=1080&crop=smart&auto=webp&s=87ee31b97c5037b680d348653be72142a3400f20', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/7p206NDe120lxhrc6n4JVw5doWuQzfnDXGAVCvWvfRk.jpeg?auto=webp&s=89fef5fc99795bf52fe0b41f9dfb3c7631475628', 'width': 1200}, 'variants': {}}]} |
So has anyone actually tried Apriel-v1.5-15B? | 29 | It's obvious it isn't on R1's level. But honestly, if we get a model that performs insanely well at 15B, then it truly is something for this community. The Artificial Analysis Intelligence Index has recently put a lot of weight on tool calling and instruction following, so having a very reliable model there is a plus.
Can’t personally do this because I don’t have 16GB :( | 2025-10-01T15:47:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nvbu3h/so_has_anyone_actually_tried_aprielv1515b/ | MKU64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvbu3h | false | null | t3_1nvbu3h | /r/LocalLLaMA/comments/1nvbu3h/so_has_anyone_actually_tried_aprielv1515b/ | false | false | self | 29 | null |
Looking for contributors to PipesHub (open-source platform for AI Agents) | 8 | Teams across the globe are building AI Agents. AI Agents need context and tools to work well.
We’ve been building **PipesHub**, an open-source developer platform for AI Agents that need real enterprise context scattered across multiple business apps. Think of it like the open-source alternative to Glean but designed for developers, not just big companies.
Right now, the project is growing fast (crossed 1,000+ GitHub stars in just a few months) and we’d love more contributors to join us.
We support almost all major native Embedding and Chat Generator models and OpenAI-compatible endpoints. Users can connect to Google Drive, Gmail, OneDrive, SharePoint Online, Confluence, Jira and more.
Some cool things you can help with:
* Improve support for Local Inferencing - Ollama, vLLM, LM Studio, oLLM
* Improving our RAG pipeline with more robust Knowledge Graphs and filters
* Providing tools to Agents like Web search, Image Generator, CSV, Excel, Docx, PPTX, Coding Sandbox, etc
* Universal MCP Server
* Adding Memory, Guardrails to Agents
* Improving REST APIs
* SDKs for python, typescript, other programming languages
* Docs, examples, and community support for new devs
We’re trying to make it super easy for devs to spin up AI pipelines that actually work in production, with trust and explainability baked in.
👉 Repo: [https://github.com/pipeshub-ai/pipeshub-ai](https://github.com/pipeshub-ai/pipeshub-ai)
You can join our Discord group for more details or pick items from GitHub issues list. | 2025-10-01T15:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/1nvbiqa/looking_for_contributors_to_pipeshub_opensource/ | Effective-Ad2060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvbiqa | false | null | t3_1nvbiqa | /r/LocalLLaMA/comments/1nvbiqa/looking_for_contributors_to_pipeshub_opensource/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM', 'resolutions': [{'height': 96, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?width=108&crop=smart&auto=webp&s=63a546b8ac654187ee9b0d14224e852ef0c3d692', 'width': 108}], 'source': {'height': 99, 'url': 'https://external-preview.redd.it/hO1BK6bS_4mNYaGVC084UtT7OL1PkuHl2mbg6ueHrQM.jpeg?auto=webp&s=47e8987d3d53065768b4c796fa5af51c7a36d470', 'width': 111}, 'variants': {}}]} |
Tutorial: Matrix Core Programming on AMD CDNA3 and CDNA4 architecture | 15 | Hi all,
I'm excited to announce my new tutorial on programming Matrix Cores in HIP. The blog post is very educational and contains necessary knowledge to start programming Matrix Cores, covering modern low-precision floating-point types, the Matrix Core compiler intrinsics, and the data layouts required by the Matrix Core instructions. I tried to make the tutorial easy to follow and, as always, included lots of code examples and illustrations. I hope you will enjoy it! Please let me know if there are any other technical ROCm/HIP-related topics you would like to hear more about!
Link: [https://salykova.github.io/matrix-cores-cdna](https://salykova.github.io/matrix-cores-cdna) | 2025-10-01T15:24:46 | salykova_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvb8d9 | false | null | t3_1nvb8d9 | /r/LocalLLaMA/comments/1nvb8d9/tutorial_matrix_core_programming_on_amd_cdna3_and/ | false | false | default | 15 | {'enabled': True, 'images': [{'id': 'up8q1u00qisf1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/up8q1u00qisf1.png?width=108&crop=smart&auto=webp&s=80932ee49e04d33e588f6fbf28bcae2e0de8e613', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/up8q1u00qisf1.png?width=216&crop=smart&auto=webp&s=54f4a9ecef53694dcdfcc9e3d8963642944809c0', 'width': 216}, {'height': 273, 'url': 'https://preview.redd.it/up8q1u00qisf1.png?width=320&crop=smart&auto=webp&s=d461f7f7146fa6e8113dca1979f3e5d71729aca3', 'width': 320}, {'height': 547, 'url': 'https://preview.redd.it/up8q1u00qisf1.png?width=640&crop=smart&auto=webp&s=39781e64f363eb4ec35ac98f64aae0c9bacdcd4d', 'width': 640}, {'height': 821, 'url': 'https://preview.redd.it/up8q1u00qisf1.png?width=960&crop=smart&auto=webp&s=cc7a36d013ceab5f29fa038531088a8d58ac1545', 'width': 960}, {'height': 924, 'url': 'https://preview.redd.it/up8q1u00qisf1.png?width=1080&crop=smart&auto=webp&s=b80b977ccaf7174d5a4c7ee3a9df9cb5ee26a175', 'width': 1080}], 'source': {'height': 1191, 'url': 'https://preview.redd.it/up8q1u00qisf1.png?auto=webp&s=a18a42fd7ffc21a3717a6c297b93019ce4af8bc9', 'width': 1392}, 'variants': {}}]} | |
Looking for on-premise baremetal GPU server rental (A6000) in Paris region | 4 | Hi everyone,
I’m currently looking to rent a rackmount GPU server (preferably with NVIDIA RTX A6000) for a short period (1 month or more).
Just to clarify: I’m not looking for a “bare metal” server hosted in a datacenter (OVH, Scaleway, etc.). What I need is a physical baremetal server delivered and installed on-premise in my own location in the Paris area.
Basically, I want the machine physically available as if I had bought it, but on a rental basis.
If you know any providers, system integrators, or companies in the region that offer this kind of on-premise GPU server rental, I’d greatly appreciate any contacts, leads, or feedback.
Thanks in advance | 2025-10-01T15:22:19 | https://www.reddit.com/r/LocalLLaMA/comments/1nvb62v/looking_for_onpremise_baremetal_gpu_server_rental/ | ebkam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvb62v | false | null | t3_1nvb62v | /r/LocalLLaMA/comments/1nvb62v/looking_for_onpremise_baremetal_gpu_server_rental/ | false | false | self | 4 | null |
Looking for on-premise baremetal A6000 GPU server rental in the Paris region | 1 | Hi everyone,
I'm currently looking for a rackmount GPU server (NVIDIA RTX A6000 type) available for rent, for a short period (1 month or more).
To clarify: I'm not looking for a "bare metal" server hosted in a datacenter (OVH, Scaleway & co), but a physical server delivered and installed on-premise, in my own offices in the Paris region.
The idea is to have the machine physically available, as if it were a purchase, but on a rental basis.
If you know any providers, integrators, resellers, or companies specialized in IT/AI hardware rental that offer this kind of service in Île-de-France, I'd be glad to hear your contacts or feedback.
Thanks in advance for any leads | 2025-10-01T15:20:18 | https://www.reddit.com/r/LocalLLaMA/comments/1nvb430/recherche_location_serveur_gpu_a6000_baremetal/ | ebkam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvb430 | false | null | t3_1nvb430 | /r/LocalLLaMA/comments/1nvb430/recherche_location_serveur_gpu_a6000_baremetal/ | false | false | self | 1 | null |
Tutorial: Matrix Core Programming on AMD CDNA3 and CDNA4 architecture | 1 | Hi all,
I'm excited to announce my new tutorial on programming Matrix Cores in HIP. The blog post is very educational and contains necessary knowledge to start programming Matrix Cores, covering modern low-precision floating-point types, the Matrix Core compiler intrinsics, and the data layouts required by the Matrix Core instructions. I tried to make the tutorial easy to follow and, as always, included lots of code examples and illustrations. I hope you will enjoy it! Please let me know if there are any other technical ROCm/HIP-related topics you would like to hear more about!
Link: [https://salykova.github.io/matrix-cores-cdna](https://salykova.github.io/matrix-cores-cdna) | 2025-10-01T15:18:22 | salykova_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvb273 | false | null | t3_1nvb273 | /r/LocalLLaMA/comments/1nvb273/tutorial_matrix_core_programming_on_amd_cdna3_and/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'dsjipq6zoisf1', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/dsjipq6zoisf1.png?width=108&crop=smart&auto=webp&s=0be09f1a7fce6b2bc0ff7d85eb8572509e7562d0', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/dsjipq6zoisf1.png?width=216&crop=smart&auto=webp&s=2f0b8171e260772143cddf9836ba89053be72741', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/dsjipq6zoisf1.png?width=320&crop=smart&auto=webp&s=50dd8fb64b055091fdac1596940ae28f965cff3f', 'width': 320}, {'height': 571, 'url': 'https://preview.redd.it/dsjipq6zoisf1.png?width=640&crop=smart&auto=webp&s=cc55e120ea4da414c302a9328cb1f3a72a25eb54', 'width': 640}, {'height': 857, 'url': 'https://preview.redd.it/dsjipq6zoisf1.png?width=960&crop=smart&auto=webp&s=8df7f2b40b25eb08b02dfee37d43a99dd1eaa558', 'width': 960}], 'source': {'height': 931, 'url': 'https://preview.redd.it/dsjipq6zoisf1.png?auto=webp&s=6a8947fe624248572d2259844b65c9835d25f22e', 'width': 1042}, 'variants': {}}]} | |
OLLAMA takes forever to download on a Linux server | 1 | Hi,
I'm trying to download Ollama to my Ubuntu 22.04 Linux server. The download takes ages; it even shows 6 hours remaining. Is this normal?
\-> curl -fsSL [https://ollama.com/install.sh](https://ollama.com/install.sh) | sh
I used this command to check the download time:
\-> curl -L --http1.1 -o /tmp/ollama-linux-amd64.tgz [https://ollama.com/download/ollama-linux-amd64.tgz](https://ollama.com/download/ollama-linux-amd64.tgz)
I'm downloading via PuTTY (SFTP protocol), firewall enabled.
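For scale, a back-of-envelope check (the archive size here is my rough assumption): at a true 1 Gbps, even a ~2 GB tarball should arrive in seconds, so a 6-hour ETA implies the effective throughput is a tiny fraction of the line rate.

```python
# Sanity check on the numbers in the post: ideal transfer time at line
# rate vs. the effective rate implied by a 6-hour ETA. The 2 GB archive
# size is an estimate, not a figure from the post.
size_gb = 2.0                      # assumed tarball size
line_gbps = 1.0                    # stated network bandwidth
ideal_s = size_gb * 8 / line_gbps  # seconds at full line rate
eta_s = 6 * 3600                   # the ETA the download showed
effective_mbps = size_gb * 8000 / eta_s
print(ideal_s, round(effective_mbps, 2))  # 16.0 0.74
```

If the implied rate is under ~1 Mbps, the bottleneck is almost certainly upstream (CDN routing, ISP throttling, or a proxy/firewall), not the server hardware.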
Hardware parameters:
Processor: AMD EPYC 4464P - 12c/24t - 3.7 GHz/5.4 GHz
Ram: 192 GB 3600 MHz
Disk: 960 GB SSD NVMe
GPU: None
Network bandwidth: 1 Gbps | 2025-10-01T15:08:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nvastb/ollama_takes_forever_to_download_on_a_linux_server/ | klauses3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvastb | false | null | t3_1nvastb | /r/LocalLLaMA/comments/1nvastb/ollama_takes_forever_to_download_on_a_linux_server/ | false | false | self | 1 | null |
Local is the future | 0 | After what happened with claude code last month, and now this
https://arxiv.org/abs/2509.25559
A study by a radiologist testing different online LLMs (through the chat interface) found only 33% accuracy.
Anyone in healthcare knows that the current capabilities of AI surpass human understanding.
The online models are simply unreliable... Local is the future | 2025-10-01T14:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1nv9y0c/local_is_the_future/ | Conscious_Nobody9571 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nv9y0c | false | null | t3_1nv9y0c | /r/LocalLLaMA/comments/1nv9y0c/local_is_the_future/ | false | false | self | 0 | null |
I used Llama 3.3 70b versatile to build Examsprint AI | 0 | I am Aadarsh Pandey, a 13-year-old from India, and the developer and founder of Examsprint AI. Examsprint AI is a free AI tool built to help students from classes 9-12 excel in their studies by providing all resources free and downloadable.
Features of Examsprint AI:
Chapters and topics list
Direct NCERT Links
Practice questions in form of Flashcards specialised for each chapter\[For Class 11 and 12\]
Personal AI chatbot to SOLVE any type of Question regarding Physics, Chemistry, Biology and Maths
TOPPER'S Notes\[ Variety from class 9 to 12\]
Specialised TOPPER'S HANDWRITTEN NOTES with Interactive AI notes for better understanding.
NOTES ARE AVAILABLE IN BOTH VIEWABLE AND FREE DOWNLOADABLE FORMS.
NCERT BACK EXERCISE SOLUTIONS
GET BLUEPRINT OF SCHOOL EXAMS
GET BLUEPRINT OF BOARDS EXAMS
GET BLUEPRINT OF NEET-JEE EXAMS
GET BLOGS
GET STUDENTS QUERIES
GET AI CHATBOT THAT CAN ALSO GIVE YOU FLOWCHART AND VISUAL REPRESENTATION WITH YOUR QUESTION FOR BETTER UNDERSTANDING
SOF OLYMPIADS PYQ COMING SOON
FORMULA SHEET
BOARDS ARENA COMING SOON
STUDY AND LIGHT MODE PRESENT
JEE/NEET ARENA COMING SOON
ABSOLUTELY FREE OF COST
CAN USE WITHOUT SIGNING IN
FAQ's for INSTANT DOUBT-solving regarding USE and WEBSITE
BEST SITE FOR STUDY | 2025-10-01T14:28:30 | https://www.reddit.com/r/LocalLLaMA/comments/1nv9q7q/i_used_llama_33_70b_versatile_to_build_examsprint/ | Just_Review_7972 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nv9q7q | false | null | t3_1nv9q7q | /r/LocalLLaMA/comments/1nv9q7q/i_used_llama_33_70b_versatile_to_build_examsprint/ | false | false | self | 0 | null |
Am i seeing this Right? | 138 | It would be really cool if Unsloth provided quants for Apriel-v1.5-15B-Thinker
(Sorted by opensource, small and tiny) | 2025-10-01T13:43:53 | https://www.reddit.com/gallery/1nv8l6o | Brave-Hold-9389 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nv8l6o | false | null | t3_1nv8l6o | /r/LocalLLaMA/comments/1nv8l6o/am_i_seeing_this_right/ | false | false | 138 | null | |
I built a private, multi-user RAG app that runs 100% offline. Here's the technical deep-dive | 0 | 2025-10-01T13:29:05 | https://medium.com/data-science-collective/how-i-built-a-private-multi-user-chat-with-your-documents-app-that-runs-100-offline-713e10da573a | Prudent-Meringue845 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1nv8889 | false | null | t3_1nv8889 | /r/LocalLLaMA/comments/1nv8889/i_built_a_private_multiuser_rag_app_that_runs_100/ | false | false | default | 0 | null | |
GLM-4.5V model locally for computer use | 26 | On OSWorld-V, it scores 35.8% - beating UI-TARS-1.5, matching Claude-3.7-Sonnet-20250219, and setting SOTA for fully open-source computer-use models.
Run it with Cua either locally via Hugging Face or remotely via OpenRouter.
GitHub: https://github.com/trycua
Docs + examples: https://docs.trycua.com/docs/agent-sdk/supported-agents/computer-use-agents#glm-45v | 2025-10-01T13:27:48 | https://v.redd.it/6ff5zu1a5isf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nv873m | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/6ff5zu1a5isf1/DASHPlaylist.mpd?a=1761917282%2CMzAxZjMyMGU3YTg5NmU1YjZjYWIxZDVjYjMwOTkxMDQ3MDdkMmQ2NDMwMzViYTgxNGZlYzEwOWIxYzgxYjMzOA%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/6ff5zu1a5isf1/DASH_360.mp4?source=fallback', 'has_audio': True, 'height': 278, 'hls_url': 'https://v.redd.it/6ff5zu1a5isf1/HLSPlaylist.m3u8?a=1761917282%2CMmFiNzYzYjkwZWNhNGM0OGQ1ZmVlZDUzZTRhN2VlYWFhZGZmNzU4MWYxMjYzODg3NTQyOWYzMjMzODc1NjAxMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6ff5zu1a5isf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 640}} | t3_1nv873m | /r/LocalLLaMA/comments/1nv873m/glm45v_model_locally_for_computer_use/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'NDc4azRudDk1aXNmMbWADQNBSkImjVESNjfi_q43l9ostHKNGAFX_QJdfnS0', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/NDc4azRudDk1aXNmMbWADQNBSkImjVESNjfi_q43l9ostHKNGAFX_QJdfnS0.png?width=108&crop=smart&format=pjpg&auto=webp&s=3a52c853396f5e2fedafe040b8d836aba035e663', 'width': 108}, {'height': 94, 'url': 'https://external-preview.redd.it/NDc4azRudDk1aXNmMbWADQNBSkImjVESNjfi_q43l9ostHKNGAFX_QJdfnS0.png?width=216&crop=smart&format=pjpg&auto=webp&s=2e46f83c7d07513e7b5cd1f52168b8c8ea0ed680', 'width': 216}, {'height': 139, 'url': 'https://external-preview.redd.it/NDc4azRudDk1aXNmMbWADQNBSkImjVESNjfi_q43l9ostHKNGAFX_QJdfnS0.png?width=320&crop=smart&format=pjpg&auto=webp&s=aeeafb948ea7648a23a708d1c9055f6401483434', 'width': 320}, {'height': 278, 'url': 
'https://external-preview.redd.it/NDc4azRudDk1aXNmMbWADQNBSkImjVESNjfi_q43l9ostHKNGAFX_QJdfnS0.png?width=640&crop=smart&format=pjpg&auto=webp&s=1684561d53577e1b9609ca211e860541f9336993', 'width': 640}], 'source': {'height': 372, 'url': 'https://external-preview.redd.it/NDc4azRudDk1aXNmMbWADQNBSkImjVESNjfi_q43l9ostHKNGAFX_QJdfnS0.png?format=pjpg&auto=webp&s=579b00d61a69d6575afe6b88c274becbc03e425e', 'width': 854}, 'variants': {}}]} | |
I have an AMD MI100 32GB GPU lying around. Can I put it in a pc? | 3 | I was using the GPU a couple of years ago when it was in a HP server (don't remember the server model), mostly for Stable Diffusion. The server was high-spec cpu and RAM, so the IT guys in our org requisitioned it and ended up creating VMs for multiple users who wanted the CPU and RAM more than the GPU.
MI100 does not work with virtualization and does not support pass-through, so it ended up just sitting in the server but I had no way to access it.
I got a desktop with a 3060 instead and I've been managing my LLM requirements with that.
Pretty much forgot about the MI100 till I recently saw a post about llama.cpp improving speed on ROCM. Now I'm wondering if I could get the GPU out and maybe get it to run on a normal desktop rather than a server.
I'm thinking if I could get something like a HP Z1 G9 with maybe 64gb RAM, an i5 14th gen and a 550W PSU, I could probably fit the MI100 in there. I have the 3060 sitting in a similar system right now. MI100 has a power draw of 300W but the 550W PSU should be good enough considering the CPU only has a TDP of 65W. But the MI100 is an inch longer than the 3060 so I do need to check if it will fit in the chassis.
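The power budget above can be sanity-checked with quick arithmetic (the 75 W overhead figure for board, RAM, NVMe, and fans is my own assumption):

```python
# Back-of-envelope PSU check for the proposed build: MI100 at 300 W TBP
# plus a 65 W TDP CPU (figures from the post), plus an assumed platform
# overhead for motherboard, RAM, NVMe, and fans.
gpu_w = 300
cpu_w = 65
overhead_w = 75          # assumed: motherboard, RAM, NVMe, fans
total_w = gpu_w + cpu_w + overhead_w
psu_w = 550
print(total_w, psu_w - total_w)  # 440 110
```

That leaves roughly 100 W of headroom on a 550 W unit, though transient GPU spikes make a quality PSU with two separate 8-pin PCIe leads preferable to a single daisy-chained cable.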
Aside from that, anyone have any experience with running an MI100 in a desktop? Are MI100s compatible only with specific motherboards or will any reasonably recent motherboard work? The MI100 spec sheet gives a small list of servers it is supposed to be verified to work on, so no idea if it works on generic desktop systems as well.
Also any idea what kind of connectors the MI100 needs? It seems to have 2 8-pin connectors. Not sure if regular Desktop PSUs have those. Should I look for a CPU that supports AVX512 - does it really make an appreciable difference?
Anything else I should be watching out for? | 2025-10-01T13:18:38 | https://www.reddit.com/r/LocalLLaMA/comments/1nv7z4f/i_have_an_amd_mi100_32gb_gpu_lying_around_can_i/ | regstuff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nv7z4f | false | null | t3_1nv7z4f | /r/LocalLLaMA/comments/1nv7z4f/i_have_an_amd_mi100_32gb_gpu_lying_around_can_i/ | false | false | self | 3 | null |
I spent a few hours prompting LLMs for a pilot study of the "Confidence profile" of GPT-5 vs Qwen3-Max. Findings: GPT-5 is "cosmetically tuned" for confidence. Qwen3, despite meta awareness of its own precision level, defaults towards underconfidence without access to tools. | 65 | See examples of questions used and explanations of scales in the image. I will copy some of the text from the image here:
**GPT-5 findings:**
* Given a normal human prompt style (and the phrase “can you confidently..”), the model will have little meta awareness of its data quality, and will confidently hallucinate.
* Confidence dump / risk maximization prompt (i.e., emphasizing risk and reminding the model that it hallucinates):
* Consistently reduces confidence.
* Almost avoids hallucinations for the price of some underconfident refusals (false negatives)
**Suggesting “cosmetic” tuning:** Since hallucinations *can* be avoided via the preprompt, and models do have some assumption of precision for a question, it is likely that OpenAI is more afraid of the (“unimpressive”) occasional underconfidence than of the (“seemingly impressive”) consistent confident hallucinations.
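The two framings compared above can be sketched as simple prompt wrappers (the wording below is my own illustration, not the exact prompts used in the study):

```python
# Sketch of the two prompt framings from the pilot study. The exact
# wording here is illustrative, not the prompts actually used.
def risk_maximizing(question: str) -> str:
    """'Confidence dump': remind the model it hallucinates, stress risk."""
    return (
        "You sometimes hallucinate facts. A wrong answer here is costly. "
        "If you are not certain, say you don't know.\n\n" + question
    )

def confidence_boosting(question: str) -> str:
    """Opposite framing: it's OK to answer from weights alone."""
    return (
        "Answer from memory without looking anything up. "
        "It is acceptable to guess.\n\n" + question
    )

q = "Which year was X founded?"
print(risk_maximizing(q).startswith("You sometimes"))  # True
```

Running the same factual question through both wrappers is the cheapest way to reproduce the confidence gap described here on any local model.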
**Qwen3-Max findings:**
* Any sense of uncertainty will cause Qwen to want to look up facts.
* Any insinuation of required confidence, when lookup is not available, will cause an “inconfident” reply.
* Qwen generally needs to be clearly prompted with confidence boosting, and told that it's okay to hallucinate.
**Distrust of weights for hard facts:** In short, Qwen generally does not trust its weights to produce hard facts, except in some cases (thus allowing it to “override” looked up facts). | 2025-10-01T13:08:53 | partysnatcher | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nv7quz | false | null | t3_1nv7quz | /r/LocalLLaMA/comments/1nv7quz/i_spent_a_few_hours_prompting_llms_for_a_pilot/ | false | false | 65 | {'enabled': True, 'images': [{'id': 'BpBH0Z3m8G-0HRjvoNZAZER9DbmULRgyEUUDjuIy52Y', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/nqtw7wzx0isf1.png?width=108&crop=smart&auto=webp&s=52c84d31e686339935e891d8caf157d9bebe7393', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/nqtw7wzx0isf1.png?width=216&crop=smart&auto=webp&s=d2cbe34aa76b0f43c84dbda2b61bb400b9eba79e', 'width': 216}, {'height': 214, 'url': 'https://preview.redd.it/nqtw7wzx0isf1.png?width=320&crop=smart&auto=webp&s=6f765c3e4daf0e47b9085a53649729f7e49ee56f', 'width': 320}, {'height': 429, 'url': 'https://preview.redd.it/nqtw7wzx0isf1.png?width=640&crop=smart&auto=webp&s=2f5e9871c689bdb2e7267272c090e15a3fb22e17', 'width': 640}, {'height': 644, 'url': 'https://preview.redd.it/nqtw7wzx0isf1.png?width=960&crop=smart&auto=webp&s=11039db7a8a40cf7b6ad7903167c15c291620fa2', 'width': 960}, {'height': 724, 'url': 'https://preview.redd.it/nqtw7wzx0isf1.png?width=1080&crop=smart&auto=webp&s=b7948336d020c330aed9c8ea5dffa04f4808f20c', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://preview.redd.it/nqtw7wzx0isf1.png?auto=webp&s=e61dc122127e25fcca2a64d54363f9fe080648e4', 'width': 2012}, 'variants': {}}]} | ||
Want to get started with training LLMs for theorem proving (with 500-1000 USD budget), so what are my options? | 8 | Hi everyone,
I recently graduated from a Master's program in math at a German university. As I have always been interested in AI4Math and formal theorem proving (like Coq and Lean), I want to explore and get hands-on experience with training and applying LLMs to formal math. However, I have a rather limited budget, e.g., around 500 to 1000 USD.
After reading [this 3k post](https://www.reddit.com/r/LocalLLaMA/comments/1nuq4tr/spent_3k_building_the_open_source_models_you/), I realized that it may be possible to train some prover/math LLMs by myself, so I was wondering what are my options?
More specifically, I have the following questions:
1. How many and what size models could I reasonably train or fine-tune for theorem proving tasks (e.g. Lean and/or Coq)?
2. Would fine-tuning existing open models (e.g. LLaMA, Mistral, Qwen, etc.) on theorem-proving data count as “training”? Or do I need to attempt training something from scratch?
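On question 2: fine-tuning an existing open model on theorem-proving data is the realistic path at this budget, and the first concrete step is shaping (statement, proof) pairs into SFT examples. A minimal sketch (the field names follow the common chat-messages format; the instruction wording is my own):

```python
# Minimal sketch: turning (theorem statement, Lean proof) pairs into
# chat-style SFT examples for fine-tuning an open model. The prompt
# wording is illustrative; any consistent template works.
def to_sft_example(statement: str, proof: str) -> dict:
    return {
        "messages": [
            {"role": "user",
             "content": f"Complete the following Lean 4 proof:\n{statement}"},
            {"role": "assistant", "content": proof},
        ]
    }

ex = to_sft_example("theorem add_zero (n : Nat) : n + 0 = n", "by simp")
print(ex["messages"][1]["content"])  # by simp
```

From there, a parameter-efficient method such as LoRA/QLoRA on a 7B-class model should fit comfortably within a few hundred dollars of rented GPU time.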
Basically, I’m looking for the best path to get meaningful hands-on experience in this area without breaking the bank. Any recommendations from people who’ve done fine-tuning or small-scale training for formal math would be super helpful!
Many thanks! | 2025-10-01T13:05:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nv7npc/want_to_get_started_with_training_llms_for/ | hedgehog0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nv7npc | false | null | t3_1nv7npc | /r/LocalLLaMA/comments/1nv7npc/want_to_get_started_with_training_llms_for/ | false | false | self | 8 | null |
llms.py gets a UI | 1 | [removed] | 2025-10-01T13:01:03 | https://servicestack.net/posts/llms-py-ui | mythz
After the last few model releases, I know DeepSeek has the strongest model in the lab right now, but they don't want to release it because they don't want any more unwanted attention. | 0 | They are playing OpenAI's game.
This is not how Chinese labs usually play: they achieve something and launch it instantly. But I think DeepSeek took some damage, and I think they are waiting. | 2025-10-01T12:46:36 | Select_Dream634
I used Llama 3.3 70B to make NexNotes AI | 0 | NexNotes AI is an AI-powered note-taking and study tool that helps students and researchers learn faster. Key features include:
* Instant Note Generation: Paste links or notes and receive clean, smart notes instantly.
* AI-Powered Summarization: Automatically highlights important points within the notes.
* Quiz and Question Paper Generation: Create quizzes and question papers from study notes.
* Handwriting Conversion: Convert handwritten notes into digital text.
Ideal for:
* Students preparing for exams (NEET, JEE, board exams)
* Researchers needing to quickly summarize information
* Teachers looking for automated quiz generation tools
NexNotes AI stands out by offering a comprehensive suite of AI-powered study tools, from note creation and summarization to quiz generation, all in one platform, significantly boosting study efficiency. | 2025-10-01T12:45:43 | PutridBerry7521
What local models are useful for mental and emotional advice? | 0 | Since ChatGPT is broken asf, I want to try open source alternatives. I heard gpt oss 20b is good.
Are there more? | 2025-10-01T12:40:15 | WideAd1051
Tutorial: Matrix Core Programming on AMD CDNA3 and CDNA4 architecture | 6 | Hi all,
I'm excited to announce my new tutorial on programming Matrix Cores in HIP. The blog post is very educational and contains the knowledge needed to start programming Matrix Cores, covering modern low-precision floating-point types, the Matrix Core compiler intrinsics, and the data layouts required by the Matrix Core instructions. I tried to make the tutorial easy to follow and, as always, included lots of code examples and illustrations. I hope you will enjoy it! Please let me know if there are any other technical ROCm/HIP-related topics you would like to hear more about!
Link: [https://salykova.github.io/matrix-cores-cdna](https://salykova.github.io/matrix-cores-cdna) | 2025-10-01T12:25:03 | salykova_
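To give a flavor of the topic: an MFMA instruction computes a small matrix tile cooperatively, with each of the 64 lanes in a wavefront owning a few accumulator elements. The sketch below illustrates that idea in plain Python for a 16x16x16 tile. The per-lane index mapping here is hypothetical, for intuition only; the blog post documents the actual CDNA layouts:

```python
# Illustrative sketch (NOT the real hardware layout): 64 "lanes" of one
# wavefront compute D = A*B + C for a 16x16x16 tile, each lane owning 4 of
# the 256 output elements, the way an MFMA instruction distributes its
# accumulator registers across lanes.

N = 16
A = [[(i + j) % 7 for j in range(N)] for i in range(N)]
B = [[(i * j) % 5 for j in range(N)] for i in range(N)]
C = [[1] * N for _ in range(N)]

def lane_elements(lane):
    # Hypothetical mapping: lane l owns row l % 16, columns 4*(l//16)..4*(l//16)+3.
    row = lane % N
    return [(row, 4 * (lane // N) + k) for k in range(4)]

D = [[0] * N for _ in range(N)]
for lane in range(64):                  # one wavefront
    for (i, j) in lane_elements(lane):  # 4 accumulator values per lane
        D[i][j] = sum(A[i][k] * B[k][j] for k in range(N)) + C[i][j]

# Cross-check: the 64 lanes together cover the whole tile exactly once.
ref = [[sum(A[i][k] * B[k][j] for k in range(N)) + C[i][j] for j in range(N)]
       for i in range(N)]
assert D == ref
```

The point of the exercise is the coverage property: every output element is owned by exactly one lane, which is what the real per-instruction data layouts in the tutorial guarantee.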
Can anyone help me understand the difference between GLM 4.6 and GLM 4.5? Shall I switch to the new model? Anyone tried both models side by side? | 7 | So [Z.ai](http://Z.ai) launched GLM 4.6 yesterday. I have been using GLM 4.5 constantly for a while now and am quite comfortable with the model. But given today's benchmarks, GLM 4.6 definitely looks like a great upgrade over GLM 4.5. Is the model actually good, though? Has anyone used them side by side and can compare whether I should switch from GLM 4.5 to GLM 4.6? Switching will also require some prompt tuning on my end in my pipeline. | 2025-10-01T12:19:01 | Technical-Love-8479
MNN speed is awesome | 5 | I recently heard about the MNN project, so I compared it with llama.cpp and ik_llama.cpp on my phone. Is this magic?
Test environment: Snapdragon 680, Termux proot-distro, GCC 15.2.0 (flags: -O3 -ffast-math -fno-finite-math-only -flto)
Model: Qwen3-4B-Thinking-2507. Quantized to 4-bit (llama.cpp: Q4_0, MNN whatever it is), size is about 2.5GB on both.
I did an additional test on Qwen2.5-1.5B-Instruct; it runs at 24 t/s pp128 and 9.3 t/s tg128. | 2025-10-01T11:44:38 | Hungry_Prune_2605
don't sleep on Apriel-1.5-15b-Thinker and Snowpiercer | 82 | **Apriel-1.5-15b-Thinker** is a multimodal reasoning model in ServiceNow’s Apriel SLM series that achieves competitive performance against models 10 times its size. Apriel-1.5 is the second model in the reasoning series. It introduces enhanced textual reasoning capabilities and adds image reasoning support to the previous text model. It has undergone extensive continual pretraining across both text and image domains. In terms of post-training, this model has **undergone text-SFT only**. Our research demonstrates that with a strong mid-training regimen, we are able to achieve SOTA performance on text and image reasoning tasks without any image SFT training or RL.
**Highlights**
* Achieves a score of **52** on the Artificial Analysis index and is competitive with Deepseek R1 0528, Gemini-Flash etc.
* It is **AT LEAST 1 / 10** the size of any other model that scores > 50 on the Artificial Analysis index.
* Scores **68** on Tau2 Bench Telecom and **62** on IFBench, which are key benchmarks for the enterprise domain.
* At 15B parameters, the model fits on a single GPU, making it highly memory-efficient.
it was published yesterday
[https://huggingface.co/ServiceNow-AI/Apriel-1.5-15b-Thinker](https://huggingface.co/ServiceNow-AI/Apriel-1.5-15b-Thinker)
their previous model was
[https://huggingface.co/ServiceNow-AI/Apriel-Nemotron-15b-Thinker](https://huggingface.co/ServiceNow-AI/Apriel-Nemotron-15b-Thinker)
which is a base model for
[https://huggingface.co/TheDrummer/Snowpiercer-15B-v3](https://huggingface.co/TheDrummer/Snowpiercer-15B-v3)
which was published earlier this week :)
let's hope mr u/TheLocalDrummer will continue Snowpiercing
| 2025-10-01T11:41:17 | jacek2023
Looking for a web-based open-source Claude agent/orchestration framework (not for coding, just orchestration) | 2 | Hey folks,
I’m trying to find a **web-based, open-source agent framework that works like Anthropic’s Claude Code**, but my use case is **orchestration**, not code-gen or autonomous coding.
**What I’m after**
* A JS/Python framework where I can define **multi-step workflows / tools**, wire them into **agents**, and trigger runs.
* First-class **tool/function calling** (HTTP, DB, filesystem adapters, webhooks, etc.).
* **Stateful runs** with logs, trace/graph view, retries, and simple guardrails.
* **Self-hostable**, OSS license preferred.
* Plays nicely with paid models, and obviously a bonus if it can swap in local models for some steps. The idea is that open-source models will soon adhere to prompts just as well, so win-win.
**What I’ve looked at**
* Tooling-heavy stacks like LangChain/LangGraph, AutoGen, CrewAI, etc.: powerful, but I assume there are nuances that somebody may already have taken care of.
* Coding agents (OpenDevin/OpenHands): great for code workflows, but not what I need, and likely over-engineered for my use case.
**Question**
* Does anything OSS fit this niche?
* Pointers to repos/templates are super welcome. If nothing exists, what are you all composing together to get close?
Thanks! | 2025-10-01T11:41:04 | keniget
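For reference, the core of such an orchestrator is small enough to sketch in stdlib Python: a tool registry, a sequential workflow runner with per-step retries, and a run log. All names below are hypothetical and not taken from any of the frameworks mentioned above:

```python
# Minimal orchestration core (illustrative sketch, stdlib-only).

import time

TOOLS = {}

def tool(name):
    """Decorator registering a function as a named tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("http_get")
def http_get(state):
    # stub: a real HTTP adapter would call urllib/requests here
    state["body"] = f"fetched:{state['url']}"
    return state

@tool("extract")
def extract(state):
    state["result"] = state["body"].split(":", 1)[1]
    return state

def run_workflow(steps, state, max_retries=2):
    """Run tools in order, retrying each step, keeping a trace log."""
    log = []
    for name in steps:
        for attempt in range(max_retries + 1):
            try:
                state = TOOLS[name](state)
                log.append((name, attempt, "ok"))
                break
            except Exception as e:
                log.append((name, attempt, f"error:{e}"))
                if attempt == max_retries:
                    raise
                time.sleep(0)  # a real runner would back off here

    return state, log

state, log = run_workflow(["http_get", "extract"], {"url": "https://example.com"})
print(state["result"])
```

A web UI then only needs to persist `state` and `log` per run and render the trace, which is roughly what the heavier frameworks add on top of a loop like this.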
Train a SLM from scratch (not fine tune) | 7 |
I want to train a small language model from scratch. There are some books and some material on the internet about it, but most of them are just for educational purposes and don't highlight the real challenges.
On the web there is a consensus that it's possible to train a model like GPT-2 124M on consumer hardware; there are a lot of examples. But I would like to train it on real data in my language (Brazilian Portuguese), creating a foundation model to be fine-tuned in different domains.
Have any of you tried? I am stuck on problems like the amount of data needed, how to make the data diverse enough across domains, and how to decide the correct number of parameters for my domain.
Do you have any tips? | 2025-10-01T11:33:43 | andreclaudino
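On sizing: the Chinchilla rule of thumb (about 20 training tokens per parameter) and the standard C = 6*N*D FLOPs estimate give a quick feasibility check. The numbers below are planning heuristics, not hard requirements; the 40 TFLOPs and 30% utilization figures are assumptions for a single consumer GPU:

```python
# Back-of-envelope pretraining plan (heuristic sketch; all constants are
# assumptions for planning, not hard requirements).

def plan(params_m, tokens_per_param=20, flops_per_sec=40e12, utilization=0.3):
    """Return (training tokens, wall-clock GPU-days) for a params_m-million model."""
    n = params_m * 1e6
    tokens = tokens_per_param * n          # Chinchilla-style data budget
    flops = 6 * n * tokens                 # C ~= 6 * N * D
    days = flops / (flops_per_sec * utilization) / 86400
    return tokens, days

for size in (60, 124, 350):
    tokens, days = plan(size)
    print(f"{size}M params: ~{tokens/1e9:.1f}B tokens, ~{days:.1f} GPU-days "
          f"at 40 TFLOPs and 30% utilization")
```

By this estimate a GPT-2-sized (124M) Portuguese model needs roughly 2-3B tokens and a few GPU-days, which is why the data pipeline (deduplicated, domain-diverse Portuguese text) is usually the harder problem than the compute.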
InfiniteGPU - Open source Distributed AI Inference Platform | 4 | Hey! I've been working on a platform that addresses a problem many of us face: needing more compute power for AI inference without breaking the bank on cloud GPUs.
What is InfiniteGPU?
It's a distributed compute marketplace where people can:
* As Requestors: run ONNX models on a distributed network of providers' hardware at an attractive price
* As Providers: monetize idle GPU/CPU/NPU time by running inference tasks in the background
Think of it as "Uber for AI compute" - but actually working and with real money involved.
The platform is functional for ONNX model inference tasks. Perfect for:
* Running inference when your local GPU is maxed out
* Distributed batch processing of images/data
* Earning passive income from idle hardware
How It Works
* Requestors upload ONNX models and input data
* Platform splits work into subtasks and distributes to available providers
* Providers (desktop clients) automatically claim and execute subtasks
* Results stream back in real-time
What Makes This Different?
* Real money: Not crypto tokens
* Native performance: optimized with access to the neural processing unit or GPU when available
Try It Out
GitHub repo: [https://github.com/Scalerize/Scalerize.InfiniteGpu](https://github.com/Scalerize/Scalerize.InfiniteGpu)
The entire codebase is available - backend API, React frontend, and Windows desktop client.
Happy to answer any technical questions about the project! | 2025-10-01T11:17:14 | franklbt
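The "split work into subtasks" step above can be illustrated with a minimal sketch. This is not the platform's actual code; `run_on_provider` is a hypothetical stand-in for ONNX inference on a provider's device:

```python
# Illustrative sketch of subtask splitting and result reassembly.

def split_into_subtasks(inputs, chunk_size):
    """Chunk a batch of inputs into fixed-size subtasks (last one may be short)."""
    return [inputs[i:i + chunk_size] for i in range(0, len(inputs), chunk_size)]

def run_on_provider(chunk):
    # stand-in for ONNX inference running on a provider's GPU/NPU
    return [x * 2 for x in chunk]

inputs = list(range(10))
subtasks = split_into_subtasks(inputs, chunk_size=4)

results = []
for chunk in subtasks:          # in reality these run concurrently on providers
    results.extend(run_on_provider(chunk))

# Reassembled output matches running the whole batch in one place.
assert results == [x * 2 for x in inputs]
```

The real system additionally has to handle providers claiming subtasks, timeouts, and out-of-order completion, but the ordered-chunk invariant above is what makes reassembly deterministic.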
GLM-4.6-GGUF is out! | 1,039 | 2025-10-01T11:00:52 | TheAndyGeorge
Codex is amazing: it can fix code issues without the need for a constant approver. My setup: gpt-oss-20b on LM Studio. | 234 | 2025-10-01T10:37:00 | https://v.redd.it/1lusu36n9hsf1 | kyeoh1