title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
🇨🇳 Next great opensource model from China. | 1 | [removed] | 2025-08-05T03:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mhyg0e/next_great_opensource_model_from_china/ | xuejiazhao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhyg0e | false | null | t3_1mhyg0e | /r/LocalLLaMA/comments/1mhyg0e/next_great_opensource_model_from_china/ | false | false | 1 | null | |
Qwen-Image | 1 | [removed] | 2025-08-05T03:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mhycnc/qwenimage/ | xuejiazhao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhycnc | false | null | t3_1mhycnc | /r/LocalLLaMA/comments/1mhycnc/qwenimage/ | false | false | 1 | null | |
I built a tool that got 16K downloads, but no one uses the charts. Here's what they're missing. | 0 | A few months ago, I shared a GitHub CLI tool here for optimizing local LLM prompts. It quietly grew to 16K+ downloads — but most users skip the dashboard where all the real insights are.
Now, I’ve brought it back as a SaaS-powered prompt analytics layer — still CLI-first, still dev-friendly.
I recently built a tool c... | 2025-08-05T03:02:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mhxzdy/i_built_a_tool_that_got_16k_downloads_but_no_one/ | MobiLights | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhxzdy | false | null | t3_1mhxzdy | /r/LocalLLaMA/comments/1mhxzdy/i_built_a_tool_that_got_16k_downloads_but_no_one/ | false | false | 0 | {'enabled': False, 'images': [{'id': '4LsBzhGZ9Abov7rw0R6iAiqHWhYFbNfYU9nhF4zmqkU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4LsBzhGZ9Abov7rw0R6iAiqHWhYFbNfYU9nhF4zmqkU.jpeg?width=108&crop=smart&auto=webp&s=eee87d373fea3c9717fca18d6d6a4ca45e50a554', 'width': 108}, {'height': 108, 'url': '... | |
Light-IF-32B weights | 8 | I saw the weights were released (they were supposed to be [a couple days ago](https://www.reddit.com/r/LocalLLaMA/comments/1mghy1u/qihoo360lightif32b/), but [the upload failed](https://huggingface.co/qihoo360/Light-IF-32B/discussions/1#68901831d6b49f34311d92d3)), but nobody made GGUFs, so I used GGUF-my-repo (no imatri... | 2025-08-05T02:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mhxhi5/lightif32b_weights/ | DeProgrammer99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhxhi5 | false | null | t3_1mhxhi5 | /r/LocalLLaMA/comments/1mhxhi5/lightif32b_weights/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'tI1sapCmqbZGukHzbMC9_a1hbFiuJ7D5e252D-gJnrc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tI1sapCmqbZGukHzbMC9_a1hbFiuJ7D5e252D-gJnrc.png?width=108&crop=smart&auto=webp&s=5a54e357184df8fd6f26574e2c0b7b571a1e22ce', 'width': 108}, {'height': 116, 'url': 'h... |
A simple python module to scrape github api for agentic code generation | 4 | I just created this tool that might be useful to some. It's a multithreaded github scraper that can query the github api for different repository structures and label file contents in json format. It's meant for creating agentic ais which can query the files in the github repo to decide which file contents should be ad... | 2025-08-05T02:27:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mhx9d2/a_simple_python_module_to_scrape_github_api_for/ | Infinite_Mix_31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhx9d2 | false | null | t3_1mhx9d2 | /r/LocalLLaMA/comments/1mhx9d2/a_simple_python_module_to_scrape_github_api_for/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'pQdqaglLDwop7G3Q27fprUkgVqo8SOyl1xOdSjlOJSg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pQdqaglLDwop7G3Q27fprUkgVqo8SOyl1xOdSjlOJSg.png?width=108&crop=smart&auto=webp&s=2db040b2f1f6176ffd5713a6d7238bf73bd1f885', 'width': 108}, {'height': 108, 'url': 'h... |
[Student Project Help] Gemma 3 Vision (Unsloth) giving nonsense output — used official notebook | 1 | Hi everyone,
I'm a student working on a summer project involving multimodal models, and I’m currently testing **Gemma 3 Vision** with **Unsloth**. I used the **official vision inference notebook** (no major changes), loaded the model using `FastVisionModel.for_inference()`, and passed an image + prompt, but the output... | 2025-08-05T02:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mhx8cn/student_project_help_gemma_3_vision_unsloth/ | LeastExperience1579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhx8cn | false | null | t3_1mhx8cn | /r/LocalLLaMA/comments/1mhx8cn/student_project_help_gemma_3_vision_unsloth/ | false | false | 1 | null | |
I see people rushing to GLM Air GGUF's on this repo - what does this warning usually mean? I haven't seen a model flagged since we passed around pickled weights | 38 | 2025-08-05T02:17:40 | ForsookComparison | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhx1kc | false | null | t3_1mhx1kc | /r/LocalLLaMA/comments/1mhx1kc/i_see_people_rushing_to_glm_air_ggufs_on_this/ | false | false | default | 38 | {'enabled': True, 'images': [{'id': '8xcfsxcl14hf1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/8xcfsxcl14hf1.jpeg?width=108&crop=smart&auto=webp&s=1843680fc5241a1208ee966b3390bd7243129de3', 'width': 108}, {'height': 148, 'url': 'https://preview.redd.it/8xcfsxcl14hf1.jpeg?width=216&crop=smart&auto=w... | ||
A simple github scraper for toolformer integration | 1 | I just created this tool that might be useful to some. It's a multithreaded github scraper that can query the github api for different repository structures and label file contents in json format. It's meant for creating agentic ais which can query the files in the github repo to decide which file contents should be ad... | 2025-08-05T02:15:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mhx0ae/a_simple_github_scraper_for_toolformer_integration/ | Infinite_Mix_31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhx0ae | false | null | t3_1mhx0ae | /r/LocalLLaMA/comments/1mhx0ae/a_simple_github_scraper_for_toolformer_integration/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pQdqaglLDwop7G3Q27fprUkgVqo8SOyl1xOdSjlOJSg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pQdqaglLDwop7G3Q27fprUkgVqo8SOyl1xOdSjlOJSg.png?width=108&crop=smart&auto=webp&s=2db040b2f1f6176ffd5713a6d7238bf73bd1f885', 'width': 108}, {'height': 108, 'url': 'h... |
how to do a self-play data generation system | 1 | Even in the era of LLMs, it is still hard for an LLM to synthesize domain-specific data. I found the diversity of LLM-generated data to be very poor. From reading some papers, I think a self-play data generation agent may be useful. It needs a data generation block and a self-evaluation block. However... | 2025-08-05T01:44:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mhwb9g/how_to_do_a_selfplay_data_generation_system/ | tangbasky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhwb9g | false | null | t3_1mhwb9g | /r/LocalLLaMA/comments/1mhwb9g/how_to_do_a_selfplay_data_generation_system/ | false | false | self | 1 | null |
CODE RED RISKS IN PENTAGON SECURITY. | 0 | "CODE RED RISKS IN PENTAGON SECURITY." by Frederick Wakulyaka, posted on Aug 05, 2025:
Main Topic
The video addresses severe national security risks posed by the involvement of Chinese engineers in maintaining critical United States defense systems, particularly through contracts with technology giant Microsoft.... | 2025-08-05T01:16:20 | https://youtube.com/watch?v=X7uIFhU21CY&si=eV5yJ3mzWThjmjP6 | Curious_Candy851 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mhvp8s | false | {'oembed': {'author_name': 'Frederick Wakulyaka', 'author_url': 'https://www.youtube.com/@frederickwakulyaka4561', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/X7uIFhU21CY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encryp... | t3_1mhvp8s | /r/LocalLLaMA/comments/1mhvp8s/code_red_risks_in_pentagon_security/ | true | false | spoiler | 0 | null |
I built a one stop AI powered study solution | 6 | NexNotes AI is an AI-powered tool that helps you streamline your study and learning process. With a suite of features including mind maps, study plans, flowcharts, summaries, and quizzes, NexNotes AI empowers you to grasp complex information quickly and effectively. Whether you're a student, professional, or lifelong l... | 2025-08-05T00:55:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mhv99h/i_built_a_one_stop_ai_powered_study_solution/ | pls_Do_not_ban | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhv99h | false | null | t3_1mhv99h | /r/LocalLLaMA/comments/1mhv99h/i_built_a_one_stop_ai_powered_study_solution/ | false | false | self | 6 | null |
Open Discord Chat Dataset (+ Model): Internet Tone Dataset for LLMs | 1 | [removed] | 2025-08-05T00:33:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mhurp9/open_discord_chat_dataset_model_internet_tone/ | mookiezistudio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhurp9 | false | null | t3_1mhurp9 | /r/LocalLLaMA/comments/1mhurp9/open_discord_chat_dataset_model_internet_tone/ | false | false | self | 1 | null |
Help building a price-efficient inference server (no fine-tuning) + multi-5090 setup | 2 | Hey everyone,
I'm planning a new build primarily for running a 72B model using vLLM (Qwen 72B VL) with FP8. I'm thinking of using four RTX 5090s for the highest throughput. I also considered one RTX Pro 6000, but the inference speed is much slower and the cost similar. I get about 3x the throughput with 4x 5090 (... | 2025-08-05T00:11:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mhu9tx/help_building_an_price_efficient_inference_server/ | Civil-Image5411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhu9tx | false | null | t3_1mhu9tx | /r/LocalLLaMA/comments/1mhu9tx/help_building_an_price_efficient_inference_server/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'PkgpYMq-5_74dXyko9Zh9FOppnVlxdQc6WIclap5mWY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PkgpYMq-5_74dXyko9Zh9FOppnVlxdQc6WIclap5mWY.jpeg?width=108&crop=smart&auto=webp&s=d2ec29473ad9a43f57f6de38e719603168628711', 'width': 108}, {'height': 113, 'url': '...
Qwen Image quantization idea | 5 | This might be somewhat unusual, but if the goal of model quantization is to reduce the model size, what about quantizing only the text encoder? I found that the Qwen Image model consists of text encoders(Qwen2.5 VL) and diffusion transformers. If the text encoder is more robust to quantization than the diffusion, would... | 2025-08-04T23:38:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mhtjqo/qwen_image_quantization_idea/ | ExcuseAccomplished97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhtjqo | false | null | t3_1mhtjqo | /r/LocalLLaMA/comments/1mhtjqo/qwen_image_quantization_idea/ | false | false | self | 5 | null |
GLM 4.5 GGUFs are coming | 175 | FINALLY | 2025-08-04T23:26:08 | https://huggingface.co/mradermacher/GLM-4.5-Air-GGUF | Pro-editor-1105 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mht910 | false | null | t3_1mht910 | /r/LocalLLaMA/comments/1mht910/glm_45_ggufs_are_coming/ | false | false | default | 175 | {'enabled': False, 'images': [{'id': 'mPuzW0dQMIeYzrva9cFzmx9vYSbfW4-X3nbfzwnmTUI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mPuzW0dQMIeYzrva9cFzmx9vYSbfW4-X3nbfzwnmTUI.png?width=108&crop=smart&auto=webp&s=13c5f6e652403be83b71873e1bbc87da605d3006', 'width': 108}, {'height': 116, 'url': 'h... |
Need help optimizing GLM-4.5-Air 110B (Q4_K_M) on RTX 4090 + 64GB RAM - Getting only 3.37 TPS | 12 | Hey r/LocalLLaMA! I'm struggling to get decent performance from the new GLM-4.5-Air model and could use some help finding the optimal config.
**Hardware:**
* RTX 4090 (24GB VRAM)
* 64GB DDR4 RAM
* Using latest llama.cpp build (6088 with clang 19.1.5)
**Model:**
* DevQuasar/zai-org.GLM-4.5-Air-GGUF (Q4\_K\_M, 6 shar... | 2025-08-04T23:14:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mhsyv9/need_help_optimizing_glm45air_110b_q4_k_m_on_rtx/ | Pro-editor-1105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhsyv9 | false | null | t3_1mhsyv9 | /r/LocalLLaMA/comments/1mhsyv9/need_help_optimizing_glm45air_110b_q4_k_m_on_rtx/ | false | false | self | 12 | null |
How to use your Local Models to watch your screen. Open Source and Completely Free!! | 107 | **TLDR:** I built this **open source** and **local** app that lets your local models **watch your screen** and do stuff! It is now **suuuper easy to install** and use, to make local AI **accessible to** **everybody**!
Hey r/LocalLLaMA! I'm back with some Observer updates c: first of all **Thank You** so much for all o... | 2025-08-04T22:30:05 | https://v.redd.it/g3pod2zlw2hf1 | Roy3838 | /r/LocalLLaMA/comments/1mhrx3m/how_to_use_your_local_models_to_watch_your_screen/ | 1970-01-01T00:00:00 | 0 | {} | 1mhrx3m | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/g3pod2zlw2hf1/DASHPlaylist.mpd?a=1757068214%2CYWZiOGIxNzIxMTk3ZTlkZjEzN2JiNTA5NGU0MmIxZTk1N2U5YTRiZjM1Y2Y3M2EwNDcxMmY5NjcxNmQ3MzAwMw%3D%3D&v=1&f=sd', 'duration': 248, 'fallback_url': 'https://v.redd.it/g3pod2zlw2hf1/DASH_1080.mp4?source=fallback', '... | t3_1mhrx3m | /r/LocalLLaMA/comments/1mhrx3m/how_to_use_your_local_models_to_watch_your_screen/ | false | false | 107 | {'enabled': False, 'images': [{'id': 'czVxbDlzeWx3MmhmMTeXrb7fw6xx0BNL_5u8ms92EIrAoiEjzO7YOwhlTBj3', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/czVxbDlzeWx3MmhmMTeXrb7fw6xx0BNL_5u8ms92EIrAoiEjzO7YOwhlTBj3.png?width=108&crop=smart&format=pjpg&auto=webp&s=0cafd483e776962ac85274492a6320c14f4f9... | |
is there an actually useful ai model for coding tasks and workflows? | 0 | I'm new to the local AI world. What kind of PC specs would I need to run a useful AI agent specialized in coding? | 2025-08-04T22:24:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mhrryp/is_there_an_actually_useful_ai_model_for_coding/ | Comfortable-Smoke672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhrryp | false | null | t3_1mhrryp | /r/LocalLLaMA/comments/1mhrryp/is_there_an_actually_useful_ai_model_for_coding/ | false | false | self | 0 | null |
Artificial analysis meta-benchmark update | 5 | It seems they're updating their meta-benchmark with some less saturated ones which is good.
A bit strange to continue using MMLU pro as it's quite saturated.
This update will make comparisons across time invalid.
Grok and o3 are now tied. It's not clear if they are done updating. | 2025-08-04T22:15:49 | https://artificialanalysis.ai/models/ | nomorebuttsplz | artificialanalysis.ai | 1970-01-01T00:00:00 | 0 | {} | 1mhrke2 | false | null | t3_1mhrke2 | /r/LocalLLaMA/comments/1mhrke2/artificial_analysis_metabenchmark_update/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'h... |
Open Source ObserverAI App demo! It's free and it's now suuuuper easy to install and use! | 1 | **TLDR:** I built this **open source** and **local** app that lets your local models **watch your screen** and do stuff! It is now super **easy to install** and use, to make local AI **accessible to** **everybody**!
Hey guys! I'm back with some Observer updates c: first of all thank you so much for all of your supp... | 2025-08-04T22:10:29 | https://v.redd.it/6k8em8zbs2hf1 | Roy3838 | /r/LocalLLaMA/comments/1mhrfqq/open_source_observerai_app_demo_its_free_and_its/ | 1970-01-01T00:00:00 | 0 | {} | 1mhrfqq | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6k8em8zbs2hf1/DASHPlaylist.mpd?a=1757067036%2CYmMyYjU5NDBiZWE2NTQ1OWQ3Y2VjYTEzYTRhYjk4MWQxYmE0NTI5MWMxOTk1NGU1NzFiYTlmMWEyZDQyMzMxOA%3D%3D&v=1&f=sd', 'duration': 248, 'fallback_url': 'https://v.redd.it/6k8em8zbs2hf1/DASH_1080.mp4?source=fallback', '... | t3_1mhrfqq | /r/LocalLLaMA/comments/1mhrfqq/open_source_observerai_app_demo_its_free_and_its/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eHQ4NjFtemJzMmhmMTeXrb7fw6xx0BNL_5u8ms92EIrAoiEjzO7YOwhlTBj3', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eHQ4NjFtemJzMmhmMTeXrb7fw6xx0BNL_5u8ms92EIrAoiEjzO7YOwhlTBj3.png?width=108&crop=smart&format=pjpg&auto=webp&s=59df9f2f7342c73db909c286c34ec5dd7bcde... | |
Export/Backup AnythingLLM Workspace? | 2 | Is there a way to export or backup an AnythingLLM workspace/RAG? I have one that is well developed and want to deploy it so others can mess around with it, but since it keeps track of chat history, I want a backup and a way to import it into a new workspace if the interaction changes its dynamic too much.
Also just in... | 2025-08-04T22:09:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mhrey9/exportbackup_anythingllm_workspace/ | Nuvious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhrey9 | false | null | t3_1mhrey9 | /r/LocalLLaMA/comments/1mhrey9/exportbackup_anythingllm_workspace/ | false | false | self | 2 | null |
What's the largest openweights LLM? non-MoE and MoE? | 0 | 😶🌫️ | 2025-08-04T21:45:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mhqt77/whats_the_largest_openweights_llm_nonmoe_and_moe/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhqt77 | false | null | t3_1mhqt77 | /r/LocalLLaMA/comments/1mhqt77/whats_the_largest_openweights_llm_nonmoe_and_moe/ | false | false | self | 0 | null |
Maxed out M3 Mac studio as an LLM server for local employees? | 9 | Hey r/LocalLLaMA, I'm torn between building a server and buying an M3 Mac Studio.
The needs are as follows
\>run LLM models LOCALLY (locality is non-negotiable)
\>stream files, videos across multiple computers, emails and other basic server operations
The big limitation is that, currently, we don't have the infra... | 2025-08-04T21:32:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mhqgv1/maxed_out_m3_mac_studio_as_an_llm_server_for/ | Manderbillt2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhqgv1 | false | null | t3_1mhqgv1 | /r/LocalLLaMA/comments/1mhqgv1/maxed_out_m3_mac_studio_as_an_llm_server_for/ | false | false | self | 9 | null |
Tried Mistral-Small3.1-24B-Instruct with Open-WebUI and got this | 3 | is this normal? what's happening? | 2025-08-04T21:05:36 | Juanouo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhprfk | false | null | t3_1mhprfk | /r/LocalLLaMA/comments/1mhprfk/tried_mistralsmall3124binstruct_with_openwebui/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': 'pbw43rp0i2hf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/pbw43rp0i2hf1.png?width=108&crop=smart&auto=webp&s=637c5a65b67429fd45aa9db51e2e1e85724fd95c', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/pbw43rp0i2hf1.png?width=216&crop=smart&auto=web... | |
Quick Qwen Image Gen with 4090+3060 | 56 | Just tested the new **Qwen-Image** model from Alibaba using 🤗 Diffusers with bfloat16 + dual-GPU memory config (4090 + 3060). Prompted it to generate a **cyberpunk night market scene**—complete with neon signs, rainy pavement, futuristic street food vendors, and a monorail in the background.
Ran at `1472x832`, 32 ste... | 2025-08-04T20:59:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mhpm02/quick_qwen_image_gen_with_40903060/ | fp4guru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhpm02 | false | null | t3_1mhpm02 | /r/LocalLLaMA/comments/1mhpm02/quick_qwen_image_gen_with_40903060/ | false | false | self | 56 | null |
How to use Deepseek R1 0528? | 0 | Is it simply the website chatbot? Or do I need to go to OpenRouter and use the free chat there?
Also, I am new to AI chatbots: what is an API? And if DeepSeek is free, what are all these tokens and prices?
Am I using the best model (R1 0528) in the DeepSeek chatbot on the website? Or am I getting a weaker version ... | 2025-08-04T20:40:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mhp3to/how_to_use_deepseek_r1_0528/ | DryMistake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhp3to | false | null | t3_1mhp3to | /r/LocalLLaMA/comments/1mhp3to/how_to_use_deepseek_r1_0528/ | false | false | self | 0 | null |
What's your 'primary' model and why? Do you run a secondary model? | 29 | With all the new models coming out recently, I've been more and more curious about this. It seems like a few months ago we were all running Gemma 3, now everybody seems to be running Qwen 3, but with recent model releases, which is your go-to daily-driver and why, and if you have secondary model(s), what do you use the... | 2025-08-04T20:39:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mhp2e5/whats_your_primary_model_and_why_do_you_run_a/ | ayylmaonade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhp2e5 | false | null | t3_1mhp2e5 | /r/LocalLLaMA/comments/1mhp2e5/whats_your_primary_model_and_why_do_you_run_a/ | false | false | self | 29 | null |
Horizon Beta Free or not on Openrouter | 4 | 2025-08-04T20:20:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mhok5i/horizon_beta_free_or_not_on_openrouter/ | Training-Surround228 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhok5i | false | null | t3_1mhok5i | /r/LocalLLaMA/comments/1mhok5i/horizon_beta_free_or_not_on_openrouter/ | false | false | 4 | null | ||
Offering Benchmarks on my New RIG for Larger Models | 17 | I’ve just finished building a desk-side powerhouse and I want to run a community-driven series of inference latency benchmarks (in ms) on the latest high-performance open-source LLMs (> 70 B parameters), including models like QWEN, GLM 4.5, and others you recommend.
I’m also keen to try any RAM/CPU tricks for models ... | 2025-08-04T20:10:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mhoaxs/offering_benchmarks_on_my_new_rig_for_larger/ | Infamous_Jaguar_2151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhoaxs | false | null | t3_1mhoaxs | /r/LocalLLaMA/comments/1mhoaxs/offering_benchmarks_on_my_new_rig_for_larger/ | false | false | self | 17 | null |
Requesting Model and Benchmark Requests for New RIG | 1 | I’ve just finished building a desk-side powerhouse and I want to run a community-driven series of inference latency benchmarks (in ms) on the latest high-performance open-source LLMs (> 70 B parameters), including models like QWEN, GLM 4.5, and others you recommend. I’m also keen to try any RAM/CPU tricks for models ... | 2025-08-04T20:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mho7cx/requesting_model_and_benchmark_requests_for_new/ | Infamous_Jaguar_2151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mho7cx | false | null | t3_1mho7cx | /r/LocalLLaMA/comments/1mho7cx/requesting_model_and_benchmark_requests_for_new/ | false | false | self | 1 | null |
Evalproject for Local LLMs & Quants | 5 | Lately I started using more local LLMs again, but after playing around with the latest Qwen MoE with A3B, I found out the hard way how fast it falls apart due to hallucination and similar, especially when context gets a little bit longer (we're talking \~1k t). Might be because the model is just not good, because of the q... | 2025-08-04T20:04:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mho569/evalproject_for_local_llms_quants/ | nore_se_kra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mho569 | false | null | t3_1mho569 | /r/LocalLLaMA/comments/1mho569/evalproject_for_local_llms_quants/ | false | false | self | 5 | null |
Building local LLMs that remember? Here’s a memory layer that doesn’t suck. | 1 | If you’re working with local LLMs or agents, you’ve probably dealt with this pain:
* Stateless sessions that lose context
* RAG pipelines that break or leak info
* No clean way to store/retrieve memory scoped per user/project
We built [**Recallio**](https://recallio.ai) to fix it:
A simple API that gives you persis... | 2025-08-04T20:01:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mho26i/building_local_llms_that_remember_heres_a_memory/ | GardenCareless5991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mho26i | false | null | t3_1mho26i | /r/LocalLLaMA/comments/1mho26i/building_local_llms_that_remember_heres_a_memory/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VQsVngyDjYxfnIA8BZbKvwP_4QObPRr0Duzu51eNaJA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VQsVngyDjYxfnIA8BZbKvwP_4QObPRr0Duzu51eNaJA.png?width=108&crop=smart&auto=webp&s=63077f1a187a2f57f1972c25ac8a7974a563f1ad', 'width': 108}, {'height': 113, 'url': 'h... |
how are you guys getting data for fine-tuning? | 1 | It just seems a bit ridiculous to use existing LLMs to output fine-tuning data.
Like, how are you getting the full set of data you need for fine-tuning?
Do you just set the temperature high? | 2025-08-04T19:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mhnhol/how_are_you_guys_getting_data_for_finetuning/ | backlinkbento | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhnhol | false | null | t3_1mhnhol | /r/LocalLLaMA/comments/1mhnhol/how_are_you_guys_getting_data_for_finetuning/ | false | false | self | 1 | null |
Built my own Copilot-style assistant that works offline, with screen/mic/text input | 19 | I got frustrated trying to use GitHub Copilot in restricted environments, airgapped networks, legacy IDEs, places with no extensions allowed. So I built [**QuietPrompt**](https://github.com/viktorfaubl/QuietPrompt), a local-first Copilot-style helper. It comes with an installer to Windows too, [Installer](https://githu... | 2025-08-04T19:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mhng5b/built_my_own_copilotstyle_assistant_that_works/ | Natural-Ad6682 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhng5b | false | null | t3_1mhng5b | /r/LocalLLaMA/comments/1mhng5b/built_my_own_copilotstyle_assistant_that_works/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'ImW7G_vbNEu0XJjxOj4kPw7DmdqNHY5IpJynpgEQTrI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ImW7G_vbNEu0XJjxOj4kPw7DmdqNHY5IpJynpgEQTrI.png?width=108&crop=smart&auto=webp&s=08780895207f09503731d5f70b6b0303fc1c85e9', 'width': 108}, {'height': 108, 'url': 'h... |
Mindforge — Ollama-like local LLM runner with HF + GGUF, OpenAI-compatible API | 14 | After being frustrated with Ollama's slow support for new models, their configs, etc.,
I built Mindforge, a tiny Python tool that runs Hugging Face and GGUF models locally with an OpenAI-compatible API. It’s like a minimal Ollama but in Python, using transformers and llama.cpp.
Highlights:
• Run HF (transformer) ... | 2025-08-04T19:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mhmgci/mindforge_ollamalike_local_llm_runner_with_hf/ | Exw00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhmgci | false | null | t3_1mhmgci | /r/LocalLLaMA/comments/1mhmgci/mindforge_ollamalike_local_llm_runner_with_hf/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'EZzTTg1hdOCFMDwS0zyBmN2ARykDd0eCrL3pSZxiib8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EZzTTg1hdOCFMDwS0zyBmN2ARykDd0eCrL3pSZxiib8.png?width=108&crop=smart&auto=webp&s=d06f68b2553cb89685c7d7b8d1c373c7cba8d93e', 'width': 108}, {'height': 108, 'url': 'h... |
Tool for chat branching & selective-context control exist? | 7 | Hey all, I've been experimenting with various LLM apps and have an idea for a small open-source project to address a frustration I'm hitting repeatedly. But before I dive deep, I wanted to quickly check if it already exists (fingers crossed)!
**My Pain Point:**
I'm tired of being stuck with linear conversations. Whe... | 2025-08-04T18:43:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mhlxe1/tool_for_chat_branching_selectivecontext_control/ | IsWired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhlxe1 | false | null | t3_1mhlxe1 | /r/LocalLLaMA/comments/1mhlxe1/tool_for_chat_branching_selectivecontext_control/ | false | false | self | 7 | null |
Google introduces a new Benchmark: Game Arena and they're streaming your favorite open weight models playing chess against close source models. | 105 | Here is the original blog post: [https://blog.google/technology/ai/kaggle-game-arena/](https://blog.google/technology/ai/kaggle-game-arena/)
About the benchmark, I personally prefer game as a head-to-head benchmark to LMArena. At least if they do benchmaxxing, we might have models that's more intelligent comparing to... | 2025-08-04T18:33:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mhlo6g/google_introduces_a_new_benchmark_game_arena_and/ | mtmttuan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhlo6g | false | null | t3_1mhlo6g | /r/LocalLLaMA/comments/1mhlo6g/google_introduces_a_new_benchmark_game_arena_and/ | false | false | 105 | {'enabled': False, 'images': [{'id': 'UFZqcZq9i0MOfYzexLrIUEZbhTVVNRWezkExEe_y0E4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/UFZqcZq9i0MOfYzexLrIUEZbhTVVNRWezkExEe_y0E4.png?width=108&crop=smart&auto=webp&s=e6c4c1e50bf3d40b8b76be77b34dbecd15f1ff79', 'width': 108}, {'height': 121, 'url': 'h... | |
support for GLM 4.5 family of models has been merged into llama.cpp | 311 | 2025-08-04T18:30:31 | https://github.com/ggml-org/llama.cpp/pull/14939 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mhlkyx | false | null | t3_1mhlkyx | /r/LocalLLaMA/comments/1mhlkyx/support_for_glm_45_family_of_models_has_been/ | false | false | default | 311 | {'enabled': False, 'images': [{'id': 'c5JLWNvDayy9hBNWlkTcKlG0BX-MgLkUBV-jJh9mTeo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c5JLWNvDayy9hBNWlkTcKlG0BX-MgLkUBV-jJh9mTeo.png?width=108&crop=smart&auto=webp&s=78369c4a613d24a26f628c7b0d0788fbd02727b4', 'width': 108}, {'height': 108, 'url': 'h... | |
What's the verdict on using quantized KV cache? | 26 | I've been debating the use of quantized KV cache with llama.cpp (no less than q8) for a long time, but I still can't tell if it's a good idea:
* On one hand, the [original PR](https://github.com/ggml-org/llama.cpp/pull/7412) mentions that perplexity is more sensitive to model weight quants than to KV cache. Or in othe... | 2025-08-04T18:28:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mhlj69/whats_the_verdict_on_using_quantized_kv_cache/ | ParaboloidalCrest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhlj69 | false | null | t3_1mhlj69 | /r/LocalLLaMA/comments/1mhlj69/whats_the_verdict_on_using_quantized_kv_cache/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'OvzkbSQGd0ValXuS9jVeaqGHriS1NS11UNljNQO8XGQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OvzkbSQGd0ValXuS9jVeaqGHriS1NS11UNljNQO8XGQ.png?width=108&crop=smart&auto=webp&s=719eefafd72ad9dfefe61307fd88e8316866de92', 'width': 108}, {'height': 108, 'url': 'h... |
I built state-of-the-art AI memory, try it with any LLM of your choice! | 0 | I got tired of the poor memory features on AI chat platforms; they didn't work well, and I had to constantly repeat my context over and over again.
This led us to build state-of-the-art AI memory infrastructure: the goal is to make memory systems more effective and performant for a highly personalized chat experience. Better... | 2025-08-04T18:27:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mhlhto/i_built_stateoftheart_ai_memory_try_it_with_any/ | Sybilz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhlhto | false | null | t3_1mhlhto | /r/LocalLLaMA/comments/1mhlhto/i_built_stateoftheart_ai_memory_try_it_with_any/ | false | false | 0 | {'enabled': False, 'images': [{'id': '5gTDZsE2cgDjmw2HjboczeXXlQjpy_QJbg52MVy6Ucc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5gTDZsE2cgDjmw2HjboczeXXlQjpy_QJbg52MVy6Ucc.jpeg?width=108&crop=smart&auto=webp&s=bb0504f27a57c96e8e3d6856195ba43933824d50', 'width': 108}, {'height': 113, 'url': '...
Gemini 3 is coming?.. | 211 | [https://x.com/OfficialLoganK/status/1952430214375493808](https://x.com/OfficialLoganK/status/1952430214375493808) | 2025-08-04T18:15:40 | SlerpE | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhl5yo | false | null | t3_1mhl5yo | /r/LocalLLaMA/comments/1mhl5yo/gemini_3_is_coming/ | false | false | 211 | {'enabled': True, 'images': [{'id': 'uhYd1Q9dnVcQrpLekV8fk4gEIwtVrTg3elgEcFfUKqM', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/59joqndkn1hf1.png?width=108&crop=smart&auto=webp&s=318f53c18e18eee36e48194ccf41cec21ffc52db', 'width': 108}, {'height': 37, 'url': 'https://preview.redd.it/59joqndkn1hf1.png?... | ||
Qwen-Image tested | 4 | Just tried out Qwen-Image on qwen chat and it looks good. Though, given the way they have boasted about text rendering, it's not always perfect. But still, pretty good. On par with GPT image or Flux.1 Kontext? Not sure
Here are the test samples: https://youtu.be/kU-TyGPET0A?si=r-59v_GIgzchSEYG
Platform : https://cha... | 2025-08-04T18:15:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mhl59e/qwenimage_tested/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhl59e | false | null | t3_1mhl59e | /r/LocalLLaMA/comments/1mhl59e/qwenimage_tested/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'Hud9M29ELrEat4PaloA6RjHPTbTHxHEsU45TQB2mdng', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Hud9M29ELrEat4PaloA6RjHPTbTHxHEsU45TQB2mdng.jpeg?width=108&crop=smart&auto=webp&s=7afd4093e3720f2bcf71b14ca4929ad86cec2793', 'width': 108}, {'height': 162, 'url': '... |
Qwen3-Coder-30B nailed Snake game in one shot on my MacBook | 6 | I downloaded Qwen3-Coder-30B-A3B-Instruct this morning and it surprised me. The model wrote a working Snake game on the first try.
Here's what I did:
1. Converted the model to MLX format with one command:mlx\_lm.convert --hf-path Qwen/Qwen3-Coder-30B-A3B-Instruct --mlx-path \~/models/Qwen3-Coder-30B-A3B-Instruct.mlx ... | 2025-08-04T18:13:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mhl49l/qwen3coder30b_nailed_snake_game_in_one_shot_on_my/ | txgsync | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhl49l | false | null | t3_1mhl49l | /r/LocalLLaMA/comments/1mhl49l/qwen3coder30b_nailed_snake_game_in_one_shot_on_my/ | false | false | 6 | null | |
Run a 0.6B LLM at 100 tokens/s locally on iPhone | 6 |
Vector Space now runs Qwen3 0.6B at up to 100 tokens/second on the Apple Neural Engine.
The Neural Engine is a different kind of hardware from a GPU or CPU, and it requires extensive changes to the model architecture to run on it - but we get a significant speed gain and 1/4 the energy consumption.
🎉 Try it now ... | 2025-08-04T18:09:57 | Glad-Speaker3006 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhl06m | false | null | t3_1mhl06m | /r/LocalLLaMA/comments/1mhl06m/run_06b_llm_100tokens_locally_on_iphone/ | false | false | 6 | {'enabled': True, 'images': [{'id': 'pzO2nUXm5T7ElM7k5qjpe9WHFUaEPttFlE2rHWKrYK0', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/lls41nzqm1hf1.jpeg?width=108&crop=smart&auto=webp&s=bda34bbec8fdf3925252d345c21af2d53ce861e2', 'width': 108}, {'height': 264, 'url': 'https://preview.redd.it/lls41nzqm1hf1.j... | ||
What's the best model for writing full BDSM stories on 12GB VRAM and 32GB RAM? | 0 |
I want something that could write it all in one go, with me only giving it a few direction adjustments, instead of having a full conversation | 2025-08-04T17:52:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mhkhz5/whats_the_best_model_for_writing_full_bdsm/ | Jex42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhkhz5 | false | null | t3_1mhkhz5 | /r/LocalLLaMA/comments/1mhkhz5/whats_the_best_model_for_writing_full_bdsm/ | false | false | self | 0 | null |
Version 1 open source | 0 | This is an invitation to collaborators, you are welcome to participate. https://github.com/Mainframework/AI-Ripper
| 2025-08-04T17:43:52 | https://v.redd.it/69h9uaeth1hf1 | Intelligent_Face_788 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhk9g3 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/69h9uaeth1hf1/DASHPlaylist.mpd?a=1756921450%2CN2I1ZDQ5NWMwZTQ4OTg5MTg3MTBmNDg1NzYzMTRhMTA1NDc1YTcyMzc0YzhhY2FiMTA1ZDVmOWQxNTgzYWEwNw%3D%3D&v=1&f=sd', 'duration': 111, 'fallback_url': 'https://v.redd.it/69h9uaeth1hf1/DASH_480.mp4?source=fallback', 'h... | t3_1mhk9g3 | /r/LocalLLaMA/comments/1mhk9g3/version_1_open_source/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bnZqcXFhZXRoMWhmMU_RkrImi8DiEqZZAQU9nhbuj6kvEpy1rO8Uz7Q3mpI5', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/bnZqcXFhZXRoMWhmMU_RkrImi8DiEqZZAQU9nhbuj6kvEpy1rO8Uz7Q3mpI5.png?width=108&crop=smart&format=pjpg&auto=webp&s=d99677f81fc45c19626642e2540da921d4310... | |
NVIDIA AI-Q Achieves Top Score for Open, Portable AI Deep Research (LLM with Search Category) | 4 | NVIDIA AI-Q, a blueprint for building AI agents with advanced reasoning skills, is now the top-rated open, portable AI agent in the **LLM with search category** on Hugging Face’s [Deep Research Bench leaderboard](https://huggingface.co/spaces/Ayanami0730/DeepResearch-Leaderboard).
AI-Q, the top open and portable AI a... | 2025-08-04T17:38:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mhk4it/nvidia_aiq_achieves_top_score_for_open_portable/ | PDXcoder2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhk4it | false | null | t3_1mhk4it | /r/LocalLLaMA/comments/1mhk4it/nvidia_aiq_achieves_top_score_for_open_portable/ | false | false | 4 | null | |
Is There Any Good Benchmarking Website for LLMs for Specific Tasks??? | 1 | Is there any unbiased LLM benchmark for specific tasks like OCR, audio understanding, etc., rather than reasoning, solving maths, and coding? | 2025-08-04T17:33:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mhjyt8/is_there_any_good_bencmarking_website_for_llms/ | iamaseem1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhjyt8 | false | null | t3_1mhjyt8 | /r/LocalLLaMA/comments/1mhjyt8/is_there_any_good_bencmarking_website_for_llms/ | false | false | self | 1 | null |
Horizon Beta is probably gpt-5 | 0 | 2025-08-04T17:30:01 | https://x.com/sama/status/1952071832972186018 | boxingdog | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mhjvkh | false | null | t3_1mhjvkh | /r/LocalLLaMA/comments/1mhjvkh/horizon_beta_is_probably_gpt5/ | false | false | default | 0 | null | |
What kind of setup should I get? | 0 | I am looking for a setup that can run 30b local models no problem and preferably 70b, even if they’re pretty slow. Any recommendations for what graphics cards to get and how many? Also good places to look for used ones? Also any insights into how much ram, vram, hdd/ssd space etc. I’m fairly new. Thanks! | 2025-08-04T17:28:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mhjtvv/what_kind_of_setup_should_i_get/ | SnooLemons5892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhjtvv | false | null | t3_1mhjtvv | /r/LocalLLaMA/comments/1mhjtvv/what_kind_of_setup_should_i_get/ | false | false | self | 0 | null |
Is it difficult to get into the field of building LLMs? | 4 | I'm a web developer who has just started to dabble with running LLMs locally. I feel like I have a high-level understanding of how they work and how to run and train them, but I still feel very clueless about how they actually work. I know it would be extremely difficult to actually be on the bleeding edge of building them, su... | 2025-08-04T17:18:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mhjk0z/is_it_difficult_to_get_into_the_field_of_building/ | Traditional_Bet8239 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhjk0z | false | null | t3_1mhjk0z | /r/LocalLLaMA/comments/1mhjk0z/is_it_difficult_to_get_into_the_field_of_building/ | false | false | self | 4 | null |
Handwritten Prescription to Text | 0 | I want to make a model that analyzes handwritten prescriptions and converts them to text, but I am having a hard time deciding what to use. Should I go with OCR or with a VLM like ColQwen?
Also, I don't have the ground truth for these prescriptions, so how can I verify them?
Additionally should I use s... | 2025-08-04T17:00:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mhj1cr/handwritten_prescription_to_text/ | Rukelele_Dixit21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhj1cr | false | null | t3_1mhj1cr | /r/LocalLLaMA/comments/1mhj1cr/handwritten_prescription_to_text/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'udT4QZgiVbOTaDnIXiD8_EZNt_N6762SRpnpYtnpNlw', 'resolutions': [{'height': 162, 'url': 'https://external-preview.redd.it/lTAzdTn-1K5Dl5JRUESU8hj75sJCED-FuJIqnwwHKsg.jpg?width=108&crop=smart&auto=webp&s=13b5ca96fac11a3ca06a4eabbe530e77d9f56ced', 'width': 108}, {'height': 324, 'url': '... |
GLM ranks #2 for chat according to lmarena | 48 | Style control removed.
|Rank (UB)|Model|Score|95% CI (±)|Votes|Company|License|
|:-|:-|:-|:-|:-|:-|:-|
|1|gemini-2.5-pro|1470|±5|26,019|Google|Closed|
|2|grok-4-0709|1435|±6|13,058|xAI|Closed|
|2|glm-4.5|1435|±9|4,112|[Z.ai](http://Z.ai)|MIT|
|2|chatgpt-4o-latest-20250326|1430|±5|30,777|Closed AI||
|2|o3-2025-04-16|14... | 2025-08-04T16:55:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mhix7d/glm_ranks_2_for_chat_according_to_lmarena/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhix7d | false | null | t3_1mhix7d | /r/LocalLLaMA/comments/1mhix7d/glm_ranks_2_for_chat_according_to_lmarena/ | false | false | self | 48 | null |
Suggestion for upgrading hardware for MOE inference and fine-tuning. | 4 | I am just getting started with serious research and want to work on MOE models. Here are my assumptions, and I am thinking of buying hardware based on them.
Current hardware: i7 (13th gen, 8 cores) + 64GB RAM + RTX 4060. The current GPU is pretty limited (8GB VRAM) - not suited for any serious work. Also I do not reside in... | 2025-08-04T16:52:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mhitwa/suggestion_for_upgrading_hardware_for_moe/ | Icy_Gas8807 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhitwa | false | null | t3_1mhitwa | /r/LocalLLaMA/comments/1mhitwa/suggestion_for_upgrading_hardware_for_moe/ | false | false | self | 4 | null |
Qwen-Image is out | 816 | https://x.com/Alibaba_Qwen/status/1952398250121756992
It's better than Flux Kontext, gpt-image level | 2025-08-04T16:49:14 | https://v.redd.it/4077mfg081hf1 | BoJackHorseMan53 | /r/LocalLLaMA/comments/1mhiqqn/qwenimage_is_out/ | 1970-01-01T00:00:00 | 0 | {} | 1mhiqqn | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4077mfg081hf1/DASHPlaylist.mpd?a=1757047764%2COTRmZmRjYWRkYmU5MzUzNWJlMjI4N2FhMTZmYTQ3N2NhNjg1ZjIwMmZmYjNhZjdjNGMzYzE0NTU1MzZmM2E0MQ%3D%3D&v=1&f=sd', 'duration': 116, 'fallback_url': 'https://v.redd.it/4077mfg081hf1/DASH_1080.mp4?source=fallback', '... | t3_1mhiqqn | /r/LocalLLaMA/comments/1mhiqqn/qwenimage_is_out/ | false | false | 816 | {'enabled': False, 'images': [{'id': 'ZXc5MXNwZzA4MWhmMcYwrNxnpQZAm2APaO7BYeGkDJuDLh9yjPxalcFVQ96q', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZXc5MXNwZzA4MWhmMcYwrNxnpQZAm2APaO7BYeGkDJuDLh9yjPxalcFVQ96q.png?width=108&crop=smart&format=pjpg&auto=webp&s=e13de73d5d88711e89158aadf32e2b1b9d0f8... | |
Qwen Image Japanese and Chinese text generation test | 69 | The results are a mix of real and made up characters. The signs are meaningless gibberish. | 2025-08-04T16:44:57 | https://www.reddit.com/gallery/1mhimmj | shokuninstudio | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mhimmj | false | null | t3_1mhimmj | /r/LocalLLaMA/comments/1mhimmj/qwen_image_japanese_and_chinese_text_generation/ | false | false | 69 | null | |
How do people in industry benchmark models? | 5 | Hi guys!
So recently, as a learning exercise, I tuned a Qwen3 model for a coding task. I'm now interested in understanding how to properly benchmark these tuned models using well-known benchmarks. But I'm a bit unsure about how exactly this is done, and was curious how it's typically done in the industry.
Do e... | 2025-08-04T16:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mhieis/how_do_people_in_industry_benchmark_models/ | Spiritual_Process575 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhieis | false | null | t3_1mhieis | /r/LocalLLaMA/comments/1mhieis/how_do_people_in_industry_benchmark_models/ | false | false | self | 5 | null |
A free goldmine of tutorials for the components you need to create production-level agents
Extensive open source resource with tutorials for creating robust AI agents | 49 | **I’ve worked really hard and launched a FREE resource with 30+ detailed tutorials for building comprehensive production-level AI agents, as part of my Gen AI educational initiative.**
The tutorials cover all the key components you need to create agents that are ready for real-world deployment. I plan to keep adding m... | 2025-08-04T16:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mhhy47/a_free_goldmine_of_tutorials_for_the_components/ | Nir777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhhy47 | false | null | t3_1mhhy47 | /r/LocalLLaMA/comments/1mhhy47/a_free_goldmine_of_tutorials_for_the_components/ | false | false | self | 49 | {'enabled': False, 'images': [{'id': '23K9xyykQzGugCXJyC20OMixBbwPZe-S5vv1or7jJHM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/23K9xyykQzGugCXJyC20OMixBbwPZe-S5vv1or7jJHM.png?width=108&crop=smart&auto=webp&s=45b07e7616995751a757eb80d771bad2ee406619', 'width': 108}, {'height': 108, 'url': 'h... |
3090Ti - 38 tokens/sec? | 5 | Qwen3 32b on a 3090Ti = 38tps
I was expecting more? Like at least 50tps and more like 60? Am I tripping?
`C:\>llama-bench.exe -m Qwen_Qwen3-32B-GGUF\Qwen_Qwen3-32B-Q4_K_L.gguf --flash-attn 1`
`ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no`
`ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no`
`ggml_cuda_init: found 1 CUDA ... | 2025-08-04T16:11:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mhhpy9/3090ti_38_tokenssec/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhhpy9 | false | null | t3_1mhhpy9 | /r/LocalLLaMA/comments/1mhhpy9/3090ti_38_tokenssec/ | false | false | self | 5 | null |
GPU-enabled Llama3 inference in Java now runs Qwen3, Phi-3, Mistral and Llama3 models in FP16, Q8 and Q4 | 19 | 2025-08-04T16:04:00 | mikebmx1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhhiw2 | false | null | t3_1mhhiw2 | /r/LocalLLaMA/comments/1mhhiw2/gpuenabled_llama3_inference_in_java_now_runs/ | false | false | default | 19 | {'enabled': True, 'images': [{'id': 'r887j3c401hf1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/r887j3c401hf1.png?width=108&crop=smart&auto=webp&s=edc47e70ed41dfecf98c112cda83ad38681ebd5c', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/r887j3c401hf1.png?width=216&crop=smart&auto=we... | ||
Qwen-Image — a 20B MMDiT model | 156 | 🚀 Meet Qwen-Image — a 20B MMDiT model for next-gen text-to-image generation. Especially strong at creating stunning graphic posters with native text. Now open-source.
🔍 Key Highlights:
🔹 SOTA text rendering — rivals GPT-4o in English, best-in-class for Chinese
🔹 In-pixel text generation — no overlays, fully integr... | 2025-08-04T16:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mhhhpi/qwenimage_a_20b_mmdit_model/ | Xhehab_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhhhpi | false | null | t3_1mhhhpi | /r/LocalLLaMA/comments/1mhhhpi/qwenimage_a_20b_mmdit_model/ | false | false | self | 156 | null |
QWEN-IMAGE is released! | 963 | and it's better than Flux Kontext Pro (according to their benchmarks). That's insane. Really looking forward to it. | 2025-08-04T15:58:55 | https://huggingface.co/Qwen/Qwen-Image | TheIncredibleHem | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mhhdig | false | null | t3_1mhhdig | /r/LocalLLaMA/comments/1mhhdig/qwenimage_is_released/ | false | false | default | 963 | {'enabled': False, 'images': [{'id': 'qvxd1_x1PaBd3IEw-2xS9ifjngcFBwLHvsX1ihQDi64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qvxd1_x1PaBd3IEw-2xS9ifjngcFBwLHvsX1ihQDi64.png?width=108&crop=smart&auto=webp&s=55a19a341313ab08b43f3737ad0171a6dc27a3a6', 'width': 108}, {'height': 116, 'url': 'h... |
🚀 Meet Qwen-Image | 696 | 🚀 Meet Qwen-Image — a 20B MMDiT model for next-gen text-to-image generation. Especially strong at creating stunning graphic posters with native text. Now open-source.
🔍 Key Highlights:
🔹 SOTA text rendering — rivals GPT-4o in English, best-in-class for Chinese
🔹 In-pixel text generation — no overlays, fully inte... | 2025-08-04T15:58:11 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhhctd | false | null | t3_1mhhctd | /r/LocalLLaMA/comments/1mhhctd/meet_qwenimage/ | false | false | default | 696 | {'enabled': True, 'images': [{'id': '7a463it8z0hf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/7a463it8z0hf1.jpeg?width=108&crop=smart&auto=webp&s=fa3d443b56ce6f98c27acfab23a04897cc07af2c', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/7a463it8z0hf1.jpeg?width=216&crop=smart&auto=w... | |
Profanity: QwenCode... but is Devstral in the background. And it works. Just slower than Coder-30b-a3b... but it works. | 11 | 2025-08-04T15:57:07 | JLeonsarmiento | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhhbrr | false | null | t3_1mhhbrr | /r/LocalLLaMA/comments/1mhhbrr/profanity_qwencode_but_is_devstral_in_the/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': 'wa8tbrp0z0hf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/wa8tbrp0z0hf1.png?width=108&crop=smart&auto=webp&s=ec77a68e170a79bdacd792c9b8fc588ef1dc83b7', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/wa8tbrp0z0hf1.png?width=216&crop=smart&auto=web... | ||
Qwen/Qwen-Image · Hugging Face | 85 | 2025-08-04T15:51:47 | https://huggingface.co/Qwen/Qwen-Image | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mhh6se | false | null | t3_1mhh6se | /r/LocalLLaMA/comments/1mhh6se/qwenqwenimage_hugging_face/ | false | false | default | 85 | {'enabled': False, 'images': [{'id': 'qvxd1_x1PaBd3IEw-2xS9ifjngcFBwLHvsX1ihQDi64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qvxd1_x1PaBd3IEw-2xS9ifjngcFBwLHvsX1ihQDi64.png?width=108&crop=smart&auto=webp&s=55a19a341313ab08b43f3737ad0171a6dc27a3a6', 'width': 108}, {'height': 116, 'url': 'h... | |
Spot the difference | 0 | 3.9 million views. This is how the CEO of "Openai" writes. I have been scolded and grounded so many times for grammar mistakes. Speechless. | 2025-08-04T15:50:26 | Icy-Body4373 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhh5g3 | false | null | t3_1mhh5g3 | /r/LocalLLaMA/comments/1mhh5g3/spot_the_difference/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'bmlzstq2x0hf1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/bmlzstq2x0hf1.png?width=108&crop=smart&auto=webp&s=fa74b9cc51dbf2fd117717bc27a25ca11bef3a28', 'width': 108}, {'height': 286, 'url': 'https://preview.redd.it/bmlzstq2x0hf1.png?width=216&crop=smart&auto=we... | |
Sam Altman watching Qwen drop model after model | 945 | 2025-08-04T15:38:39 | TheRealSerdra | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhgu6t | false | null | t3_1mhgu6t | /r/LocalLLaMA/comments/1mhgu6t/sam_altman_watching_qwen_drop_model_after_model/ | false | false | default | 945 | {'enabled': True, 'images': [{'id': 'g7t8cmgrv0hf1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/g7t8cmgrv0hf1.jpeg?width=108&crop=smart&auto=webp&s=44351caaa0ea4099b9344000e708dfe04a848bdc', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/g7t8cmgrv0hf1.jpeg?width=216&crop=smart&auto=w... | ||
Get ready for GLM-4.5 local gguf woot woot | 171 | This model is insane! I have been testing the ongoing llama.cpp PR, and this morning it has been amazing! GLM can spit out LOOOOOOOOOOOOOOOOOONG tokens! The original was a beast, and the new one is even better. I gave it 2500 lines of Python code, told it to refactor it, and it did so without dropping anything! The... | 2025-08-04T15:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mhg8rt/get_ready_for_glm45_local_gguf_woot_woot/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhg8rt | false | null | t3_1mhg8rt | /r/LocalLLaMA/comments/1mhg8rt/get_ready_for_glm45_local_gguf_woot_woot/ | false | false | self | 171 | null |
What do you guys estimate the parameter count of Imagen 4 Ultra to be? | 0 | It's created by Google and is almost without a doubt the best image generator currently available, but not open source. It's very intelligent and even possesses higher-level cognition and reasoning. By the way, the model is pure diffusion based. | 2025-08-04T15:16:25 | https://www.reddit.com/gallery/1mhg8du | Longjumping_Spot5843 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mhg8du | false | null | t3_1mhg8du | /r/LocalLLaMA/comments/1mhg8du/what_do_you_guys_estimate_the_parameter_count_of/ | false | false | 0 | null | |
Best local model for using with Cursor | 0 | I've set up Qwen3 30b quant 4 on a home server running a single 3090. It really struggles with tool calls and can't seem to interact with the Cursor APIs effectively. What are some good models (if any) that will fit within 24gb of VRAM but still be able to utilize the Cursor tool calls in agent mode? I'm planning to tr... | 2025-08-04T15:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mhg0ts/best_local_model_for_using_with_cursor/ | Traditional_Bet8239 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhg0ts | false | null | t3_1mhg0ts | /r/LocalLLaMA/comments/1mhg0ts/best_local_model_for_using_with_cursor/ | false | false | self | 0 | null |
Bolt Graphics’ Zeus GPU Makes Bold Claim of Outperforming NVIDIA’s RTX 5090 by 10x in Rendering Workloads, That Too Using Laptop-Grade Memory | 32 | 2025-08-04T14:42:27 | https://wccftech.com/bolt-graphics-zeus-gpu-makes-bold-claim-of-outperforming-rtx-5090-by-10x-in-rendering-workloads/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1mhfbsi | false | null | t3_1mhfbsi | /r/LocalLLaMA/comments/1mhfbsi/bolt_graphics_zeus_gpu_makes_bold_claim_of/ | false | false | default | 32 | {'enabled': False, 'images': [{'id': 'NOVvMZCeb6MJYdVCo6dYfR7H6AIJZt8iOvly0YwXZZ0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NOVvMZCeb6MJYdVCo6dYfR7H6AIJZt8iOvly0YwXZZ0.png?width=108&crop=smart&auto=webp&s=7bb50bf5bad3f11662fc8e79df150764f625417a', 'width': 108}, {'height': 121, 'url': 'h... | |
Which one is faster in LLM inference, 7900 XTX or RTX Pro 4000? | 3 | 7900 XTX 24GB or RTX Pro 4000 24GB Blackwell?
The AMD is 303W TDP at about 800€ and the RTX is 140W TDP at about 1200€, and not yet widely available?
vLLM or Ollama like gemma3?
Can anyone estimate? I have a 5090 and a 7900 XTX; the 5090 gives 66 t/s while the 7900 XTX gives 29 t/s in gemma3-27b.
I guess RTX pro 4000 at least twice slower than... | 2025-08-04T14:35:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mhf5jq/which_one_is_faster_in_llm_inference_7900_xtx_or/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhf5jq | false | null | t3_1mhf5jq | /r/LocalLLaMA/comments/1mhf5jq/which_one_is_faster_in_llm_inference_7900_xtx_or/ | false | false | self | 3 | null |
Qwen image 20B is coming! | 350 | 2025-08-04T14:30:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mhf0kl/qwen_image_20b_is_coming/ | sunshinecheung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhf0kl | false | null | t3_1mhf0kl | /r/LocalLLaMA/comments/1mhf0kl/qwen_image_20b_is_coming/ | false | false | 350 | {'enabled': False, 'images': [{'id': 'qRUOYZYoHmNR9iE-xb-2D9P2t108utUKO0BsEEFsXs0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qRUOYZYoHmNR9iE-xb-2D9P2t108utUKO0BsEEFsXs0.png?width=108&crop=smart&auto=webp&s=5be1c7fe4a021337cd223c3f63b471eaee167539', 'width': 108}, {'height': 108, 'url': 'h... | ||
Deepseek R1's reasoning just feels less intelligent/efficient than more recent models | 5 | It feels like they made it reason too naturally, better than the previous r1 of course, but it has a tendency to constantly write entire chunks over and over as it works on them and it feels like the model thinks it has a scratch pad and is a normal human talking out loud, constantly proposing different ideas that ulti... | 2025-08-04T14:17:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mhepca/deepseek_r1s_reasoning_just_feels_less/ | Longjumping_Spot5843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhepca | false | null | t3_1mhepca | /r/LocalLLaMA/comments/1mhepca/deepseek_r1s_reasoning_just_feels_less/ | false | false | self | 5 | null |
Looks like GGUF for GLM 4.5 may be getting closer to a reality. | 36 | [https://github.com/ggml-org/llama.cpp/pull/14939](https://github.com/ggml-org/llama.cpp/pull/14939) | 2025-08-04T14:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mheij9/looks_like_gguf_for_glm_45_may_be_getting_closer/ | jeffwadsworth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mheij9 | false | null | t3_1mheij9 | /r/LocalLLaMA/comments/1mheij9/looks_like_gguf_for_glm_45_may_be_getting_closer/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': 'c5JLWNvDayy9hBNWlkTcKlG0BX-MgLkUBV-jJh9mTeo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c5JLWNvDayy9hBNWlkTcKlG0BX-MgLkUBV-jJh9mTeo.png?width=108&crop=smart&auto=webp&s=78369c4a613d24a26f628c7b0d0788fbd02727b4', 'width': 108}, {'height': 108, 'url': 'h... |
Any Open-Source TTS Models That Can Handle Long-Form (1+ Hour) Audio Generation? | 1 | [removed] | 2025-08-04T13:55:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mhe4j6/any_opensource_tts_models_that_can_handle/ | ARNOEREBUS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhe4j6 | false | null | t3_1mhe4j6 | /r/LocalLLaMA/comments/1mhe4j6/any_opensource_tts_models_that_can_handle/ | false | false | self | 1 | null |
Best document parser | 11 | I am on a quest to find a SOTA document parser for PDF/DOCX files. I have about 100k pages with tables, text, and images (with text) that I want to convert to Markdown format.
What is the best open-source document parser available right now that comes close to Azure Document Intelligence accuracy?
I have explored
* Docl... | 2025-08-04T13:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mhe2h9/best_document_parser/ | aiwtl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhe2h9 | false | null | t3_1mhe2h9 | /r/LocalLLaMA/comments/1mhe2h9/best_document_parser/ | false | false | self | 11 | null |
r/LocalLLaMA right now | 809 | 2025-08-04T13:52:26 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhe1rl | false | null | t3_1mhe1rl | /r/LocalLLaMA/comments/1mhe1rl/rlocalllama_right_now/ | false | false | default | 809 | {'enabled': True, 'images': [{'id': 'f0xr7mshc0hf1', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/f0xr7mshc0hf1.png?width=108&crop=smart&auto=webp&s=8e4fb1ed4f97bf1ef6b619b68d73f264e3545abe', 'width': 108}, {'height': 285, 'url': 'https://preview.redd.it/f0xr7mshc0hf1.png?width=216&crop=smart&auto=we... | ||
New Qwen model has vision | 164 | 2025-08-04T13:37:05 | Relative_Rope4234 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhdnye | false | null | t3_1mhdnye | /r/LocalLLaMA/comments/1mhdnye/new_qwen_model_has_vision/ | false | false | 164 | {'enabled': True, 'images': [{'id': 't3HB7rEFEpA_c84oL5rMK_DhP-86PeLTNUvQX50FdoU', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/vypcvak2a0hf1.jpeg?width=108&crop=smart&auto=webp&s=812d44c02d8ace8217583425393f7b465984241c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/vypcvak2a0hf1.j... | |||
New Qwen has with vision | 1 | 2025-08-04T13:34:59 | Relative_Rope4234 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhdm12 | false | null | t3_1mhdm12 | /r/LocalLLaMA/comments/1mhdm12/new_qwen_has_with_vision/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'BqV28_mc-UceWZpaNznWRtsYXXdKpgBh1M-gPy_CNqg', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ys9eeyxo90hf1.jpeg?width=108&crop=smart&auto=webp&s=83cd133ba95a7a9e1658cc11ce59874d535abf25', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ys9eeyxo90hf1.j... | |||
How can I allow a local model to search the web? | 6 | I am currently using ollama from the command line in linux and it works well. But one thing I miss from chatgpt is having the model supplement its knowledge with web search. How can I allow a local model to search the web when it thinks it would be helpful? | 2025-08-04T13:08:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mhcyu0/how_can_i_allow_a_local_model_to_search_the_web/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhcyu0 | false | null | t3_1mhcyu0 | /r/LocalLLaMA/comments/1mhcyu0/how_can_i_allow_a_local_model_to_search_the_web/ | false | false | self | 6 | null |
Poor performance from llama.cpp in text-generation-webui? | 2 | Just recently updated text generation web ui and am running Deepseek Distill Llama 3.3 70b with llama.cpp. It's one of those imatrix quants or whatever but it's labeled Q4_KM I think. I am utilizing the full 131k context length and making it possible with KV cache quantization. The weird part is the same exact configur... | 2025-08-04T13:07:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mhcy5r/poor_performance_from_llamacpp_in/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhcy5r | false | null | t3_1mhcy5r | /r/LocalLLaMA/comments/1mhcy5r/poor_performance_from_llamacpp_in/ | false | false | self | 2 | null |
Are you more interested in local open source (i.e. running models on your own hardware) or SOTA open source like Qwen 3 models, Kimi K2, Deepseek, etc.? | 2 | Title | 2025-08-04T13:05:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mhcx2k/are_you_more_interested_in_local_opensource_ie/ | Longjumping_Spot5843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhcx2k | false | null | t3_1mhcx2k | /r/LocalLLaMA/comments/1mhcx2k/are_you_more_interested_in_local_opensource_ie/ | false | false | self | 2 | null |
Huawei released weights of Pangu Ultra,a 718B model. | 336 | 2025-08-04T13:02:04 | https://ai.gitcode.com/ascend-tribe/openpangu-ultra-moe-718b-model/blob/main/README_EN.md | Overflow_al | ai.gitcode.com | 1970-01-01T00:00:00 | 0 | {} | 1mhctvk | false | null | t3_1mhctvk | /r/LocalLLaMA/comments/1mhctvk/huawei_released_weights_of_pangu_ultraa_718b_model/ | false | false | default | 336 | {'enabled': False, 'images': [{'id': 'b4vAhRXu0QISFGzKNI7MMEeFrdQG1UWQqC8GhQPUCNU', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/b4vAhRXu0QISFGzKNI7MMEeFrdQG1UWQqC8GhQPUCNU.png?auto=webp&s=da6a5e01cd36a70882b4a98dbc5b14b02b19a809', 'width': 96}, 'variants': {}}]} | |
Deepseek r2 but from Alibaba 😏 | 2 | 2025-08-04T12:52:53 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhcm05 | false | null | t3_1mhcm05 | /r/LocalLLaMA/comments/1mhcm05/deepseek_r2_but_from_alibaba/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'wercrdu120hf1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/wercrdu120hf1.png?width=108&crop=smart&auto=webp&s=bc7136857b9597f6ba6668c34bf1f410f18718fd', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/wercrdu120hf1.png?width=216&crop=smart&auto=webp... | ||
I actually hope that it's a large model - maybe ~700B params, but with a new architectural change that sets it apart. Something that could outperform o3 | 1 | 2025-08-04T12:49:47 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhcjce | false | null | t3_1mhcjce | /r/LocalLLaMA/comments/1mhcjce/i_actually_hope_that_its_a_large_model_maybe_700b/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'y24m9aco00hf1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/y24m9aco00hf1.png?width=108&crop=smart&auto=webp&s=b668c548572eb99ffc27a0bba47ab0b37ea80660', 'width': 108}, {'height': 84, 'url': 'https://preview.redd.it/y24m9aco00hf1.png?width=216&crop=smart&auto=webp... | ||
Qwen 3 - 7B has a rival - Hunyuan. | 30 | ERROR: type should be string, got "\n\nhttps://youtu.be/YR0KYO1YxsM?si=PEZJci3xJXITSuHM&utm_source=ZTQxO" | 2025-08-04T12:47:21 | Current-Stop7806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhchdb | false | null | t3_1mhchdb | /r/LocalLLaMA/comments/1mhchdb/qwen_3_7b_has_a_rival_hunyuan/ | false | false | default | 30 | {'enabled': True, 'images': [{'id': 'sfrqq83710hf1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/sfrqq83710hf1.jpeg?width=108&crop=smart&auto=webp&s=480e2bd14f64df792e7eaa9981b21f6d929c3d90', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/sfrqq83710hf1.jpeg?width=216&crop=smart&auto=w... | |
What models have the least likelihood of hallucinations? | 0 | I'm new to local LLM and all I have right now is that a GTX 1060 6g from 2017, when I get an upgrade in the 4000 series, I would like to know what are your suggested models that hallucinate the least? | 2025-08-04T12:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mhcfe4/what_models_have_the_least_likelihood_of/ | vulgar1171 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhcfe4 | false | null | t3_1mhcfe4 | /r/LocalLLaMA/comments/1mhcfe4/what_models_have_the_least_likelihood_of/ | false | false | self | 0 | null |
We built Usely because no one else is protecting founders from $1,000+ API bills on $20 plans | 1 | [removed] | 2025-08-04T12:30:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mhc3jk/we_built_usely_because_no_one_else_is_protecting/ | Jotadesito | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhc3jk | false | null | t3_1mhc3jk | /r/LocalLLaMA/comments/1mhc3jk/we_built_usely_because_no_one_else_is_protecting/ | false | false | self | 1 | null |
Running, fine tuning and converting LLMs on new Ryzen AI 7 or 9 APUs - 64-128GB RAM - 75% VRAM | 4 | Hey
Does anybody have some experience working with those newer Ryzen AI Chips in case of running Models (up to 70B Q4 ? )
Fine tuning LLMs using LoRA or converting models from/into GGUF?
Saw those are more affordable than going for a maxed out mac book pro and would be quite interested in their performance and semi p... | 2025-08-04T12:29:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mhc31j/running_fine_tuning_and_converting_llms_on_new/ | IngloriousBastrd7908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhc31j | false | null | t3_1mhc31j | /r/LocalLLaMA/comments/1mhc31j/running_fine_tuning_and_converting_llms_on_new/ | false | false | self | 4 | null |
What kind of Qwen 2508 do you want tonight? ;) | 134 | 2025-08-04T12:19:39 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhbvig | false | null | t3_1mhbvig | /r/LocalLLaMA/comments/1mhbvig/what_kind_of_qwen_2508_do_you_want_tonight/ | false | false | default | 134 | {'enabled': True, 'images': [{'id': '3f5by1b8wzgf1', 'resolutions': [{'height': 21, 'url': 'https://preview.redd.it/3f5by1b8wzgf1.png?width=108&crop=smart&auto=webp&s=5c79933ab77e3d643053f37ab6382908ac1eb9af', 'width': 108}, {'height': 43, 'url': 'https://preview.redd.it/3f5by1b8wzgf1.png?width=216&crop=smart&auto=webp... | ||
New Qwen Models Today!!! | 760 | 2025-08-04T12:12:00 | R46H4V | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhbpmo | false | null | t3_1mhbpmo | /r/LocalLLaMA/comments/1mhbpmo/new_qwen_models_today/ | false | false | default | 760 | {'enabled': True, 'images': [{'id': 'qemmgysvuzgf1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/qemmgysvuzgf1.png?width=108&crop=smart&auto=webp&s=3f9e5dff4613eb055af874621d1a213848bf522f', 'width': 108}, {'height': 84, 'url': 'https://preview.redd.it/qemmgysvuzgf1.png?width=216&crop=smart&auto=webp... | ||
How to maximise Prompt processing speed for long context usage? | 4 | I need a local llm to be able to process 100-200k context in a reasonable timeframe. Does anyone know best llama.cpp flags to maximise Prompt processing?
I have 16gb vram 9070 amd GPU and 32gb ram, which I may upgrade to 192gb ram at some point. Ubuntu and windows. Had some driver problems with rocm but would that b... | 2025-08-04T12:11:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mhbp73/how_to_maximise_prompt_processing_speed_for_long/ | Ok-Kangaroo6055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhbp73 | false | null | t3_1mhbp73 | /r/LocalLLaMA/comments/1mhbp73/how_to_maximise_prompt_processing_speed_for_long/ | false | false | self | 4 | null |
What are some use cases to send multiple messages in one LLM API request? | 0 | Hi. The messages field in the payload of an (OpenAI-compatible) API call is an array, meaning there can be multiple messages, of system, user, or assistant roles. I normally just send a system message and then the user message.
What are some use cases where it's preferable to send multiple messages in one API call i... | 2025-08-04T12:04:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mhbk4f/what_are_some_use_cases_to_send_multiple_messages/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhbk4f | false | null | t3_1mhbk4f | /r/LocalLLaMA/comments/1mhbk4f/what_are_some_use_cases_to_send_multiple_messages/ | false | false | self | 0 | null |
Build a Small Language Model from Scratch | Free 6 hour live workshop | 3 | ERROR: type should be string, got "\n\nhttps://preview.redd.it/2062bdjfszgf1.jpg?width=3024&format=pjpg&auto=webp&s=d90a50b645e50511d0d6eea016fa89918bdfc9e2\n\nOn 9th August 2025, I am starting a Small Language Model Workshop. It will be a 5-6 hour live workshop. This is purely for teaching and sharing knowledge. Think of it as a 3 times expanded and live version of Karpathy's repository and video. \n\nIn this workshop, we will build a production ready Small Language Model (SLM) fully from scratch. \n\nTowards the end of this workshop, we will chain 8 GPUs and actually replicate the results of GPT-2. \n\nIt will be like building GPT-2 fully from scratch, and getting results which OpenAI got in their classical GPT-2 paper. \n\nThe workshop will start from tokenisation and end at multi-GPU programming. \n\nWe will work with 2 datasets: \n\n\\- TinyStories\n\n\\- FineWeb Edu\n\nWe will go through the following: \n\n\\- Loading datasets\n\n\\- Tokenization\n\n\\- Creating input-target pairs\n\n\\- Assembling the entire SLM architecture\n\n\\- Defining the training loop\n\n\\- Running inference\n\n\\- Multi-GPU version of training\n\nRegister for free here: [https://slm-from-scratch.vercel.app/](https://slm-from-scratch.vercel.app/)" | 2025-08-04T12:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mhbhn3/build_a_small_language_model_from_scratch_free_6/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhbhn3 | false | null | t3_1mhbhn3 | /r/LocalLLaMA/comments/1mhbhn3/build_a_small_language_model_from_scratch_free_6/ | false | false | 3 | null | |
GLM-4.5 llama.cpp PR is nearing completion | 107 | Current status:
https://github.com/ggml-org/llama.cpp/pull/14939#issuecomment-3150197036
Everyone get ready to fire up your GPUs... | 2025-08-04T11:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mhb5el/glm45_llamacpp_pr_is_nearing_completion/ | DistanceSolar1449 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhb5el | false | null | t3_1mhb5el | /r/LocalLLaMA/comments/1mhb5el/glm45_llamacpp_pr_is_nearing_completion/ | false | false | self | 107 | null |
Local database agent | 0 | As the title suggests, I am trying to build a database agent for a custom Erp software with all the tables already inside a Postgres server.
As it’s an erp software it deals with a variety of tables such as sales/inventory/packaging and so on.
I want to make an agent such that if I ask a question related to sales it ... | 2025-08-04T11:30:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mhaw4g/local_database_agent/ | Whywhoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhaw4g | false | null | t3_1mhaw4g | /r/LocalLLaMA/comments/1mhaw4g/local_database_agent/ | false | false | self | 0 | null |
No new open source models from US? | 1 | [removed] | 2025-08-04T11:28:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mhaudy/no_new_open_source_models_from_us/ | Exact_Tip_8497 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhaudy | false | null | t3_1mhaudy | /r/LocalLLaMA/comments/1mhaudy/no_new_open_source_models_from_us/ | false | false | self | 1 | null |
Open Music Foundation Models for Full-Song Generation | 22 | YuE: Open Full-song Music Generation Foundation Model, something similar to [Suno.ai](http://Suno.ai) but open | 2025-08-04T10:48:51 | https://map-yue.github.io/ | phone_radio_tv | map-yue.github.io | 1970-01-01T00:00:00 | 0 | {} | 1mha439 | false | null | t3_1mha439 | /r/LocalLLaMA/comments/1mha439/open_music_foundation_models_for_fullsong/ | false | false | default | 22 | null |
RAG with 30k documents, some with 300 pages each. | 14 | What's the best approach for this? Tried it in open webui with ollama backend but it's too slow.
All docs are pdf, all are OCRs already. Ingestion to knowledgebase is the bottleneck now.
Anybody done this and what was the best approach for you? | 2025-08-04T10:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mha1g1/rag_with_30k_documents_some_with_300_pages_each/ | dennisitnet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mha1g1 | false | null | t3_1mha1g1 | /r/LocalLLaMA/comments/1mha1g1/rag_with_30k_documents_some_with_300_pages_each/ | false | false | self | 14 | null |