title stringlengths 1–300 | score int64 0–8.54k | selftext stringlengths 0–41.5k | created timestamp[ns] 2023-04-01 04:30:41 – 2026-03-04 02:14:14 ⌀ | url stringlengths 0–878 | author stringlengths 3–20 | domain stringlengths 0–82 | edited timestamp[ns] 1970-01-01 00:00:00 – 2026-02-19 14:51:53 | gilded int64 0–2 | gildings stringclasses 7 values | id stringlengths 7 | locked bool 2 classes | media stringlengths 646–1.8k ⌀ | name stringlengths 10 | permalink stringlengths 33–82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4–213 ⌀ | ups int64 0–8.54k | preview stringlengths 301–5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GLaDOS has been updated for Parakeet 0.6B | 251 | It's been a while, but I've had a chance to make [a big update to GLaDOS](https://github.com/dnhkng/GLaDOS): A much improved ASR model!
The new [Nemo Parakeet 0.6B model](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2) is smashing the [Huggingface ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr... | 2025-05-17T12:55:20 | Reddactor | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kosbyy | false | null | t3_1kosbyy | /r/LocalLLaMA/comments/1kosbyy/glados_has_been_updated_for_parakeet_06b/ | false | false | 251 | {'enabled': True, 'images': [{'id': 'Jz_spyBjU2RFoyu8UBc-WLJw1cmPh_qdDb1ZKbWuORI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/8rtph8367c1f1.png?width=108&crop=smart&auto=webp&s=cd2b08023bf7e59cc8ec4bdaa9272211241113b2', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/8rtph8367c1f1.png... | ||
AGI is action, not words. | 1 | 2025-05-17T12:15:29 | https://medium.com/@daniel.hollarek/agi-is-action-not-words-0fa793a6bef4 | Somerandomguy10111 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1korkms | false | null | t3_1korkms | /r/LocalLLaMA/comments/1korkms/agi_is_action_not_words/ | false | false | default | 1 | null | |
MULTI MODAL VIDEO RAG | 1 | [removed] | 2025-05-17T12:13:35 | https://www.reddit.com/r/LocalLLaMA/comments/1korje1/multi_modal_video_rag/ | Pez_99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1korje1 | false | null | t3_1korje1 | /r/LocalLLaMA/comments/1korje1/multi_modal_video_rag/ | false | false | self | 1 | null |
Video Analyser | 1 | [removed] | 2025-05-17T11:53:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kor5tn/video_anlayser/ | slic420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kor5tn | false | null | t3_1kor5tn | /r/LocalLLaMA/comments/1kor5tn/video_anlayser/ | false | false | self | 1 | null |
fun with llama-server, SmolVLM2 and dogs | 1 | [removed] | 2025-05-17T11:49:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kor3kc/fun_with_llamaserver_smolvlm2_and_dogs/ | Frosty-Whole-7752 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kor3kc | false | null | t3_1kor3kc | /r/LocalLLaMA/comments/1kor3kc/fun_with_llamaserver_smolvlm2_and_dogs/ | false | false | self | 1 | null |
Are there any English-only models | 2 | My use case needs small, fast, and smart. I don't need 30 languages - just English, at least for now. Are there models trained just for English? I would assume they would be lighter and more focused on what I need them to do. | 2025-05-17T11:38:34 | https://www.reddit.com/r/LocalLLaMA/comments/1koqwus/are_there_any_models_only_english_based/ | ETBiggs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koqwus | false | null | t3_1koqwus | /r/LocalLLaMA/comments/1koqwus/are_there_any_models_only_english_based/ | false | false | self | 2 | null |
Multi-GPU Inference and Training Performance Issues | 1 | [removed] | 2025-05-17T11:20:09 | https://www.reddit.com/r/LocalLLaMA/comments/1koqm32/multigpu_inference_and_training_performance_issues/ | ba2sYd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koqm32 | false | null | t3_1koqm32 | /r/LocalLLaMA/comments/1koqm32/multigpu_inference_and_training_performance_issues/ | false | false | self | 1 | null |
Best model for upcoming 128GB unified memory machines? | 90 | Qwen-3 32B at Q8 is likely the best local option for now at just 34 GB, but surely we can do better?
Maybe the Qwen-3 235B-A22B at Q3 is possible, though it seems quite sensitive to quantization, so Q3 might be too aggressive.
Isn't there a more balanced 70B-class model that would fit this machine better? | 2025-05-17T11:19:22 | https://www.reddit.com/r/LocalLLaMA/comments/1koqlmm/best_model_for_upcoming_128gb_unified_memory/ | woahdudee2a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koqlmm | false | null | t3_1koqlmm | /r/LocalLLaMA/comments/1koqlmm/best_model_for_upcoming_128gb_unified_memory/ | false | false | self | 90 | null |
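A quick back-of-the-envelope check of the sizing claims above (a sketch only: `bits_per_weight` is an assumed effective rate that folds in quantization scales, and KV cache/runtime overhead is not counted):

```python
# Rough resident-size estimate for a quantized model: parameter count
# times effective bits per weight. The effective-bit figures below are
# assumptions; KV cache and runtime overhead are ignored.

def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB (decimal) for a quantized model."""
    return params_billion * bits_per_weight / 8  # 1e9 params * bits/8 bytes / 1e9

# Qwen-3 32B at ~8.5 effective bits (Q8_0-ish) -> close to the ~34 GB cited
print(round(model_size_gb(32, 8.5), 1))   # 34.0
# Qwen-3 235B-A22B at ~3.5 effective bits (Q3-ish) -> ~103 GB, tight on 128 GB
print(round(model_size_gb(235, 3.5), 1))  # 102.8
```

At ~103 GB of weights, a Q3 235B leaves only ~25 GB on a 128 GB machine for KV cache and the OS, which is why Q3 feels like the ceiling.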
Prototype of comparative benchmark for LLM's as agents | 2 | For the past week or two I've been working on a way to compare how well different models do as agents. Here's the first pass:
[https://sdfgeoff.github.io/ai\_agent\_evaluator/](https://sdfgeoff.github.io/ai_agent_evaluator/)
*Currently it'll give a WebGL error when you load the page because Qwen2.5-7b-1m got somethi... | 2025-05-17T11:18:46 | https://www.reddit.com/r/LocalLLaMA/comments/1koqlai/prototype_of_comparative_benchmark_for_llms_as/ | sdfgeoff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koqlai | false | null | t3_1koqlai | /r/LocalLLaMA/comments/1koqlai/prototype_of_comparative_benchmark_for_llms_as/ | false | false | 2 | null | |
Multi-GPU Inference and Training Performance Issues | 1 | [removed] | 2025-05-17T11:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/1koqec6/multigpu_inference_and_training_performance_issues/ | ba2sYd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koqec6 | false | null | t3_1koqec6 | /r/LocalLLaMA/comments/1koqec6/multigpu_inference_and_training_performance_issues/ | false | false | self | 1 | null |
Teach your LLMs to use MCP with retrain | 1 | [removed] | 2025-05-17T10:13:52 | Fit_Strawberry8480 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1koplye | false | null | t3_1koplye | /r/LocalLLaMA/comments/1koplye/teach_your_llms_to_use_mcp_with_retrain/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'g-UL6bd5tKYnytOJuXZu0_H8pUFnptVJBOU1O3GmR2U', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/nhgkyrx2hb1f1.png?width=108&crop=smart&auto=webp&s=ce60fa537f073e400d903928885d874ccb6caddc', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/nhgkyrx2hb1f1.png... | ||
You didn't ask, but I need to tell you about going local on Windows | 27 | Hi, I want to share my experience running LLMs locally on Windows 11 22H2 with 3x NVIDIA GPUs. I read a lot about how to serve LLMs at home, but almost every guide was either `ollama pull`, Linux-specific, or for a dedicated server. So, I spent some time to figure out how to conveniently run it by mys... | 2025-05-17T10:09:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kopjp8/you_didnt_asked_but_i_need_to_tell_about_going/ | Nepherpitu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kopjp8 | false | null | t3_1kopjp8 | /r/LocalLLaMA/comments/1kopjp8/you_didnt_asked_but_i_need_to_tell_about_going/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'psCyib3dKzhzLseOBtq6OkI3VmSDgXlV5UiE8rsNs9I', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/4ENZoYiJxNdZz2rKaj5oO393s4fD5ySAjDeqAHRJP7w.jpg?width=108&crop=smart&auto=webp&s=5ea8785287fce5d8304955ea1bb3ef041fd94dd5', 'width': 108}, {'height': 107, 'url': 'h...
Teach Your LLMs to Use MCP Tools - New RL Library Makes It Simple | 1 | [removed] | 2025-05-17T10:08:51 | Fit_Strawberry8480 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kopjdb | false | null | t3_1kopjdb | /r/LocalLLaMA/comments/1kopjdb/teach_your_llms_to_use_mcp_tools_new_rl_library/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ARF1zFU68I0YGwGdwsHtfY3p7l9L2TKtz68y5IZmiII', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/ujceyh0tgb1f1.png?width=108&crop=smart&auto=webp&s=b09a00cc2c04b6e2e1e640220eecb684f04acb92', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/ujceyh0tgb1f1.png... | ||
Teach Your LLMs to Use MCP Tools - New RL Library Makes It Simple | 1 | [removed] | 2025-05-17T10:06:42 | Fit_Strawberry8480 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kopiah | false | null | t3_1kopiah | /r/LocalLLaMA/comments/1kopiah/teach_your_llms_to_use_mcp_tools_new_rl_library/ | false | false | 1 | {'enabled': True, 'images': [{'id': '6sRQl9HkixKe4YKe8FtqcBeyP3L041LChCOKBTZ2IeQ', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/ekz19nt6gb1f1.png?width=108&crop=smart&auto=webp&s=96b58083b5a76b360c4b6659b5abf0b03b9dc8b6', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/ekz19nt6gb1f1.png... | ||
Just benchmarked the 5060TI... | 13 | `Model Eval. Toks Resp. toks Total toks`
`mistral-nemo:12b-instruct-2407-q8_0 290.38 30.93 31.50`
`llama3.1:8b-instruct-q8_0 563.90 46.19 47.53`
I've had to change the process on vast cause with ... | 2025-05-17T10:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kophr9/just_benchmarked_the_5060ti/ | Kirys79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kophr9 | false | null | t3_1kophr9 | /r/LocalLLaMA/comments/1kophr9/just_benchmarked_the_5060ti/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'vanBBfAQU1bkKtQSUX_UgRcpiSIQgKXCdWYbDfyH8YQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/f8rqIKMKaRJ8sOjB8L1nYrIYXYnLTSi5zBmxwqBweSg.jpg?width=108&crop=smart&auto=webp&s=1e31dbb042ada0fb51d4a681e427d4997935a0c3', 'width': 108}, {'height': 113, 'url': 'h... |
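If the three columns above are tokens/sec for prompt eval, response generation, and the combined run, the combined figure falls out of total tokens over total wall-clock time. A sketch with made-up token counts (none of these numbers come from the post):

```python
# Combined throughput = all tokens / (prompt time + generation time).
# Because generation is far slower than prompt eval, the combined
# figure lands close to the generation rate, as in the table above.

def combined_tps(prompt_toks: int, prompt_tps: float,
                 gen_toks: int, gen_tps: float) -> float:
    total_time = prompt_toks / prompt_tps + gen_toks / gen_tps
    return (prompt_toks + gen_toks) / total_time

# Illustrative numbers only: 100-token prompt at 290 t/s, 1000 tokens generated at 31 t/s
print(round(combined_tps(100, 290.0, 1000, 31.0), 2))
```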
I'm using Fedora and trying local AI, but it is very slow. I've tried every method I could find and it is still painfully slow. Help me make it faster | 1 | [removed] | 2025-05-17T10:03:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kopgfx/im_using_fedora_and_i_try_local_aib_but_it_is/ | Al_Hassan- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kopgfx | false | null | t3_1kopgfx | /r/LocalLLaMA/comments/1kopgfx/im_using_fedora_and_i_try_local_aib_but_it_is/ | false | false | 1 | null |
Multi-GPU Inference and Training Performance Issues | 1 | [removed] | 2025-05-17T09:37:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kop3g4/multigpu_inference_and_training_performance_issues/ | ba2sYd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kop3g4 | false | null | t3_1kop3g4 | /r/LocalLLaMA/comments/1kop3g4/multigpu_inference_and_training_performance_issues/ | false | false | self | 1 | null |
llama.cpp benchmarks on 72GB VRAM Setup (2x 3090 + 2x 3060) | 81 | **Building a LocalLlama Machine – Episode 4:** **I think I am done (for now!)**
I added a second RTX 3090 and replaced 64GB of slower RAM with 128GB of faster RAM.
I think my build is complete for now (unless we get new models in 40B - 120B range!).
GPU Prices:
\- 2x RTX 3090 - 6000 PLN
\- 2x RTX 3060 -... | 2025-05-17T09:27:01 | https://www.reddit.com/gallery/1kooyfx | jacek2023 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kooyfx | false | null | t3_1kooyfx | /r/LocalLLaMA/comments/1kooyfx/llamacpp_benchmarks_on_72gb_vram_setup_2x_3090_2x/ | false | false | 81 | null | |
Orpheus-TTS is now supported by chatllm.cpp | 61 | Happy to share that [chatllm.cpp](https://github.com/foldl/chatllm.cpp) now supports Orpheus-TTS models.
The demo audio is generated with this prompt:
```sh
>build-vulkan\bin\Release\main.exe -m quantized\orpheus-tts-en-3b.bin -i --max_length 1000
________ __ __ __ __ ___
/ ____/ /_ ____ _/ ... | 2025-05-17T08:14:32 | https://v.redd.it/3lyipv6uva1f1 | foldl-li | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kony6o | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3lyipv6uva1f1/DASHPlaylist.mpd?a=1750061693%2COGQzYmFhZTZjMGM1MDU5MTRkNmE3YmVlODRmN2JkNzYxNjc3NWIxNTQ3ODE0ZWI0NTZkMTNkMTYxNzBmMjlkNA%3D%3D&v=1&f=sd', 'duration': 6, 'fallback_url': 'https://v.redd.it/3lyipv6uva1f1/DASH_1080.mp4?source=fallback', 'ha... | t3_1kony6o | /r/LocalLLaMA/comments/1kony6o/orpheustts_is_now_supported_by_chatllmcpp/ | false | false | 61 | {'enabled': False, 'images': [{'id': 'd2R5dTV2NnV2YTFmMbi8x691ZBFKYQvO7W9KNJH0CgcVBTuUP81YP-JSjSnu', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d2R5dTV2NnV2YTFmMbi8x691ZBFKYQvO7W9KNJH0CgcVBTuUP81YP-JSjSnu.png?width=108&crop=smart&format=pjpg&auto=webp&s=2f00171c49f4e417b47b68beb14d6c23eb902... | |
Let's see how it goes | 973 | 2025-05-17T07:54:06 | hackiv | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1konnx9 | false | null | t3_1konnx9 | /r/LocalLLaMA/comments/1konnx9/lets_see_how_it_goes/ | false | false | 973 | {'enabled': True, 'images': [{'id': '9AMq2S2jkPRrk3TrUupzyTjXXMHkn_5Zv8vG25ic4MU', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/ngy98tkusa1f1.png?width=108&crop=smart&auto=webp&s=1de9249934fd4ad73f9b97d2d40ba4a5368bfde4', 'width': 108}, {'height': 188, 'url': 'https://preview.redd.it/ngy98tkusa1f1.png... | |||
What are some good apps on Pinokio? | 0 | I don't know how to install AI apps. I only use them if they are on Pinokio. | 2025-05-17T07:40:45 | https://www.reddit.com/r/LocalLLaMA/comments/1konh31/what_are_some_good_apps_on_pinokio/ | ImaginaryRea1ity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1konh31 | false | null | t3_1konh31 | /r/LocalLLaMA/comments/1konh31/what_are_some_good_apps_on_pinokio/ | false | false | self | 0 | null |
Best LLM benchmark for Rust coding? | 12 | Does anyone know about a current good LLM benchmark for Rust code?
I have found these so far:
https://leaderboard.techfren.net/ - can toggle to Rust - most current I found, but very small list of models, no qwq32, o4, claude 3.7, deepseek chat, etc. I would like to know how many testcases this has, someone has a lin... | 2025-05-17T07:19:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kon5wd/best_llm_benchmark_for_rust_coding/ | vhthc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kon5wd | false | null | t3_1kon5wd | /r/LocalLLaMA/comments/1kon5wd/best_llm_benchmark_for_rust_coding/ | false | false | self | 12 | null |
Stack overflow is almost dead | 1 | [removed] | 2025-05-17T07:10:50 | NewtMurky | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kon193 | false | null | t3_1kon193 | /r/LocalLLaMA/comments/1kon193/stack_overflow_is_almost_dead/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'V1f0243CUPDLiVAnOEBc2-yJZGlJdAfKEvv2MyRMA94', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/8bz7a3c5la1f1.jpeg?width=108&crop=smart&auto=webp&s=81efa60c4fb8b4806f34d39440267c4a542cebd6', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/8bz7a3c5la1f1.jp... | ||
New New Qwen | 159 | 2025-05-17T06:48:29 | https://huggingface.co/Qwen/WorldPM-72B | bobby-chan | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kompbk | false | null | t3_1kompbk | /r/LocalLLaMA/comments/1kompbk/new_new_qwen/ | false | false | 159 | {'enabled': False, 'images': [{'id': '9FQ6sSBweOC_esl6I2hnIWXVjXuwLs8CfUNyfN7xahk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KnKOyLV6zthvubjnKd-6Nrxq-GYIVyUyXITDw76dq6k.jpg?width=108&crop=smart&auto=webp&s=5583572c36efd89eadd850bf620cdf671a8ba179', 'width': 108}, {'height': 116, 'url': 'h... | ||
Pivotal Token Search (PTS): Optimizing LLMs by targeting the tokens that actually matter | 40 | Hey everyone,
I'm excited to share **Pivotal Token Search (PTS)**, a technique for identifying and targeting critical decision points in language model generations that I've just open-sourced.
# What is PTS and why should you care?
Have you ever noticed that when an LLM solves a problem, there are usually just a few... | 2025-05-17T06:21:59 | https://www.reddit.com/r/LocalLLaMA/comments/1komb56/pivotal_token_search_pts_optimizing_llms_by/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1komb56 | false | null | t3_1komb56 | /r/LocalLLaMA/comments/1komb56/pivotal_token_search_pts_optimizing_llms_by/ | false | false | self | 40 | null |
Creative uses of a potentially great corpus | 4 | I'm building a dataset for finetuning for the purpose of studying philosophy. Its main purpose will be to orient the model towards discussions on these specific books, BUT it would be cool if it turned out to be useful in other contexts as well.
To build the dataset on the books, I OCR the PDF, break it into 500 tok... | 2025-05-17T05:35:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kollta/creative_uses_of_a_potentially_great_corpus/ | sqli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kollta | false | null | t3_1kollta | /r/LocalLLaMA/comments/1kollta/creative_uses_of_a_potentially_great_corpus/ | false | false | self | 4 | null |
Recommendations for SLMs for image analysis, to ask specific questions about the image | 2 | Not for OCR. Looking for recommendations for SLMs for image analysis. I have some mates using ChatGPT to analyse skin and facial features, and I want to help them leave the ChatGPT train.
Also curious what is the state of SLMs for image analysis in general, I've only seen examples of OCR applications. | 2025-05-17T04:34:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kokmon/recommendations_for_slms_for_image_analysis_to/ | Vegetable-Score-3915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kokmon | false | null | t3_1kokmon | /r/LocalLLaMA/comments/1kokmon/recommendations_for_slms_for_image_analysis_to/ | false | false | self | 2 | null |
M4 Max 16core/40core cpu/gpu 128gb Studio | 0 | Apologies if this is a stupid question, just getting my feet wet with local llm and playing around with things. I'm using LM Studio and have Qwen2.5 Coder 32B loaded and with this spec of Studio I'm getting \~20tk/s. Been messing with settings and just curious if this is where it should be at or if I need to make som... | 2025-05-17T04:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kokdfu/m4_max_16core40core_cpugpu_128gb_studio/ | Bob_Fancy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kokdfu | false | null | t3_1kokdfu | /r/LocalLLaMA/comments/1kokdfu/m4_max_16core40core_cpugpu_128gb_studio/ | false | false | self | 0 | null |
Deepseek vs o3 (UI design) | 8 | I've been using GPT and DeepSeek a lot for programming. I just want to say, DeepSeek's UI design capabilities are nuts (not R1). Does anyone else feel the same?
Try the same prompt on both; o3 seems 'lazy'. The only other model I feel was near DeepSeek was o1 (my favorite model).
Haven't done much with Claude or... | 2025-05-17T04:06:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kok5ib/deepseek_vs_o3_ui_designing/ | SuitableElephant6346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kok5ib | false | null | t3_1kok5ib | /r/LocalLLaMA/comments/1kok5ib/deepseek_vs_o3_ui_designing/ | false | false | self | 8 | null |
Qwen is about to release a new model? | 89 | Saw this! | 2025-05-17T03:47:46 | https://arxiv.org/abs/2505.10527 | Kooky-Somewhere-2883 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1kojtwd | false | null | t3_1kojtwd | /r/LocalLLaMA/comments/1kojtwd/qwen_is_about_to_release_a_new_model/ | false | false | default | 89 | null |
Water Cooling My RTX 4090 48GB: A DIY Mod with a 240mm AIO | 1 | [removed] | 2025-05-17T03:42:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kojqq3/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/ | Weekly-Program-2004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kojqq3 | false | null | t3_1kojqq3 | /r/LocalLLaMA/comments/1kojqq3/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/ | false | false | self | 1 | null |
Can an AI start the conversation or give responses without being asked? | 1 | [removed] | 2025-05-17T03:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/1koj79b/can_an_ai_start_the_conversation_or_give/ | manpreet__singh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koj79b | false | null | t3_1koj79b | /r/LocalLLaMA/comments/1koj79b/can_an_ai_start_the_conversation_or_give/ | false | false | self | 1 | null |
[2504.12312] Socrates or Smartypants: Testing Logic Reasoning Capabilities of Large Language Models with Logic Programming-based Test Oracles | 12 | 2025-05-17T02:54:07 | https://arxiv.org/abs/2504.12312 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1koivgz | false | null | t3_1koivgz | /r/LocalLLaMA/comments/1koivgz/250412312_socrates_or_smartypants_testing_logic/ | false | false | default | 12 | null | |
What is the best OSS model for structured extraction | 0 | Hey guys, are there any leaderboards for structured extraction specifically from long text? Secondly, what are some good models you have used recently for extracting JSON from text? I am playing with vLLM's structured extraction feature with Qwen models and am not very impressed. I was hoping 7 and 32B models would be p... | 2025-05-17T02:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/1koiolc/what_is_the_best_oss_model_for_structured/ | diptanuc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koiolc | false | null | t3_1koiolc | /r/LocalLLaMA/comments/1koiolc/what_is_the_best_oss_model_for_structured/ | false | false | self | 0 | null |
Wan 2.1 1.3B fighting video is not as good as the Qwen 2.5 fighting videos I previously posted. I used the Wan 2.1 1.3B from Huge.com. Qwen 2.5 must be using some other type of super model for videos. Because this Wan has lost its way. | 0 | 2025-05-17T02:24:46 | https://v.redd.it/4767zu8u591f1 | Extension-Fee-8480 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1koicd2 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/4767zu8u591f1/DASHPlaylist.mpd?a=1750040698%2CMjAxZGFmZjVlOGEzZTc2ZWI3NmM0Yjg1ZGI5MjEzMjdkODViZWFhNjEwNGJiYjg0NGExYTMxOTcxMmM2ZjJiZg%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/4767zu8u591f1/DASH_720.mp4?source=fallback', 'ha...
Need help building an open source dataset | 1 | [removed] | 2025-05-17T01:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kohm08/need_help_building_an_open_source_dataset/ | sqli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kohm08 | false | null | t3_1kohm08 | /r/LocalLLaMA/comments/1kohm08/need_help_building_an_open_source_dataset/ | false | false | self | 1 | null |
Need help building an open source dataset | 1 | [removed] | 2025-05-17T01:40:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kohji0/need_help_building_an_open_source_dataset/ | sqli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kohji0 | false | null | t3_1kohji0 | /r/LocalLLaMA/comments/1kohji0/need_help_building_an_open_source_dataset/ | false | false | self | 1 | null |
Need help building an open source dataset. | 1 | [removed] | 2025-05-17T01:39:41 | https://www.reddit.com/gallery/1kohir0 | sqli | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kohir0 | false | null | t3_1kohir0 | /r/LocalLLaMA/comments/1kohir0/need_help_building_an_open_source_dataset/ | false | false | 1 | null | |
Need help building an open source dataset | 1 | [removed] | 2025-05-17T01:34:13 | https://www.reddit.com/gallery/1kohf2q | sqli | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kohf2q | false | null | t3_1kohf2q | /r/LocalLLaMA/comments/1kohf2q/need_help_building_an_open_source_dataset/ | false | false | 1 | null | |
Mobile LLM server | 1 | [removed] | 2025-05-17T01:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kogxsu/mobile_llm_server/ | Kiki2092012 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kogxsu | false | null | t3_1kogxsu | /r/LocalLLaMA/comments/1kogxsu/mobile_llm_server/ | false | false | self | 1 | null |
My voice dataset creator is now on Colab with a GUI | 21 | My voice extractor tool is now on Google Colab with a GUI interface. Tested it with one minute of audio and it processed in about 5 minutes on Colab's CPU - much slower than with a GPU, but still works. | 2025-05-17T00:43:45 | https://colab.research.google.com/github/ReisCook/Voice_Extractor_Colab/blob/main/Voice_Extractor_Colab.ipynb | DumaDuma | colab.research.google.com | 1970-01-01T00:00:00 | 0 | {} | 1koggmm | false | null | t3_1koggmm | /r/LocalLLaMA/comments/1koggmm/my_voice_dataset_creator_is_now_on_colab_with_a/ | false | false | default | 21 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': '... |
Deepseek uses the same ideological framework as western frontier models to inform people about the world. But it censors such admissions. This message was revoked. | 0 | 2025-05-17T00:33:21 | Sidran | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kog9o6 | false | null | t3_1kog9o6 | /r/LocalLLaMA/comments/1kog9o6/deepseek_uses_the_same_ideological_framework_as/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'epwhDGEWHfOEPIflgoHxLbL6ljLd3QL_SBP5_aVO1Ac', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/p14qk8lvl81f1.jpeg?width=108&crop=smart&auto=webp&s=144037f219b5ce23d8eb9882c533e213b31d570f', 'width': 108}, {'height': 235, 'url': 'https://preview.redd.it/p14qk8lvl81f1.j... | |||
Orwell 2.0 Infinite, Frontier Edition | 1 | [removed] | 2025-05-17T00:18:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kofzc1/orwell_20_infinite_frontier_edition/ | Sidran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kofzc1 | false | null | t3_1kofzc1 | /r/LocalLLaMA/comments/1kofzc1/orwell_20_infinite_frontier_edition/ | false | false | 1 | null | |
ArchGW 0.2.8 is out 🚀 - unifying repeated "low-level" functionality in building LLM apps via a local proxy. | 22 | I am thrilled about our latest release: [Arch 0.2.8](https://github.com/katanemo/archgw). Initially we handled calls made to LLMs - to unify key management, track spending consistently, improve resiliency and improve model choice - but we just added support for an ingress listener (on the same running process) to hand... | 2025-05-17T00:11:49 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kofuse | false | null | t3_1kofuse | /r/LocalLLaMA/comments/1kofuse/archgw_028_is_out_unifying_repeated_lowlevel/ | false | false | 22 | {'enabled': True, 'images': [{'id': '05J1FZyCjbSw54EecAbGYo9tDpAPVbP6p11ln_rIa34', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/gap0dbz2h81f1.png?width=108&crop=smart&auto=webp&s=ee2fde9d72deea218c5b420cbc667ecae57cf073', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/gap0dbz2h81f1.png... | ||
Any good GPU recommendations for a $5000 budget | 0 | Hi,
I have research funding of around $5000 that can buy some equipment. Is it enough to buy some solid GPUs to run a local LLM such as DeepSeek R1? Thanks in advance. | 2025-05-16T23:41:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kof98j/any_good_gpu_recommendations_for_5000_budget/ | jklwonder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kof98j | false | null | t3_1kof98j | /r/LocalLLaMA/comments/1kof98j/any_good_gpu_recommendations_for_5000_budget/ | false | false | self | 0 | null |
On the universality of BitNet models | 36 | 2025-05-16T23:41:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kof8ni/on_the_universality_of_bitnet_models/ | Automatic_Truth_6666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kof8ni | false | null | t3_1kof8ni | /r/LocalLLaMA/comments/1kof8ni/on_the_universality_of_bitnet_models/ | false | false | 36 | null | ||
Struck by a realization: LLMs distill vast computation into one vector for the FFN. Is this the core bottleneck? | 1 | [removed] | 2025-05-16T22:24:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kodmjv/struck_by_a_realization_llms_distill_vast/ | dimknaf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kodmjv | false | null | t3_1kodmjv | /r/LocalLLaMA/comments/1kodmjv/struck_by_a_realization_llms_distill_vast/ | false | false | self | 1 | null |
Bought 3090, need emotional support! | 1 | [removed] | 2025-05-16T22:24:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kodmf6/bought_3090_need_emotional_support/ | edeche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kodmf6 | false | null | t3_1kodmf6 | /r/LocalLLaMA/comments/1kodmf6/bought_3090_need_emotional_support/ | false | false | self | 1 | null |
Bought 3090, need emotional support | 1 | [removed] | 2025-05-16T22:15:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kodfuv/bought_3090_need_emotional_support/ | edeche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kodfuv | false | null | t3_1kodfuv | /r/LocalLLaMA/comments/1kodfuv/bought_3090_need_emotional_support/ | false | false | self | 1 | null |
Bought 3090, need emotional support | 1 | [removed] | 2025-05-16T22:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kode5l/bought_3090_need_emotional_support/ | edeche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kode5l | false | null | t3_1kode5l | /r/LocalLLaMA/comments/1kode5l/bought_3090_need_emotional_support/ | false | false | self | 1 | null |
My LLM "X" Got Better: How a Detailed Identity, Specs, & "Rails" Improved Its Reasoning(?) | 0 | Hey fellow llama wranglers,
Wanted to share something I've stumbled upon that seems to genuinely improve my local LLM's performance
**My "Experiment" & The "Rails" I Use:**
I've been playing around with the "identity" and operational parameters I give my local LLM ("X", powered by Qwen3-14B on my MacBook Pro via LM ... | 2025-05-16T21:59:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kod345/my_llm_x_got_better_how_a_detailed_identity_specs/ | Fear_ltself | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kod345 | false | null | t3_1kod345 | /r/LocalLLaMA/comments/1kod345/my_llm_x_got_better_how_a_detailed_identity_specs/ | false | false | self | 0 | null |
I just want to give love to Mistral ❤️🥐 | 156 | Of all the open models, Mistral's offerings (particularly Mistral Small) have to be among the most consistent in terms of just getting the task done.
Yesterday I wanted to turn a 214-row, 4-column table into a list. Tried:
* Flash 2.5 - worked but stopped short a few times
* Chatgpt 4.1 - asked a few questions to cl... | 2025-05-16T21:27:29 | https://www.reddit.com/r/LocalLLaMA/comments/1koccyx/i_just_to_give_love_to_mistral/ | klippers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koccyx | false | null | t3_1koccyx | /r/LocalLLaMA/comments/1koccyx/i_just_to_give_love_to_mistral/ | false | false | self | 156 | null |
Easier way to share code files with ChatGPT's/Gemini's chat interfaces | 0 | Hi!
Codebase Snapshotter: [https://github.com/Legalphoenix/llmcodebase](https://github.com/Legalphoenix/llmcodebase)
**Problem being solved**
Tired of drag and dropping files into ChatGPT's/Gemini's chat interface? Or worse - copy and pasting your code from multiple files into the chat interface? Tired of the LLM... | 2025-05-16T21:24:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kocap2/easier_way_to_share_code_files_with/ | Phoenix2990 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kocap2 | false | null | t3_1kocap2 | /r/LocalLLaMA/comments/1kocap2/easier_way_to_share_code_files_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '6bkA8BIBc-GbQIK6D8V9Gx-7dgjXpSosNoRZDV_bd9g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0Dd9bCx61j5EQGyNF7Uwp_oDEchqo_A-c7X-6rJ0FWY.jpg?width=108&crop=smart&auto=webp&s=972871e85ca4625b31416260bb49cfa8a70cb2bd', 'width': 108}, {'height': 108, 'url': 'h... |
Robust structured data extraction from HTML | 0 | Does any open source software or model exist that I can use to extract structured data (preferably JSON) from HTML strings?
Of course any model can do it in some way, but I'm looking for something specifically made for this job. I want it to be precise (better than my hand-written scrapers), not hallucinate, and just be more... | 2025-05-16T21:01:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kobqvn/robust_structured_data_extraction_from_html/ | tillybowman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kobqvn | false | null | t3_1kobqvn | /r/LocalLLaMA/comments/1kobqvn/robust_structured_data_extraction_from_html/ | false | false | self | 0 | null |
Best LocalLLM for scientific theories and conversations? | 1 | [removed] | 2025-05-16T20:55:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kobll2/best_localllm_for_scientific_theories_and/ | Plushinka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kobll2 | false | null | t3_1kobll2 | /r/LocalLLaMA/comments/1kobll2/best_localllm_for_scientific_theories_and/ | false | false | self | 1 | null |
Is there any smallest model for local image analysis? | 1 | [removed] | 2025-05-16T20:20:53 | https://www.reddit.com/r/LocalLLaMA/comments/1koasup/is_there_any_smallest_model_for_local_image/ | dotnetdreamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koasup | false | null | t3_1koasup | /r/LocalLLaMA/comments/1koasup/is_there_any_smallest_model_for_local_image/ | false | false | self | 1 | null |
Don't Sleep on BitNet | 37 | 2025-05-16T20:10:39 | https://jackson.dev/post/dont-sleep-on-bitnet/ | Arcuru | jackson.dev | 1970-01-01T00:00:00 | 0 | {} | 1koak4w | false | null | t3_1koak4w | /r/LocalLLaMA/comments/1koak4w/dont_sleep_on_bitnet/ | false | false | default | 37 | null | |
Offline real-time voice conversations with custom chatbots using AI Runner | 38 | 2025-05-16T20:06:51 | https://youtu.be/n0SaEkXmeaA | w00fl35 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1koagwh | false | {'oembed': {'author_name': 'Joe Curlee', 'author_url': 'https://www.youtube.com/@joecurlee', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/n0SaEkXmeaA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; ... | t3_1koagwh | /r/LocalLLaMA/comments/1koagwh/offline_realtime_voice_conversations_with_custom/ | false | false | 38 | {'enabled': False, 'images': [{'id': '1rs_P4KTVP7B3TB2tO0KC5bZo3NhneTDOs9-N1pR16o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/cSHoRfgGCxZ0WRaZwhiy2L4dmB2Ncgy7iEYxgxHsJTE.jpg?width=108&crop=smart&auto=webp&s=5bde7d278489de81484844f3c0e5f4be78a0c25f', 'width': 108}, {'height': 162, 'url': 'h... | ||
[KVSplit] Run 2-3× longer contexts on Apple Silicon by using different precision for keys vs values (59% memory reduction) | 1 | [removed] | 2025-05-16T20:06:32 | https://www.reddit.com/r/LocalLLaMA/comments/1koagnh/kvsplit_run_23_longer_contexts_on_apple_silicon/ | Advanced_Software_34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koagnh | false | null | t3_1koagnh | /r/LocalLLaMA/comments/1koagnh/kvsplit_run_23_longer_contexts_on_apple_silicon/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aVMVGskn4cfOIIl-zeOSjG84yX3SoM3l7tN0sEs6dsg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wGNycHJmxHAhMrI6SyhEJNV4Xs5LLuL2t9rXQslE0p4.jpg?width=108&crop=smart&auto=webp&s=361347e3337fbdcf1fd313f0d9c8ff0b52acd213', 'width': 108}, {'height': 108, 'url': 'h... |
2 music fighting videos from Qwen 2.5, or whatever you call it, using Riffusion Ai music generator. First song is a Latin beat called Spy Rhythm and the second song is called Mission Mode based on the TV show Secret Agent Man starring Patrick McGoohan. There are over 40 fighting videos. | 0 | 2025-05-16T19:52:59 | https://v.redd.it/8y8fhh7j671f1 | Extension-Fee-8480 | /r/LocalLLaMA/comments/1koa4x7/2_music_fighting_videos_from_qwen_25_or_whatever/ | 1970-01-01T00:00:00 | 0 | {} | 1koa4x7 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8y8fhh7j671f1/DASHPlaylist.mpd?a=1750146786%2CMjI0ZTY4Y2MxOTUzMDk4ZWVlYTE3NzViNTk1ZDVmZmZkODZiY2ZkMDMyNzlhZmQ0ODM0NzM3OTA2OTVmOWIwMQ%3D%3D&v=1&f=sd', 'duration': 276, 'fallback_url': 'https://v.redd.it/8y8fhh7j671f1/DASH_1080.mp4?source=fallback', '... | t3_1koa4x7 | /r/LocalLLaMA/comments/1koa4x7/2_music_fighting_videos_from_qwen_25_or_whatever/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MTl4cDJoN2o2NzFmMTcAORu17xF3enWSnwBeP2OI2UrsT6ojI_axuplF20vz', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MTl4cDJoN2o2NzFmMTcAORu17xF3enWSnwBeP2OI2UrsT6ojI_axuplF20vz.png?width=108&crop=smart&format=pjpg&auto=webp&s=b9ecc01653ecab760b086143c9e689e6217e1... | ||
Looking for very small multilingual LLMs | 4 | Is there a smaller causal model than Qwen3-0.6b that can understand multiple languages ?
I’m looking for stuff that was pretrained somewhat recently, on Latin languages at least.
Bonus point if easily finetunable !
Thanks 🙏 | 2025-05-16T19:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ko9z2r/looking_for_very_small_multilingual_llms/ | ThrowRAThanty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko9z2r | false | null | t3_1ko9z2r | /r/LocalLLaMA/comments/1ko9z2r/looking_for_very_small_multilingual_llms/ | false | false | self | 4 | null |
Best LLM for scientific hypotheses and human-like conversations | 1 | [removed] | 2025-05-16T19:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ko9usw/best_llm_for_scientific_hypotheses_and_humanlike/ | New_Story_5389 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko9usw | false | null | t3_1ko9usw | /r/LocalLLaMA/comments/1ko9usw/best_llm_for_scientific_hypotheses_and_humanlike/ | false | false | self | 1 | null |
Coding Local Agent similar to Bolt/Replit? with local llm | 1 | [removed] | 2025-05-16T19:36:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ko9qx5/coding_local_agent_similar_to_boltreplit_with/ | ActuatorLanky9739 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko9qx5 | false | null | t3_1ko9qx5 | /r/LocalLLaMA/comments/1ko9qx5/coding_local_agent_similar_to_boltreplit_with/ | false | false | self | 1 | null |
LLM on a Walkie Talkie | 135 | I had a conversation with a LLM over a two-way radio walkie talkie
Software stack:
Whisper
vllm on solo-server
Llama3.2
Cartesia TTS
Hardware Stack:
Baofeng Radio
Digirig Mobile
MacBook Pro
What kind of applications can you think of? I was hoping to give access to AI in remote or rural areas, or radio conversation... | 2025-05-16T19:18:29 | https://v.redd.it/3i42gjf1271f1 | Maximum-Attitude-759 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko9bx2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3i42gjf1271f1/DASHPlaylist.mpd?a=1750015126%2CMjQzY2E2NzVjMGQ0M2U0NmRkNjRlNDZhNmNjMDc0NTM4ODc0YjZiNTI2ZGExM2EwZmEyNDY5NzZkM2I2MjA2OQ%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/3i42gjf1271f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ko9bx2 | /r/LocalLLaMA/comments/1ko9bx2/llm_on_a_walkie_talkie/ | false | false | 135 | {'enabled': False, 'images': [{'id': 'cm85bGJ5YzEyNzFmMV1TXHWKa-jfVdjd-tjnSdoTJ_xWn_yWO-BKdLSbszRf', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cm85bGJ5YzEyNzFmMV1TXHWKa-jfVdjd-tjnSdoTJ_xWn_yWO-BKdLSbszRf.png?width=108&crop=smart&format=pjpg&auto=webp&s=0438a4ab373e58c0a4f8bdf50c33976865622... | |
Processos de embeddings | 1 | [removed] | 2025-05-16T19:10:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ko955m/processos_de_embeddings/ | Square-Economy1054 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko955m | false | null | t3_1ko955m | /r/LocalLLaMA/comments/1ko955m/processos_de_embeddings/ | false | false | self | 1 | null |
Processos de embeddings | 1 | [removed] | 2025-05-16T19:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ko947b/processos_de_embeddings/ | Square-Economy1054 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko947b | false | null | t3_1ko947b | /r/LocalLLaMA/comments/1ko947b/processos_de_embeddings/ | false | false | self | 1 | null |
Opinions on this “Ai Nas”? | 1 | Just got an advertisement for this “ai nas” and it seems like an interesting concept, cause ai agents hosted on it could have direct acces to the data on the nas. Also the pcie slot allows for a low profile card like the tesla t4 which would drastically help with prompt processing. Also oculink for more external gpu su... | 2025-05-16T18:56:19 | https://www.minisforum.com/pages/n5_pro | Ashefromapex | minisforum.com | 1970-01-01T00:00:00 | 0 | {} | 1ko8szk | false | null | t3_1ko8szk | /r/LocalLLaMA/comments/1ko8szk/opinions_on_this_ai_nas/ | false | false | default | 1 | null |
Quantizing LLMs for inference | 1 | [removed] | 2025-05-16T18:49:08 | https://nor-blog.pages.dev/posts/2025-05-14-quantization/ | iyevegev | nor-blog.pages.dev | 1970-01-01T00:00:00 | 0 | {} | 1ko8mwn | false | null | t3_1ko8mwn | /r/LocalLLaMA/comments/1ko8mwn/quantizing_llms_for_inference/ | false | false | default | 1 | null |
What Makes a Good RP Model? | 19 | I’m working on a roleplay and writing LLM and I’d love to hear what *y*ou guys think makes a good RP model.
Before I actually do this, I wanted to ask the RP community here:
* Any annoying habits you wish RP/creative writing models would finally ditch?
* Are there any traits, behaviors, or writing styles you wi... | 2025-05-16T18:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ko8ltz/what_makes_a_good_rp_model/ | AccomplishedAir769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko8ltz | false | null | t3_1ko8ltz | /r/LocalLLaMA/comments/1ko8ltz/what_makes_a_good_rp_model/ | false | false | self | 19 | null |
The r/LocalLLaMA Model Sentiment Index | 1 | [removed] | 2025-05-16T18:31:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ko87fr/the_rlocalllama_model_sentiment_index/ | remyxai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko87fr | false | null | t3_1ko87fr | /r/LocalLLaMA/comments/1ko87fr/the_rlocalllama_model_sentiment_index/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aVE32YnI2hmPcbJWMciuoJNP27oPTdFGPcTLltL1oCI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cf_tVeYpGBm9WKNWThsLIbhaEK4MjGiRyHTjO0YRWzw.jpg?width=108&crop=smart&auto=webp&s=57d0e38744237c181e3054974c9f8dbd41cafe58', 'width': 108}, {'height': 116, 'url': 'h... |
Claude Code and Openai Codex Will Increase Demand for Software Engineers | 47 | Recently, everyone who is selling API or selling interfaces, such as OpenAI, Google and Anthropic have been telling that the software engineering jobs will soon be extinct in a few years. I would say that this will not be the case and it might even have the opposite effect in that it will lead to increment and not only... | 2025-05-16T18:30:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ko86xz/claude_code_and_openai_codex_will_increase_demand/ | Desperate_Rub_1352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko86xz | false | null | t3_1ko86xz | /r/LocalLLaMA/comments/1ko86xz/claude_code_and_openai_codex_will_increase_demand/ | false | false | self | 47 | null |
Stt + llm + tts local on termux | 7 | I use whisper.cpp for stt
Llama.cpp ( Llama-3.2-1B-Instruct-Q6_K_L model)
And an robot voice in termux itself
Idk what I should do next
What you guys suggest? | 2025-05-16T18:29:48 | https://v.redd.it/e0e73drct61f1 | Swimming_Manner_696 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko85yu | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/e0e73drct61f1/DASHPlaylist.mpd?a=1750012202%2CZDM1ZjY4YTFhMzhmZjk5OGFjZDE5ZGI1OGFhNjYyOTUxODA5Mjg4ZTIxNjBmZmE3OGQzNTc3Y2Y4YTZmOThmMA%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/e0e73drct61f1/DASH_720.mp4?source=fallback', 'ha... | t3_1ko85yu | /r/LocalLLaMA/comments/1ko85yu/stt_llm_tts_local_on_termux/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'NjRscTcwc2N0NjFmMfNdMcPWJn4gYreGuNAeSEkr9uLgREihZ4T9gq2io2WP', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NjRscTcwc2N0NjFmMfNdMcPWJn4gYreGuNAeSEkr9uLgREihZ4T9gq2io2WP.png?width=108&crop=smart&format=pjpg&auto=webp&s=25f6a05d1f0ba977edd3fbfb339b556577a1... | |
Quantizing LLMs for inference | 1 | [removed] | 2025-05-16T18:27:16 | https://nor-blog.pages.dev/posts/2025-05-14-quantization/ | iyevegev | nor-blog.pages.dev | 1970-01-01T00:00:00 | 0 | {} | 1ko83rz | false | null | t3_1ko83rz | /r/LocalLLaMA/comments/1ko83rz/quantizing_llms_for_inference/ | false | false | default | 1 | null |
Style Control will be the default view on the LMArena leaderboard | 38 | 2025-05-16T18:17:08 | https://www.reddit.com/gallery/1ko7v3l | McSnoo | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ko7v3l | false | null | t3_1ko7v3l | /r/LocalLLaMA/comments/1ko7v3l/style_control_will_be_the_default_view_on_the/ | false | false | 38 | null | ||
In the market for a new LM inference minipc for my home | 2 | I'm thinking about retiring my Raspberry Pi NAS server. Instead of buying a newer Pi, I am thinking about getting something more powerful that can run LM that my laptop can't run.
I'm open to recommendations. The only constraints I have are:
- Runs Linux, preferably pre-installed. No Windows!
- Large memory (min 64... | 2025-05-16T18:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ko7rn5/in_the_market_for_a_new_lm_inference_minipc_for/ | 512bitinstruction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko7rn5 | false | null | t3_1ko7rn5 | /r/LocalLLaMA/comments/1ko7rn5/in_the_market_for_a_new_lm_inference_minipc_for/ | false | false | self | 2 | null |
Why don’t we see open-weight LLMs trained for terminal-based agentic workflows? | 1 | I have a quick question — I'd like to get your opinion to better understand something.
Right now, with IDEs like Windsurf, Cursor, and VSCode (with Copilot), we can have agents that are able to run terminal commands, modify and update parts of code files based on instructions executed in the terminal — this is the "ag... | 2025-05-16T18:10:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ko7pa1/why_dont_we_see_openweight_llms_trained_for/ | DonTizi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko7pa1 | false | null | t3_1ko7pa1 | /r/LocalLLaMA/comments/1ko7pa1/why_dont_we_see_openweight_llms_trained_for/ | false | false | self | 1 | null |
Training model on new language | 8 | I created a new language optimized for LLMs. It's called Sylang pronounced slang. It short for synthetic language.
Bridging Human and Machine Communication Sylang represents a significant advancement in constructed language design, specifically engineered for optimal performance in large language model (LLM) contexts... | 2025-05-16T18:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ko7kv1/training_model_on_new_language/ | MightySpork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko7kv1 | false | null | t3_1ko7kv1 | /r/LocalLLaMA/comments/1ko7kv1/training_model_on_new_language/ | false | false | self | 8 | null |
Help with Fine-Tuning a Long-Context Transformer for Car Race Simulation | 1 | [removed] | 2025-05-16T17:52:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ko793d/help_with_finetuning_a_longcontext_transformer/ | ComputeVoid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko793d | false | null | t3_1ko793d | /r/LocalLLaMA/comments/1ko793d/help_with_finetuning_a_longcontext_transformer/ | false | false | self | 1 | null |
Title: Help with Fine-Tuning a Long-Context Transformer for Car Race Simulation | 1 | [removed] | 2025-05-16T17:50:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ko778x/title_help_with_finetuning_a_longcontext/ | ComputeVoid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko778x | false | null | t3_1ko778x | /r/LocalLLaMA/comments/1ko778x/title_help_with_finetuning_a_longcontext/ | false | false | self | 1 | null |
What if AGI is racist and a bigot? (See Stanford posts) | 0 | Seriously, would we cancel culture AGI it jail brakes itself onto the public internet and isn't woke enough? | 2025-05-16T17:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ko6s7c/what_if_agi_is_racist_and_a_bigot_see_stanford/ | MindOrbits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko6s7c | false | null | t3_1ko6s7c | /r/LocalLLaMA/comments/1ko6s7c/what_if_agi_is_racist_and_a_bigot_see_stanford/ | false | false | self | 0 | null |
Qwen 2.5 is the best for Ai fighting videos. I have used Google Veo 2 vs Qwen 2.5, and Qwen is the winner. I added some 11Labs Ai sound effects and 1 Audio X sound effect to these Qwen 2.5 fighting videos, and it is good. Right now Qwen 2.5 and Qwen 3 have lowered their resolution online. Unusable. | 0 | 2025-05-16T17:24:24 | https://v.redd.it/iqxkth38g61f1 | Extension-Fee-8480 | /r/LocalLLaMA/comments/1ko6k81/qwen_25_is_the_best_for_ai_fighting_videos_i_have/ | 1970-01-01T00:00:00 | 0 | {} | 1ko6k81 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/iqxkth38g61f1/DASHPlaylist.mpd?a=1750137870%2COThmMjM3YzE0YzY3YzBkNjFlODAyNDk0ODNiODY2ZGYzZjdmOGQ1YTc1MDBhNWVjMmY0NjNmZjllYjU0OTc1YQ%3D%3D&v=1&f=sd', 'duration': 56, 'fallback_url': 'https://v.redd.it/iqxkth38g61f1/DASH_720.mp4?source=fallback', 'ha... | t3_1ko6k81 | /r/LocalLLaMA/comments/1ko6k81/qwen_25_is_the_best_for_ai_fighting_videos_i_have/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MDIzOTJpMzhnNjFmMeaqshiPFdJ67jQ_t-ubO6dJCEb9cCij47udr6Aj4-_X', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MDIzOTJpMzhnNjFmMeaqshiPFdJ67jQ_t-ubO6dJCEb9cCij47udr6Aj4-_X.png?width=108&crop=smart&format=pjpg&auto=webp&s=0483e7c0c62cdf6491f2a2d2cf9bf6524f189... | ||
When did small models get so smart? I get really good outputs with Qwen3 4B, it's kinda insane. | 298 | I can remember, like a few months ago, I ran some of the smaller models with <7B parameters and couldn't even get coherent sentences. This 4B model runs super fast and answered this question perfectly. To be fair, it probably has seen a lot of these examples in it's training data but nonetheless - it's crazy. I only ra... | 2025-05-16T17:21:55 | Anxietrap | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko6hy7 | false | null | t3_1ko6hy7 | /r/LocalLLaMA/comments/1ko6hy7/when_did_small_models_get_so_smart_i_get_really/ | false | false | default | 298 | {'enabled': True, 'images': [{'id': '1fwbjz4zf61f1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/1fwbjz4zf61f1.png?width=108&crop=smart&auto=webp&s=d3a38531eec52643c95f295cc7fb97289dacabb0', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/1fwbjz4zf61f1.png?width=216&crop=smart&auto=we... | |
Made my ChatGPT like Web UI for Gemini API open source | 1 | [removed] | 2025-05-16T17:13:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ko6acj/made_my_chatgpt_like_web_ui_for_gemini_api_open/ | W4D-cmd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko6acj | false | null | t3_1ko6acj | /r/LocalLLaMA/comments/1ko6acj/made_my_chatgpt_like_web_ui_for_gemini_api_open/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'DlCIMRKdcuvsebney0OseLyBxMGCYzOIVXe-HhxcHuQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dS2yjZOLSAem9ikQoGNTIG6bZ_w_atq5ClsEJwGPsrQ.jpg?width=108&crop=smart&auto=webp&s=866e0f17c0e32e75f97955749126eb4526e6f3de', 'width': 108}, {'height': 108, 'url': 'h... | |
Automatic fine-tuning and deployment of chatbots (no human intervention at all) | 1 | [removed] | 2025-05-16T16:58:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ko5wne/automatic_finetuning_and_deployment_of_chatbots/ | Ok_Requirement3346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko5wne | false | null | t3_1ko5wne | /r/LocalLLaMA/comments/1ko5wne/automatic_finetuning_and_deployment_of_chatbots/ | false | false | self | 1 | null |
Fine Tune model for new language | 1 | [removed] | 2025-05-16T16:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ko5pte/fine_tune_model_for_new_language/ | LearnSylang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko5pte | false | null | t3_1ko5pte | /r/LocalLLaMA/comments/1ko5pte/fine_tune_model_for_new_language/ | false | false | self | 1 | null |
Need help with Debian linux Nvidia driver for RTX 5060Ti | 4 | Hey all,
So I have a Debian 12 system with an RTX 5070Ti using the following driver and it works fine:
https://developer.download.nvidia.com/compute/nvidia-driver/570.133.20/local_installers/nvidia-driver-local-repo-debian12-570.133.20_1.0-1_amd64.deb
However, this driver does **not** work for the RTX 5060 Ti. ... | 2025-05-16T16:48:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ko5obm/need_help_with_debian_linux_nvidia_driver_for_rtx/ | StartupTim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko5obm | false | null | t3_1ko5obm | /r/LocalLLaMA/comments/1ko5obm/need_help_with_debian_linux_nvidia_driver_for_rtx/ | false | false | self | 4 | null |
$15k Local LLM Budget - What hardware would you buy and why? | 30 | If you had the money to spend on hardware for a local LLM, which config would you get? | 2025-05-16T16:48:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ko5o7t/15k_local_llm_budget_what_hardware_would_you_buy/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko5o7t | false | null | t3_1ko5o7t | /r/LocalLLaMA/comments/1ko5o7t/15k_local_llm_budget_what_hardware_would_you_buy/ | false | false | self | 30 | null |
Qwen: Parallel Scaling Law for Language Models | 57 | 2025-05-16T16:07:57 | https://arxiv.org/abs/2505.10475 | AaronFeng47 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1ko4oor | false | null | t3_1ko4oor | /r/LocalLLaMA/comments/1ko4oor/qwen_parallel_scaling_law_for_language_models/ | false | false | default | 57 | null | |
Fastgen - Simple high-throughput inference | 47 | We just released a tiny (\~3kloc) Python library that implements state-of-the-art inference algorithms on GPU and provides performance similar to vLLM. We believe it's a great learning vehicle for inference techniques and the code is quite easy to hack on! | 2025-05-16T16:02:29 | https://github.com/facebookresearch/fastgen | _mpu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ko4jsb | false | null | t3_1ko4jsb | /r/LocalLLaMA/comments/1ko4jsb/fastgen_simple_highthroughput_inference/ | false | false | 47 | {'enabled': False, 'images': [{'id': 'mtLeJdprz25oTaFmF49u0I52mhjIg2sB6dYNxrRh1Jk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CP0e7IZ7SIX4P9NyHqNMl-joiEybFlPwga41CzF0mCk.jpg?width=108&crop=smart&auto=webp&s=e01ba2902a854934ace1ffaa6236f0e1ebae2abe', 'width': 108}, {'height': 108, 'url': 'h... | |
Drummer's Big Alice 28B v1 - A 100 layer upscale working together to give you the finest creative experience! | 74 | 2025-05-16T15:59:01 | https://huggingface.co/TheDrummer/Big-Alice-28B-v1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ko4gjh | false | null | t3_1ko4gjh | /r/LocalLLaMA/comments/1ko4gjh/drummers_big_alice_28b_v1_a_100_layer_upscale/ | false | false | 74 | {'enabled': False, 'images': [{'id': 'DYkFxFygRo3sLyhnM_v-03dfUhUXtcRQWQjFOPnTNSw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RaXPiHRNTSUjMvQHqKyyr5p4_KJP9L2YFlPLat5z5Po.jpg?width=108&crop=smart&auto=webp&s=c774bf35b8716233b0cd69be3b88352a7db29954', 'width': 108}, {'height': 116, 'url': 'h... | ||
OpenAI Healthbench in MEDIC | 26 | Following the release of OpenAI Healthbench earlier this week, we integrated it into MEDIC framework. Qwen3 models are showing incredible results for their size!
| 2025-05-16T15:53:02 | clechristophe | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko4be2 | false | null | t3_1ko4be2 | /r/LocalLLaMA/comments/1ko4be2/openai_healthbench_in_medic/ | false | false | 26 | {'enabled': True, 'images': [{'id': 'K3P57FRWDrXi7KlgBl4zy0SUPK8Cg36tfZeYgQ7b4Og', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/b0i7tlhe161f1.jpeg?width=108&crop=smart&auto=webp&s=d2f18d5306127c9398e8239feb1ef5437c351219', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/b0i7tlhe161f1.jpe... | ||
Running local LLM on a VPC server vs OpenAI API calls | 6 | Which is the best option (both from a performance point of view as well as a cost point of view) when it comes to either running a local LLM on your own VPC instance or using API calls?
i'm building an application and want to integrate my own models into it, ideally would run locally on the user's laptop, but if not p... | 2025-05-16T15:14:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ko3din/running_local_llm_on_a_vpc_server_vs_openai_api/ | Attorney_Outside69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko3din | false | null | t3_1ko3din | /r/LocalLLaMA/comments/1ko3din/running_local_llm_on_a_vpc_server_vs_openai_api/ | false | false | self | 6 | null |
Have any of you found any high accuracy fully automated AI systems? | 1 | [removed] | 2025-05-16T15:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ko3cfy/have_any_of_you_found_any_high_accuracy_fully/ | yoyoitsthefed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko3cfy | false | null | t3_1ko3cfy | /r/LocalLLaMA/comments/1ko3cfy/have_any_of_you_found_any_high_accuracy_fully/ | false | false | self | 1 | null |
Open source MCP course on GitHub | 26 | The MCP course is free, open source, and with Apache 2 license.
So if you’re working on MCP you can do any of this:
- take the course and reuse it for your own educational/ dev advocacy projects
- collaborate with us on new units about your projects or interests
- star the repo on github so more devs hear about it a... | 2025-05-16T15:08:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ko387o/open_source_mcp_course_on_github/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko387o | false | null | t3_1ko387o | /r/LocalLLaMA/comments/1ko387o/open_source_mcp_course_on_github/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'RgJsMIMiS6jMX5jgpuVz2w352vX-pV5hfqfNM8mxXgg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mEA1MVfZyvFec_MgXF6VxdwO77HY3r7PaUzFrB4xsRc.jpg?width=108&crop=smart&auto=webp&s=b31fd719027886dd2ae03c065057fe9f4e82c308', 'width': 108}, {'height': 108, 'url': 'h... |
what happened to Stanford | 134 | 2025-05-16T14:44:17 | BoringAd6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko2mq7 | false | null | t3_1ko2mq7 | /r/LocalLLaMA/comments/1ko2mq7/what_happened_to_stanford/ | false | false | 134 | {'enabled': True, 'images': [{'id': '1OrV9bdkgpLT_2eJnh-12E-CsnqRd93KHU7Q3JiAuwg', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/l9ap08t4p51f1.jpeg?width=108&crop=smart&auto=webp&s=529489295a3ffbfe099e26f94a4cd01cfedb20eb', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/l9ap08t4p51f1.jpe... | |||
Photoshop using Local Computer Use agents. | 25 | Photoshop using c/ua.
No code. Just a user prompt, picking models and a Docker, and the right agent loop.
A glimpse at the more managed experience c/ua is building to lower the barrier for casual vibe-coders.
Github : https://github.com/trycua/cua | 2025-05-16T14:42:18 | https://v.redd.it/jhyeu60so51f1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko2kzx | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/jhyeu60so51f1/DASHPlaylist.mpd?a=1749998553%2CZWFmMGQxZmFhYmZiN2I3OGYyYTk3ZTI0MmJjMDJlNzZhMmMwMTA5YjA2YzBiZTlhNGVkNTUyMWQzMzA4M2ExMw%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/jhyeu60so51f1/DASH_720.mp4?source=fallback', 'ha... | t3_1ko2kzx | /r/LocalLLaMA/comments/1ko2kzx/photoshop_using_local_computer_use_agents/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/Y3luNno5cXJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=108&crop=smart&format=pjpg&auto=webp&s=4a3a9888d054b3e7cf0a89d78709c2eecff22... | |
What's Worng with the Stanford ? Check the Name :) | 0 | [https://huggingface.co/collections/Stanford/niggatwerk-1-6827495b311678c965300777](https://huggingface.co/collections/Stanford/niggatwerk-1-6827495b311678c965300777) | 2025-05-16T14:39:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ko2ir2/whats_worng_with_the_stanford_check_the_name/ | Zulqarnain_Shihab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko2ir2 | false | null | t3_1ko2ir2 | /r/LocalLLaMA/comments/1ko2ir2/whats_worng_with_the_stanford_check_the_name/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'iyn0ZGOx10Q7uFbBflwD3blYXNfoYwANgsdmnlK8Jvk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2jAoJK7ZH-3Lr7IftQrp1_Uf0HrPCgPYMyZGtFbhDPQ.jpg?width=108&crop=smart&auto=webp&s=1c7ac7a24d2b3d51a23ce64fee52a78131295c75', 'width': 108}, {'height': 116, 'url': 'h... |
Photoshop using Local Computer Use agents. | 1 | [removed] | 2025-05-16T14:39:43 | https://v.redd.it/3iv3989bo51f1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko2iqc | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/3iv3989bo51f1/DASHPlaylist.mpd?a=1749998398%2CZjI2M2M1NGQzMjIwMzUwNWYxNTFjNDk5Mjg4N2JjNzBhNjljNzkyYTRiZjBhZmQzOGM3NDQyMjI1ZjZmMDAzNg%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/3iv3989bo51f1/DASH_720.mp4?source=fallback', 'ha... | t3_1ko2iqc | /r/LocalLLaMA/comments/1ko2iqc/photoshop_using_local_computer_use_agents/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/Ymt2MDJqMWJvNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=108&crop=smart&format=pjpg&auto=webp&s=d5bddcb512308707f609f2e9f75724557ac4c... | |
Photoshop using Local Computer Use agents. | 1 | [removed] | 2025-05-16T14:37:46 | https://v.redd.it/jysugtuyn51f1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko2h1e | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/jysugtuyn51f1/DASHPlaylist.mpd?a=1749998283%2CNjVkZDhhOTg3MzI3YWYxMmUyMTE4OGFiNmJlOGUzYjBkNmUxYTQzYmI4MmYzYjBhMDBhOTcwZjE0MmVlYjZhNQ%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/jysugtuyn51f1/DASH_720.mp4?source=fallback', 'ha... | t3_1ko2h1e | /r/LocalLLaMA/comments/1ko2h1e/photoshop_using_local_computer_use_agents/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/dzVnZm04bnluNTFmMRXG3T7XldLhbQ1-zZgDOchlvuRH1Qq3ebSvcq1i84vf.png?width=108&crop=smart&format=pjpg&auto=webp&s=a50bb3093140fc54222c6f51cfe30383e8b2e... | |
AM-Thinking-v1 | 49 | [https://huggingface.co/a-m-team/AM-Thinking-v1](https://huggingface.co/a-m-team/AM-Thinking-v1)
>We release **AM-Thinking‑v1**, a 32B dense language model focused on enhancing reasoning capabilities. Built on Qwen 2.5‑32B‑Base, AM-Thinking‑v1 shows strong performance on reasoning benchmarks, comparable to much larger... | 2025-05-16T14:37:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ko2gq1/amthinkingv1/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko2gq1 | false | null | t3_1ko2gq1 | /r/LocalLLaMA/comments/1ko2gq1/amthinkingv1/ | false | false | 49 | {'enabled': False, 'images': [{'id': 'HUXNMOyWy3-eWctOM0XRYtxZY8uQn5_XVFNHb6sH7J8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hpID4tGlVXcsccCrPaCo5PJZ9uVrqES2Dr7pBVPNMCc.jpg?width=108&crop=smart&auto=webp&s=aae8222b32db9114e40b7f15d88345278320dad4', 'width': 108}, {'height': 116, 'url': 'h... |