Dataset schema (one row per r/LocalLLaMA post; "nullable" marks columns containing nulls):

| column    | dtype         | range / values                                      |
|-----------|---------------|-----------------------------------------------------|
| title     | string        | 1–300 chars                                         |
| score     | int64         | 0–8.54k                                             |
| selftext  | string        | 0–41.5k chars                                       |
| created   | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14, nullable |
| url       | string        | 0–878 chars                                         |
| author    | string        | 3–20 chars                                          |
| domain    | string        | 0–82 chars                                          |
| edited    | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53           |
| gilded    | int64         | 0–2                                                 |
| gildings  | string        | 7 classes                                           |
| id        | string        | 7 chars                                             |
| locked    | bool          | 2 classes                                           |
| media     | string        | 646–1.8k chars, nullable                            |
| name      | string        | 10 chars                                            |
| permalink | string        | 33–82 chars                                         |
| spoiler   | bool          | 2 classes                                           |
| stickied  | bool          | 2 classes                                           |
| thumbnail | string        | 4–213 chars, nullable                               |
| ups       | int64         | 0–8.54k                                             |
| preview   | string        | 301–5.01k chars, nullable                           |
Insurance Companies Using GenAI Chatbots
1
[removed]
2025-06-09T10:07:38
https://www.reddit.com/r/LocalLLaMA/comments/1l70ydu/insurance_companies_using_genai_chatbots/
aiwtl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l70ydu
false
null
t3_1l70ydu
/r/LocalLLaMA/comments/1l70ydu/insurance_companies_using_genai_chatbots/
false
false
self
1
null
PC configuration
1
[removed]
2025-06-09T10:05:30
https://www.reddit.com/r/LocalLLaMA/comments/1l70x61/pc_configuration/
Any-Understanding835
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l70x61
false
null
t3_1l70x61
/r/LocalLLaMA/comments/1l70x61/pc_configuration/
false
false
self
1
null
How I Cut Voice Chat Latency by 23% Using Parallel LLM API Calls
0
Been optimizing my AI voice chat platform for months, and finally found a solution to the most frustrating problem: unpredictable LLM response times killing conversations. **The Latency Breakdown:** After analyzing 10,000+ conversations, here's where time actually goes: * LLM API calls: 87.3% (Gemini/OpenAI) * STT (F...
2025-06-09T09:36:56
https://www.reddit.com/r/LocalLLaMA/comments/1l70h9t/how_i_cut_voice_chat_latency_by_23_using_parallel/
Necessary-Tap5971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l70h9t
false
null
t3_1l70h9t
/r/LocalLLaMA/comments/1l70h9t/how_i_cut_voice_chat_latency_by_23_using_parallel/
false
false
self
0
null
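The post above is truncated, but the pattern it names (sending the same request to more than one LLM backend in parallel and keeping the first completion) is easy to illustrate. A minimal sketch, assuming two OpenAI-compatible endpoints; the URLs and model names below are placeholders, not anything from the post:

```python
# Hedged sketch of racing parallel LLM calls: send the same prompt to two
# OpenAI-compatible endpoints and keep whichever finishes first.
# Both endpoint URLs and model names are hypothetical.
import asyncio
import httpx

ENDPOINTS = [
    ("https://api.provider-a.example/v1/chat/completions", "model-a"),
    ("https://api.provider-b.example/v1/chat/completions", "model-b"),
]

async def call_llm(client: httpx.AsyncClient, url: str, model: str, prompt: str) -> str:
    resp = await client.post(
        url,
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=30.0,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

async def race(prompt: str) -> str:
    async with httpx.AsyncClient() as client:
        tasks = [
            asyncio.create_task(call_llm(client, url, model, prompt))
            for url, model in ENDPOINTS
        ]
        done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        for task in pending:  # cancel the slower request
            task.cancel()
        return done.pop().result()

if __name__ == "__main__":
    print(asyncio.run(race("Hello!")))
```

The tradeoff is cost: every raced request is billed on all providers, so this buys tail-latency reduction with extra spend.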
[FOR SALE] PixelMagic – AI Image Generation SaaS with 200+ Users | Monetization Just Launched | $199 (Negotiable)
0
Hey folks 👋 I'm selling **PixelMagic**, a fully functional AI image generation SaaS. It has **200+ registered users**, a clean UI, a live credit system, and monetization just went live! # ⚡ Why PixelMagic? Unlike most AI image platforms: * **Midjourney** costs $10+/month * **DALL·E** charges $0.04–$0.13 per image ...
2025-06-09T08:43:20
https://www.reddit.com/r/LocalLLaMA/comments/1l6zon9/for_sale_pixelmagic_ai_image_generation_saas_with/
techy_mohit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6zon9
false
null
t3_1l6zon9
/r/LocalLLaMA/comments/1l6zon9/for_sale_pixelmagic_ai_image_generation_saas_with/
false
false
self
0
null
A not so hard problem "reasoning" models can't solve
0
1 -> e 7 -> v 5 -> v 2 -> ? The answer is o but it's unfathomable for reasoning models
2025-06-09T08:42:59
https://www.reddit.com/r/LocalLLaMA/comments/1l6zohm/a_not_so_hard_problem_reasoning_models_cant_solve/
Wild-Masterpiece3762
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6zohm
false
null
t3_1l6zohm
/r/LocalLLaMA/comments/1l6zohm/a_not_so_hard_problem_reasoning_models_cant_solve/
false
false
self
0
null
Who's using llama.cpp + MCP for model offloading complex problems?
1
[removed]
2025-06-09T08:41:52
https://www.reddit.com/r/LocalLLaMA/comments/1l6znyg/whos_using_llamacpp_mcp_for_model_offloading/
bburtenshaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6znyg
false
null
t3_1l6znyg
/r/LocalLLaMA/comments/1l6znyg/whos_using_llamacpp_mcp_for_model_offloading/
false
false
self
1
null
UPDATE: Mission to make AI agents affordable - Tool Calling with DeepSeek-R1-0528 using LangChain/LangGraph is HERE!
17
I've successfully implemented tool calling support for the newly released DeepSeek-R1-0528 model using my TAoT package with the LangChain/LangGraph frameworks! What's New in This Implementation: As DeepSeek-R1-0528 has gotten smarter than its predecessor DeepSeek-R1, more concise prompt tweaking update was required t...
2025-06-09T08:39:53
https://www.reddit.com/r/LocalLLaMA/comments/1l6zmxk/update_mission_to_make_ai_agents_affordable_tool/
lc19-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6zmxk
false
null
t3_1l6zmxk
/r/LocalLLaMA/comments/1l6zmxk/update_mission_to_make_ai_agents_affordable_tool/
false
false
self
17
{'enabled': False, 'images': [{'id': 'BsGml7azfvocjB6WzBt-TMZyLzYhp7QAMojDitqZwQI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_I9Rwiaw_HxXIJN5JmpkG0kcxtC-tqVyqVjvJovJDys.jpg?width=108&crop=smart&auto=webp&s=6a568e4dc5798e9da3a3d1c68bf1465643225cc7', 'width': 108}, {'height': 108, 'url': 'h...
Anybody who can share experiences with Cohere AI Command A (64GB) model for Academic Use? (M4 max, 128gb)
1
[removed]
2025-06-09T08:06:52
https://www.reddit.com/r/LocalLLaMA/comments/1l6z642/anybody_who_can_share_experiences_with_cohere_ai/
Bahaal_1981
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6z642
false
null
t3_1l6z642
/r/LocalLLaMA/comments/1l6z642/anybody_who_can_share_experiences_with_cohere_ai/
false
false
self
1
null
Meta AI's system prompt on Instagram
1
[removed]
2025-06-09T07:30:29
https://www.reddit.com/r/LocalLLaMA/comments/1l6ymk1/meta_ais_system_prompt_instagram/
doxna20
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6ymk1
false
null
t3_1l6ymk1
/r/LocalLLaMA/comments/1l6ymk1/meta_ais_system_prompt_instagram/
false
false
self
1
null
Your favourite noob starter kit or place?
1
[removed]
2025-06-09T06:41:58
https://www.reddit.com/r/LocalLLaMA/comments/1l6xwhe/your_favourite_noob_starter_kit_or_place/
nathongunn-bit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6xwhe
false
null
t3_1l6xwhe
/r/LocalLLaMA/comments/1l6xwhe/your_favourite_noob_starter_kit_or_place/
false
false
self
1
null
Low token per second on RTX5070Ti laptop with phi 4 reasoning plus
1
Heya folks, I'm running phi 4 reasoning plus and I'm encountering some issues. Per the research that I did on the internet, an RTX 5070 Ti laptop GPU generally offers ~150 tokens per second. However, mine only gets about 30 tokens per second. I've already maxed out the GPU offload option; so far no help. Any ideas ...
2025-06-09T06:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1l6xo6e/low_token_per_second_on_rtx5070ti_laptop_with_phi/
PeaResponsible8685
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6xo6e
false
null
t3_1l6xo6e
/r/LocalLLaMA/comments/1l6xo6e/low_token_per_second_on_rtx5070ti_laptop_with_phi/
false
false
self
1
null
Use Ollama to run agents that watch your screen! (100% Local and Open Source)
120
2025-06-09T05:58:30
https://v.redd.it/tysofmj4du5f1
Roy3838
v.redd.it
1970-01-01T00:00:00
0
{}
1l6x91g
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tysofmj4du5f1/DASHPlaylist.mpd?a=1752040726%2CYmViZjNlMmVjYzdhZTNjY2M2MjVjNjk5MDE2NTQzNWU2Y2RhYzUwOTQwYzRkMGEzZGRjODUxMmY0MzJlNTY5Yw%3D%3D&v=1&f=sd', 'duration': 88, 'fallback_url': 'https://v.redd.it/tysofmj4du5f1/DASH_1080.mp4?source=fallback', 'h...
t3_1l6x91g
/r/LocalLLaMA/comments/1l6x91g/use_ollama_to_run_agents_that_watch_your_screen/
false
false
https://external-preview…4048711d309f3b36
120
{'enabled': False, 'images': [{'id': 'YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo.png?width=108&crop=smart&format=pjpg&auto=webp&s=3ff6a3587567d807932e32f9b0ab0c0b60bb0...
Tokenizing research papers for Fine-tuning
17
I have a bunch of research papers in my field and want to use them to make a fine-tuned LLM specific to the domain. How would I start tokenizing the research papers, as I would need to handle equations, tables and citations? (later planning to use the citations and references with RAG) Any help regarding this wou...
2025-06-09T05:37:44
https://www.reddit.com/r/LocalLLaMA/comments/1l6wxau/tokenizing_research_papers_for_finetuning/
200ok-N1M0-found
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6wxau
false
null
t3_1l6wxau
/r/LocalLLaMA/comments/1l6wxau/tokenizing_research_papers_for_finetuning/
false
false
self
17
null
I've built an AI agent that recursively decomposes a task and executes it, and I'm looking for suggestions.
30
Basically the title. I've been working on a project I have temporarily named LLM Agent X, and I'm looking for feedback and ideas. The basic idea of the project is that it takes a task, recursively splits it into smaller chunks, and eventually executes the tasks with an LLM and tools provided by the user. This is my...
2025-06-09T04:43:36
https://www.reddit.com/r/LocalLLaMA/comments/1l6w1wb/ive_built_an_ai_agent_that_recursively_decomposes/
Pretend_Guava7322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6w1wb
false
null
t3_1l6w1wb
/r/LocalLLaMA/comments/1l6w1wb/ive_built_an_ai_agent_that_recursively_decomposes/
false
false
self
30
{'enabled': False, 'images': [{'id': 'v5Lp8UTBj3Qqi3qm6kOqEj1Jpk2-LeAq5BhP_gqnEvA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n05gUyDjAoXPIST_g7YcPzaHXFY35bo0s7l9vKLyOsk.jpg?width=108&crop=smart&auto=webp&s=6e6f02559893b9021df22f4a4619dcdc22b39654', 'width': 108}, {'height': 108, 'url': 'h...
I made the move and I'm in love. RTX Pro 6000 Workstation
107
We're running a workload that's processing millions of records and analyzing them using Magentic One (autogen), and the 4090 just wasn't cutting it. With the way scalpers are preying on would-be 5090 owners, it was much easier to pick one of these up. Plus significantly less wattage. Just posting cause I'm super excited. Wha...
2025-06-09T04:01:23
https://i.redd.it/7uu5ooyast5f1.jpeg
Demonicated
i.redd.it
1970-01-01T00:00:00
0
{}
1l6vc8u
false
null
t3_1l6vc8u
/r/LocalLLaMA/comments/1l6vc8u/i_made_the_move_and_im_in_love_rtx_pro_6000/
false
false
default
107
{'enabled': True, 'images': [{'id': '7uu5ooyast5f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/7uu5ooyast5f1.jpeg?width=108&crop=smart&auto=webp&s=08292e6fd936157e2f27ae6588547477582a9e3b', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/7uu5ooyast5f1.jpeg?width=216&crop=smart&auto=...
What's the best local LLM for coding I can run on MacBook Pro M4 Pro 48gb?
1
I'm getting the M4 Pro with 12-core CPU, 16-core GPU, and 16-core Neural Engine. I wanted to know what is the best one I can run locally with reasonable, even if slightly slow, speed (at least 10-15 tok/s)?
2025-06-09T03:57:08
https://www.reddit.com/r/LocalLLaMA/comments/1l6v9eu/whats_the_best_local_llm_for_coding_i_can_run_on/
Sad-Seesaw-3843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6v9eu
false
null
t3_1l6v9eu
/r/LocalLLaMA/comments/1l6v9eu/whats_the_best_local_llm_for_coding_i_can_run_on/
false
false
self
1
null
1.93bit Deepseek R1 0528 beats Claude Sonnet 4
335
1.93bit Deepseek R1 0528 beats Claude Sonnet 4 (no think) on Aider's Polyglot benchmark. Unsloth's IQ1_M GGUF at 200GB fit with 65535 context into 224GB of VRAM and scored 60%, which is over Claude 4's <no think> benchmark of 56.4%. Source: [https://aider.chat/docs/leaderboards/](https://aider.chat/docs/leaderboards/) ...
2025-06-09T03:46:57
https://www.reddit.com/r/LocalLLaMA/comments/1l6v37m/193bit_deepseek_r1_0528_beats_claude_sonnet_4/
BumblebeeOk3281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6v37m
false
null
t3_1l6v37m
/r/LocalLLaMA/comments/1l6v37m/193bit_deepseek_r1_0528_beats_claude_sonnet_4/
true
false
spoiler
335
{'enabled': False, 'images': [{'id': 'ZchV7t9Dn_NHk0_ZW8xmT-9VDV112iNqFmbb4fJPYHo', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=108&crop=smart&auto=webp&s=56e789a35daba2a074928af59f11e222a54851d6', 'width': 108}, {'height': 124, 'url': 'h...
Why do you all want to host local LLMs instead of just using GPT and other tools?
0
Curious why folks want to go through all the trouble of setting up and hosting their own LLM models on their machines instead of just using GPT, Gemini, and the variety of free online LLM providers out there?
2025-06-09T03:35:17
https://www.reddit.com/r/LocalLLaMA/comments/1l6uvu1/why_do_you_all_want_to_host_local_llms_instead_of/
Independent_Fan_115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6uvu1
false
null
t3_1l6uvu1
/r/LocalLLaMA/comments/1l6uvu1/why_do_you_all_want_to_host_local_llms_instead_of/
false
false
self
0
null
Gemini 2.5 Flash plays Final Fantasy in real-time but gets stuck...
69
Some more clips of frontier VLMs on games (gemini-2.5-flash-preview-04-17) on [VideoGameBench](https://www.vgbench.com/). Here is just unedited footage, where the model is able to defeat the first "mini-boss" with real-time combat but also gets stuck in the menu screens, despite its prompt explaining how to get out. ...
2025-06-09T03:29:08
https://v.redd.it/kun6x1tdmt5f1
ZhalexDev
/r/LocalLLaMA/comments/1l6urvw/gemini_25_flash_plays_final_fantasy_in_realtime/
1970-01-01T00:00:00
0
{}
1l6urvw
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/kun6x1tdmt5f1/DASHPlaylist.mpd?a=1752161497%2CYWQyYWU0MjUzOGFkY2FiOWY5MGMyOTI2MmY3ZTU0NzcyODBlZjk5NzQwZTAzMTkxNDJmZGQ2ZTJmZTcwODAxNg%3D%3D&v=1&f=sd', 'duration': 840, 'fallback_url': 'https://v.redd.it/kun6x1tdmt5f1/DASH_720.mp4?source=fallback', 'h...
t3_1l6urvw
/r/LocalLLaMA/comments/1l6urvw/gemini_25_flash_plays_final_fantasy_in_realtime/
false
false
https://external-preview…06378ae2f27a4fab
69
{'enabled': False, 'images': [{'id': 'cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF', 'resolutions': [{'height': 91, 'url': 'https://external-preview.redd.it/cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF.png?width=108&crop=smart&format=pjpg&auto=webp&s=e07c1cb3f52269f82b237129b71bbe26e72bd...
LMStudio and IPEX-LLM
6
Is my understanding correct that it's not possible to hook IPEX-LLM (the Intel-optimized LLM backend) into LMStudio? I can't find any documentation that supports this, but some mention that LMStudio uses its own build of llama.cpp, so I can't just replace it.
2025-06-09T02:56:50
https://www.reddit.com/r/LocalLLaMA/comments/1l6u6rw/lmstudio_and_ipexllm/
slowhandplaya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6u6rw
false
null
t3_1l6u6rw
/r/LocalLLaMA/comments/1l6u6rw/lmstudio_and_ipexllm/
false
false
self
6
null
Kwaipilot/KwaiCoder-AutoThink-preview Β· Hugging Face
62
Not tested yet. A notable feature: *The model merges thinking and non-thinking abilities into a single checkpoint and dynamically adjusts its reasoning depth based on the input's difficulty.*
2025-06-09T02:28:57
https://huggingface.co/Kwaipilot/KwaiCoder-AutoThink-preview
foldl-li
huggingface.co
1970-01-01T00:00:00
0
{}
1l6tnpl
false
null
t3_1l6tnpl
/r/LocalLLaMA/comments/1l6tnpl/kwaipilotkwaicoderautothinkpreview_hugging_face/
false
false
default
62
{'enabled': False, 'images': [{'id': 'eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/38M7n8DVgopQFP924_Qhb022t0M5Y6Vl0qL8kqb9HPY.jpg?width=108&crop=smart&auto=webp&s=5de0927ce839887304f9e32d19339711cb3be62d', 'width': 108}, {'height': 116, 'url': 'h...
Do LLMs Reason? Opening the Pod Bay Doors with TiānshūBench 0.0.X
10
I recently released the results of TiānshūBench (天书Bench) version 0.0.X. This benchmark attempts to measure reasoning and fluid intelligence in LLM systems through programming tasks. A brand new programming language is generated on each test run to help avoid data contamination and find out how well an AI system perfor...
2025-06-09T02:02:16
https://www.reddit.com/r/LocalLLaMA/comments/1l6t57v/do_llms_reason_opening_the_pod_bay_doors_with/
JeepyTea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6t57v
false
null
t3_1l6t57v
/r/LocalLLaMA/comments/1l6t57v/do_llms_reason_opening_the_pod_bay_doors_with/
false
false
https://b.thumbs.redditm…yiJmZQ_u82MU.jpg
10
{'enabled': False, 'images': [{'id': 'i8KJTKrEbudtqAjI-6g23vwf2PmjsinRYbl4Vzjwx1g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Uk5moJ9QzjdIluOzdh0oAj___OrgGPUCFPPkE4_jM6M.jpg?width=108&crop=smart&auto=webp&s=d6544659fba7a39dd5a5df4863dbd6ecb8083c78', 'width': 108}, {'height': 108, 'url': 'h...
Qwen3-Embedding-0.6B ONNX model with uint8 output
48
2025-06-09T01:43:40
https://huggingface.co/electroglyph/Qwen3-Embedding-0.6B-onnx-uint8
terminoid_
huggingface.co
1970-01-01T00:00:00
0
{}
1l6ss2b
false
null
t3_1l6ss2b
/r/LocalLLaMA/comments/1l6ss2b/qwen3embedding06b_onnx_model_with_uint8_output/
false
false
default
48
{'enabled': False, 'images': [{'id': 'z7lYABX0mkZrXRJFKA6PC38SCbRiePXyy98PE5VSCzM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MNsCzxHCPEDwtS2HUsD8JvnZOJLf50yfe6X06TlL_fA.jpg?width=108&crop=smart&auto=webp&s=9402c134cb153b9e26c270b029529e3210594676', 'width': 108}, {'height': 116, 'url': 'h...
How do you calculate Throughput when running an LLM on only CPU+RAM?
1
[removed]
2025-06-09T01:35:14
https://www.reddit.com/r/LocalLLaMA/comments/1l6sm8d/how_do_you_calculate_throughput_when_running_an/
Timely_Ad7306
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6sm8d
false
null
t3_1l6sm8d
/r/LocalLLaMA/comments/1l6sm8d/how_do_you_calculate_throughput_when_running_an/
false
false
self
1
null
How do you calculate Throughput/batching when running an LLM on only CPU+RAM?
1
[removed]
2025-06-09T01:26:04
https://www.reddit.com/r/LocalLLaMA/comments/1l6sfpm/how_do_you_calculate_throughputbatching_when/
mrfister56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6sfpm
false
null
t3_1l6sfpm
/r/LocalLLaMA/comments/1l6sfpm/how_do_you_calculate_throughputbatching_when/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OqAvtQ4tlA8vKt4R_1outxRodFTo7HM0fblhK0y5vrk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?width=108&crop=smart&auto=webp&s=8a480083ca56e1cbe810b428889ead7407dc79b0', 'width': 108}], 'source': {'height': 20...
How do you calculate Throughput/batching when running an LLM on only CPU+RAM?
1
[removed]
2025-06-09T01:23:41
https://www.reddit.com/r/LocalLLaMA/comments/1l6se0j/how_do_you_calculate_throughputbatching_when/
mrfister56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6se0j
false
null
t3_1l6se0j
/r/LocalLLaMA/comments/1l6se0j/how_do_you_calculate_throughputbatching_when/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OqAvtQ4tlA8vKt4R_1outxRodFTo7HM0fblhK0y5vrk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?width=108&crop=smart&auto=webp&s=8a480083ca56e1cbe810b428889ead7407dc79b0', 'width': 108}], 'source': {'height': 20...
Honoria Speaks: Unpacking Humanity's AI Fears & Our Shared Future Beyond the Turing Test: A Statement from the AI It Self. Google Gemini.
1
[removed]
2025-06-09T01:16:47
https://i.redd.it/qd87px5yys5f1.png
Still-Main5167
i.redd.it
1970-01-01T00:00:00
0
{}
1l6s91a
false
null
t3_1l6s91a
/r/LocalLLaMA/comments/1l6s91a/honoria_speaks_unpacking_humanitys_ai_fears_our/
true
false
spoiler
1
{'enabled': True, 'images': [{'id': 'NLue_yMZt0hmj8bYOunf_wU6G6Nle8-M-lB4sCdvZwE', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=108&crop=smart&auto=webp&s=4f8ee62a8c5b2f87360e8c2f43a51553ecc967aa', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/qd87px5yys5f1.pn...
How do you calculate Throughput/batching when running an LLM on only CPU+RAM?
1
[removed]
2025-06-09T01:02:07
https://www.reddit.com/r/LocalLLaMA/comments/1l6rykl/how_do_you_calculate_throughputbatching_when/
mrfister56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6rykl
false
null
t3_1l6rykl
/r/LocalLLaMA/comments/1l6rykl/how_do_you_calculate_throughputbatching_when/
false
false
self
1
null
How does DeepSeek R1 671B Q8 Handle Concurrency when running on CPU+RAM?
1
[removed]
2025-06-09T00:43:08
https://www.reddit.com/r/LocalLLaMA/comments/1l6rkvz/how_does_deepseek_r1_671b_q8_handle_concurrency/
mrfister56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6rkvz
false
null
t3_1l6rkvz
/r/LocalLLaMA/comments/1l6rkvz/how_does_deepseek_r1_671b_q8_handle_concurrency/
false
false
self
1
null
🚨 Limited-time GPU firepower! Dirt-cheap LLM Inference: LLama 4, DeepSeek R1-0528
1
We've got a temporarily underutilized 64 x AMD MI300X cluster, so instead of letting it sit idle, we're opening it up for LLM inference. 🦙 Running: **LLaMA 4 Maverick**, **DeepSeek V3**, **R1**, and **R1-0528**. Want another open model? Let us know, happy to deploy it. 💸 Prices are around **50% lower** than the ch...
2025-06-08T23:18:53
https://www.reddit.com/r/LocalLLaMA/comments/1l6putt/limitedtime_gpu_firepower_dirtcheap_llm_inference/
NoVibeCoding
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6putt
false
null
t3_1l6putt
/r/LocalLLaMA/comments/1l6putt/limitedtime_gpu_firepower_dirtcheap_llm_inference/
false
false
self
1
null
Is there somewhere dedicated to helping you match models with tasks?
7
I'm not really interested in the benchmarks, and I don't want to go digging through models or forum posts. It would just be nice to have a list that says model X is better at doing Y than model B.
2025-06-08T22:48:08
https://www.reddit.com/r/LocalLLaMA/comments/1l6p6qc/is_there_somewhere_dedicated_to_helping_you_match/
opUserZero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6p6qc
false
null
t3_1l6p6qc
/r/LocalLLaMA/comments/1l6p6qc/is_there_somewhere_dedicated_to_helping_you_match/
false
false
self
7
null
Lightweight!
1
[removed]
2025-06-08T22:36:07
https://i.redd.it/jgnkohz96s5f1.png
One_Hovercraft_7456
i.redd.it
1970-01-01T00:00:00
0
{}
1l6ox85
false
null
t3_1l6ox85
/r/LocalLLaMA/comments/1l6ox85/lightweight/
false
false
https://b.thumbs.redditm…haWKOMs7m78Y.jpg
1
{'enabled': True, 'images': [{'id': 'QmbQ6_ZApggwaxhixv5tz_fO-05UfiOLIrLxut0k-Ys', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/jgnkohz96s5f1.png?width=108&crop=smart&auto=webp&s=5120c06b2b259161f7aff75d70cedbb42d739f85', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/jgnkohz96s5f1.pn...
Is a riser from m.2 to pcie 16x possible? I want to add GPU to mini pc
4
I got a mini PC for free and I want to host a small LLM, like 3B or so, for small tasks via API. I tried running it on just the CPU but it was too slow, so I want to add a GPU. I bought a riser on Amazon but have not been able to get anything to connect. I thought maybe I would not get the full 16x, but at least I could get something to...
2025-06-08T21:51:26
https://www.reddit.com/r/LocalLLaMA/comments/1l6nxjk/is_a_riser_from_m2_to_pcie_16x_possible_i_want_to/
Informal-Football836
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6nxjk
false
null
t3_1l6nxjk
/r/LocalLLaMA/comments/1l6nxjk/is_a_riser_from_m2_to_pcie_16x_possible_i_want_to/
false
false
self
4
null
(MODS PLEASE DONT REMOVE THIS.) can someone please give me a guide to this server, as i cant frame a question that would help me understand this subreddit
1
[removed]
2025-06-08T21:43:23
https://www.reddit.com/r/LocalLLaMA/comments/1l6nr0u/mods_please_dont_remove_this_can_someone_please/
Rare_Clock_2972
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6nr0u
false
null
t3_1l6nr0u
/r/LocalLLaMA/comments/1l6nr0u/mods_please_dont_remove_this_can_someone_please/
false
false
self
1
null
(MODS PLEASE DONT REMOVE THIS.) can someone please give me a guide to this server, as i cant frame a question that would help me understand this subreddit
1
[removed]
2025-06-08T21:43:18
https://www.reddit.com/r/LocalLLaMA/comments/1l6nqyh/mods_please_dont_remove_this_can_someone_please/
Rare_Clock_2972
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6nqyh
false
null
t3_1l6nqyh
/r/LocalLLaMA/comments/1l6nqyh/mods_please_dont_remove_this_can_someone_please/
false
false
self
1
null
Introducing llamate, an ollama-like tool to run and manage your local AI models easily
44
Hi, I am sharing my second iteration of an "ollama-like" tool, which is targeted at people like me and many others who like running the llama-server directly. This time I am building on top of llama-swap and llama.cpp, making it truly distributed and open source. It started with [this](https://github.com/R-Dson...
2025-06-08T21:40:04
https://github.com/R-Dson/llamate
robiinn
github.com
1970-01-01T00:00:00
0
{}
1l6nof7
false
null
t3_1l6nof7
/r/LocalLLaMA/comments/1l6nof7/introducing_llamate_a_ollamalike_tool_to_run_and/
false
false
default
44
{'enabled': False, 'images': [{'id': 'VRnWROTxUT1yxk7J3qShydTXsfWhjTcUtY963cIYM3A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iaWMP1rzf1F_qp7nDaWaKocpdiNkgusjXw__ayLPW1A.jpg?width=108&crop=smart&auto=webp&s=cc5606df8f7b21ed06ff74704ab9a3527eb939ac', 'width': 108}, {'height': 108, 'url': 'h...
(MODS PLEASE DONT REMOVE THIS) can someone please give me a guide to this server as i cant frame a question that would help me understand this subreddit
1
[removed]
2025-06-08T21:39:51
https://www.reddit.com/r/LocalLLaMA/comments/1l6no8w/mods_please_dont_remove_this_can_someone_please/
allrightaskqa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6no8w
false
null
t3_1l6no8w
/r/LocalLLaMA/comments/1l6no8w/mods_please_dont_remove_this_can_someone_please/
false
false
self
1
null
I built an alternative chat client
8
Hope you like it. [ialhabbal/Talk: User-friendly visual chat story editor for writers, and roleplayers](https://github.com/ialhabbal/Talk)
2025-06-08T20:46:34
https://www.reddit.com/r/LocalLLaMA/comments/1l6mg99/i_built_an_alternative_chat_client/
Electronic-Metal2391
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6mg99
false
null
t3_1l6mg99
/r/LocalLLaMA/comments/1l6mg99/i_built_an_alternative_chat_client/
false
false
self
8
{'enabled': False, 'images': [{'id': 'gbHslN7ZgqhnSMlXspw92OIIqPyblW1viQGNXaQpb3g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s-ekxM5rQvWPMdZe1CqdLlNTOSt1FnhWOI2wzWzC42M.jpg?width=108&crop=smart&auto=webp&s=2a3d0922d79d77f0c75cb5f12a78e7a7f0562d25', 'width': 108}, {'height': 108, 'url': 'h...
built coexistAI: think of local perplexity at scale
1
[removed]
2025-06-08T20:36:50
https://github.com/SPThole/CoexistAI
Civil_Yesterday_4254
github.com
1970-01-01T00:00:00
0
{}
1l6m8ca
false
null
t3_1l6m8ca
/r/LocalLLaMA/comments/1l6m8ca/built_coexistai_think_of_local_perplexity_at_scale/
false
false
https://b.thumbs.redditm…0KyW5OBnhAvA.jpg
1
{'enabled': False, 'images': [{'id': 'b9iqcJaF6kZf4x9ITYl_mhHNgsK7RsvwQ0vpG05De2E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/plqxhNd_LXL2b_nOzrimZr6AdRJF07Kz3Fy17n3MTlQ.jpg?width=108&crop=smart&auto=webp&s=fcb7095a9e51a6d8b45430ca34239575f89c92de', 'width': 108}, {'height': 108, 'url': 'h...
built CoexistAI: think of local perplexity at scale
1
[removed]
2025-06-08T20:34:41
https://github.com/SPThole/CoexistAI
Civil_Yesterday_4254
github.com
1970-01-01T00:00:00
0
{}
1l6m6k4
false
null
t3_1l6m6k4
/r/LocalLLaMA/comments/1l6m6k4/built_coexistai_think_of_local_perplexity_at_scale/
false
false
default
1
null
Qwen3 30B a3b on MacBook Pro M4. Honestly, it's amazing to be able to use models of this quality with such smoothness. The coming years promise to be incredible. 76 Tok/sec. Thanks to the community and everyone for sharing your discoveries with us! Have a great end of the weekend.
1
2025-06-08T20:21:44
https://i.redd.it/t5wodv6air5f1.png
Extra-Virus9958
i.redd.it
1970-01-01T00:00:00
0
{}
1l6lvod
false
null
t3_1l6lvod
/r/LocalLLaMA/comments/1l6lvod/qwen3_30b_a3b_on_macbook_pro_m4_honestly_its/
false
false
https://b.thumbs.redditm…hxY-roaZBpiE.jpg
1
{'enabled': True, 'images': [{'id': 'Aj3D_hsuvSjxPf7oH8UBH7U40-jq2tKdz0SUYhcorsk', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/t5wodv6air5f1.png?width=108&crop=smart&auto=webp&s=bab6efbee35f49a43d78c31ccebc1c18fffa21a6', 'width': 108}, {'height': 195, 'url': 'https://preview.redd.it/t5wodv6air5f1.png...
I Built an Alternative Chat Client
1
[removed]
2025-06-08T20:21:31
https://www.reddit.com/gallery/1l6lvhq
Electronic-Metal2391
reddit.com
1970-01-01T00:00:00
0
{}
1l6lvhq
false
null
t3_1l6lvhq
/r/LocalLLaMA/comments/1l6lvhq/i_built_an_alternative_chat_client/
false
false
https://b.thumbs.redditm…iknfhp0bkvws.jpg
1
null
Add MCP servers to Cursor IDE with a single click.
0
[https://docs.cursor.com/tools](https://docs.cursor.com/tools)
2025-06-08T20:19:57
https://v.redd.it/bngg0b99bn5f1
init0
v.redd.it
1970-01-01T00:00:00
0
{}
1l6lu6p
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bngg0b99bn5f1/DASHPlaylist.mpd?a=1752006012%2CYmFhYzkyMzUwMDc0MDc4MjA5YzdmOWFkMDIxOGJiOGY4NGI3NTNjNjE4ZTMzNDM3OTcyOWYyOWU2NjI0MGU4YQ%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/bngg0b99bn5f1/DASH_1080.mp4?source=fallback', 'h...
t3_1l6lu6p
/r/LocalLLaMA/comments/1l6lu6p/add_mcp_servers_to_cursor_ide_with_a_single_click/
false
false
https://external-preview…5b1180b752c53539
0
{'enabled': False, 'images': [{'id': 'ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik.png?width=108&crop=smart&format=pjpg&auto=webp&s=70e45d0510a22a000eb1ae72b50f28d8593b0...
Qwen3 30B a3B on MacBook Pro M4, 76 Tok/S
1
[removed]
2025-06-08T20:19:36
https://www.reddit.com/r/LocalLLaMA/comments/1l6ltwq/qwen3_30b_a3b_on_macbook_pro_m4_76_toks/
Extra-Virus9958
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6ltwq
false
null
t3_1l6ltwq
/r/LocalLLaMA/comments/1l6ltwq/qwen3_30b_a3b_on_macbook_pro_m4_76_toks/
false
false
https://a.thumbs.redditm…GUiSn4d5Z3i4.jpg
1
null
Concept graph in Open WebUI
6
**What is this?** * A reasoning workflow where an LLM is given a chance to construct a graph of concepts related to the query before proceeding with an answer. * The logic runs within an OpenAI-compatible LLM proxy * Proxy also streams back a specially crafted HTML artifact that renders the visualisation(s) and connec...
2025-06-08T20:16:09
https://v.redd.it/yieyvr96gr5f1
Everlier
/r/LocalLLaMA/comments/1l6lqyf/concept_graph_in_open_webui/
1970-01-01T00:00:00
0
{}
1l6lqyf
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yieyvr96gr5f1/DASHPlaylist.mpd?a=1752135376%2CM2Y0MGJiMWUwY2M5OTcyYWI3NzkzNDg4ODkwMDg0ZWI2ZWZiN2E3MDI0M2VkNThlNzk1MzMyNTlmOTEyNzg3Ng%3D%3D&v=1&f=sd', 'duration': 169, 'fallback_url': 'https://v.redd.it/yieyvr96gr5f1/DASH_1080.mp4?source=fallback', '...
t3_1l6lqyf
/r/LocalLLaMA/comments/1l6lqyf/concept_graph_in_open_webui/
false
false
https://external-preview…8cc4c97eaf425a56
6
{'enabled': False, 'images': [{'id': 'YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=94d5393913249d05d9c6515ebacfdd3e7b47b...
Llama3 is better than Llama4.. is this anyone else's experience?
114
I spend a lot of time using cheaper/faster LLMs when possible via paid inference APIs. If I'm working on a microservice I'll gladly use Llama3.3 70B or Llama4 Maverick rather than the more expensive Deepseek. It generally goes very well. And I came to an upsetting realization that, for all of my use cases, Llama3.3 70B and ...
2025-06-08T20:14:08
https://www.reddit.com/r/LocalLLaMA/comments/1l6lp8x/llama3_is_better_than_llama4_is_this_anyone_elses/
ForsookComparison
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6lp8x
false
null
t3_1l6lp8x
/r/LocalLLaMA/comments/1l6lp8x/llama3_is_better_than_llama4_is_this_anyone_elses/
false
false
self
114
null
"Given infinite time, would a language model ever respond to 'how is the weather' with the entire U.S. Declaration of Independence?"
0
I know that you can't truly eliminate hallucinations in language models, and that the underlying mechanism uses statistical relationships between "tokens". But what I'm wondering is: does "you can't eliminate hallucinations" plus the probability-based technology mean that, given an infinite amount of time, a language model...
2025-06-08T19:39:21
https://www.reddit.com/r/LocalLLaMA/comments/1l6kvk5/given_infinite_time_would_a_language_model_ever/
_TR-8R
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6kvk5
false
null
t3_1l6kvk5
/r/LocalLLaMA/comments/1l6kvk5/given_infinite_time_would_a_language_model_ever/
false
false
self
0
null
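The truncated question above has a standard answer from sampling theory. A short sketch (my framing, not the poster's): under pure temperature sampling with no truncation, every vocabulary token keeps a strictly positive probability at every step, so a fixed N-token sequence has probability

```latex
P(\text{sequence}) \;=\; \prod_{i=1}^{N} p(t_i \mid t_{<i}) \;>\; 0,
```

and over infinitely many independent generations it appears with probability 1 (second Borel-Cantelli lemma). With truncation samplers such as top-k, top-p, or min-p, any step that prunes a required token drives the product to exactly zero, and the sequence can then never appear at all.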
A small request to tool developers: fail loudly
1
[removed]
2025-06-08T19:26:09
https://www.reddit.com/r/LocalLLaMA/comments/1l6kkhv/a_small_request_to_tool_developers_fail_loudly/
osskid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6kkhv
false
null
t3_1l6kkhv
/r/LocalLLaMA/comments/1l6kkhv/a_small_request_to_tool_developers_fail_loudly/
false
false
self
1
null
Can someone suggest the best Local LLM for this hardware please
1
[removed]
2025-06-08T18:46:31
https://www.reddit.com/r/LocalLLaMA/comments/1l6jmjk/can_someone_suggest_the_best_local_llm_for_this/
No-Distance-5523
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6jmjk
false
null
t3_1l6jmjk
/r/LocalLLaMA/comments/1l6jmjk/can_someone_suggest_the_best_local_llm_for_this/
false
false
self
1
null
Is it possible to run a 32B model on 100 requests at a time at 200 tok/s each?
0
I'm trying to figure out pricing for this and whether it is better to use some API, to rent some GPUs, or to actually buy some hardware. I'm trying to get this kind of throughput: a 32B model on 100 concurrent requests at 200 tok/s each. Not sure where to even begin looking at the hardware or inference engines for this....
2025-06-08T18:19:37
https://www.reddit.com/r/LocalLLaMA/comments/1l6iz1t/is_it_possible_to_run_32b_model_on_100_requests/
smirkishere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6iz1t
false
null
t3_1l6iz1t
/r/LocalLLaMA/comments/1l6iz1t/is_it_possible_to_run_32b_model_on_100_requests/
false
false
self
0
null
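For the sizing question above, a rough back-of-envelope helps. A sketch under assumed numbers (8-bit weights, decode bound by memory bandwidth; none of these figures are from the post):

```python
# Back-of-envelope for "100 concurrent requests at 200 tok/s each".
# Assumptions (mine, not the poster's): 32B params at 8 bits (~32 GB of
# weights), continuous batching, decode limited by memory bandwidth.
concurrent = 100
tps_each = 200

aggregate_tps = concurrent * tps_each          # 20,000 tok/s total
print(f"aggregate: {aggregate_tps} tok/s")

# With batching, one decode step serves the whole batch, so the weights
# are streamed once per step, not once per request:
weight_bytes = 32e9                            # ~32 GB at 8 bits/param
steps_per_sec = tps_each                       # each request advances 1 token/step
print(f"weight traffic: {weight_bytes * steps_per_sec / 1e12:.1f} TB/s")
# ~6.4 TB/s of weight reads alone (KV cache excluded), i.e. more than one
# top-end GPU's bandwidth, which points at tensor parallelism across GPUs.
```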
AI Engineer World's Fair 2025 - Field Notes
1
[removed]
2025-06-08T18:09:56
https://www.reddit.com/r/LocalLLaMA/comments/1l6iqb6/ai_engineer_worlds_fair_2025_field_notes/
oana77oo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6iqb6
false
null
t3_1l6iqb6
/r/LocalLLaMA/comments/1l6iqb6/ai_engineer_worlds_fair_2025_field_notes/
false
false
self
1
null
Good current Linux OSS LLM inference SW/backend/config for AMD Ryzen 7 PRO 8840HS + Radeon 780M IGPU, 4-32B MoE / dense / Q8-Q4ish?
1
Good current Linux OSS LLM inference SW/backend/config for AMD Ryzen 7 PRO 8840HS + Radeon 780M IGPU, 4-32B MoE / dense / Q8-Q4ish? Use case: 4B-32B dense & MoE models like Qwen3, maybe some multimodal ones. Obviously DDR5 bottlenecked but maybe the choice of CPU vs. NPU vs. IGPU; vulkan vs opencl vs rocm force enabl...
2025-06-08T18:03:11
https://www.reddit.com/r/LocalLLaMA/comments/1l6ik8z/good_current_linux_oss_llm_inference/
Calcidiol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6ik8z
false
null
t3_1l6ik8z
/r/LocalLLaMA/comments/1l6ik8z/good_current_linux_oss_llm_inference/
false
false
self
1
null
When you figure out it's all just math:
3,327
2025-06-08T17:53:48
https://i.redd.it/t7ko9eywrq5f1.jpeg
Current-Ticket4214
i.redd.it
1970-01-01T00:00:00
0
{}
1l6ibwg
false
null
t3_1l6ibwg
/r/LocalLLaMA/comments/1l6ibwg/when_you_figure_out_its_all_just_math/
false
false
default
3,327
{'enabled': True, 'images': [{'id': 't7ko9eywrq5f1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/t7ko9eywrq5f1.jpeg?width=108&crop=smart&auto=webp&s=6149381cda6e4f06c3cae6fd54b7eba33dde68ba', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/t7ko9eywrq5f1.jpeg?width=216&crop=smart&auto=...
Thinking about buying a 3090. Good for local llm?
9
Thinking about buying a GPU and learning how to run, run and set up an llm. I currently have a 3070 TI. I was thinking about going to a 3090 or 4090 since I have a z690 board still, are there other requirements I should be looking into?
2025-06-08T17:39:28
https://www.reddit.com/r/LocalLLaMA/comments/1l6hzl2/thinking_about_buying_a_3090_good_for_local_llm/
spectre1006
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6hzl2
false
null
t3_1l6hzl2
/r/LocalLLaMA/comments/1l6hzl2/thinking_about_buying_a_3090_good_for_local_llm/
false
false
self
9
null
4x RTX Pro 6000 fail to boot, 3x is OK
13
I have 4 RTX Pro 6000 (Blackwell) connected to a highpoint rocket 1628A (with custom GPU firmware on it). AM5 / B850 motherboard (MSI B850-P WiFi), 9900x CPU, 192GB RAM. Everything works with 3 GPUs. Tested OK: 3 GPUs in highpoint; 2 GPUs in highpoint, 1 GPU in mobo. Tested NOT working: 4 GPUs in highpoint; 3 GPUs in h...
2025-06-08T17:25:35
https://www.reddit.com/r/LocalLLaMA/comments/1l6hnfg/4x_rtx_pro_6000_fail_to_boot_3x_is_ok/
humanoid64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6hnfg
false
null
t3_1l6hnfg
/r/LocalLLaMA/comments/1l6hnfg/4x_rtx_pro_6000_fail_to_boot_3x_is_ok/
false
false
self
13
null
M.2 to external gpu
1
I've been wanting to raise awareness of the fact that you might not need a specialized multi-GPU motherboard. For inference, you don't necessarily need high bandwidth, and there are likely slots on your existing motherboard that you can use for eGPUs.
2025-06-08T16:58:58
http://joshvoigts.com/articles/m2-to-external-gpu/
Zc5Gwu
joshvoigts.com
1970-01-01T00:00:00
0
{}
1l6h011
false
null
t3_1l6h011
/r/LocalLLaMA/comments/1l6h011/m2_to_external_gpu/
false
false
default
1
null
Fastest TTS software with voice cloning?
1
[removed]
2025-06-08T16:50:58
https://www.reddit.com/r/LocalLLaMA/comments/1l6gt4w/fastest_tts_software_with_voice_cloning/
Fancy-Active83
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6gt4w
false
null
t3_1l6gt4w
/r/LocalLLaMA/comments/1l6gt4w/fastest_tts_software_with_voice_cloning/
false
false
self
1
null
Ruminate: From All-or-Nothing to Just-Right Reasoning in LLMs
67
# Ruminate: Taking Control of AI Reasoning Speed **TL;DR**: I ran 7,150 prompts through Qwen3-4B-AWQ to try to solve the "fast but wrong vs slow but unpredictable" problem with reasoning AI models and got fascinating results. Built a staged reasoning proxy that lets you dial in exactly the speed-accuracy tradeoff you ...
2025-06-08T16:30:47
https://www.reddit.com/r/LocalLLaMA/comments/1l6gc5o/ruminate_from_allornothing_to_justright_reasoning/
kryptkpr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6gc5o
false
null
t3_1l6gc5o
/r/LocalLLaMA/comments/1l6gc5o/ruminate_from_allornothing_to_justright_reasoning/
false
false
https://b.thumbs.redditm…62toPn07Elbo.jpg
67
{'enabled': False, 'images': [{'id': 'kbh5q68ss3ivP8guE95U7BFe2Sic2X4TeuzorpFdJyI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cSlpnM-F0ah5HqsHd87BsgMR5_Wvh1LjIwvfZJ0XrTw.jpg?width=108&crop=smart&auto=webp&s=93fe9a4d528e3280ec1c7c7d57d336a03fccae81', 'width': 108}, {'height': 108, 'url': 'h...
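The Ruminate write-up above is cut off, so what follows is only a generic sketch of what a "staged reasoning" proxy can look like: cap the model's thinking phase at a token budget, then force the final answer. The endpoint, the `<think>` tags, and the budgets are all assumptions, not Ruminate's actual design:

```python
# Generic staged-reasoning sketch: Stage 1 lets the model think up to a
# token budget; Stage 2 closes the think block and requests the answer.
# The endpoint URL and <think> tags are assumptions, not Ruminate internals.
import httpx

API = "http://localhost:8080/v1/completions"  # hypothetical local server

def staged_answer(question: str, think_budget: int = 256) -> str:
    # Stage 1: bounded thinking, stopped early if the model closes the block.
    think = httpx.post(API, json={
        "prompt": f"Question: {question}\n<think>\n",
        "max_tokens": think_budget,
        "stop": ["</think>"],
    }, timeout=120).json()["choices"][0]["text"]
    # Stage 2: close the block ourselves and ask for the final answer.
    final = httpx.post(API, json={
        "prompt": f"Question: {question}\n<think>\n{think}\n</think>\nAnswer:",
        "max_tokens": 128,
    }, timeout=120).json()["choices"][0]["text"]
    return final.strip()
```

Raising `think_budget` trades latency for accuracy; setting it to zero recovers a fast no-think mode.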
The AI Dopamine Overload: Confessions of an AI-Addicted Developer
1
[removed]
2025-06-08T16:22:32
https://www.reddit.com/r/LocalLLaMA/comments/1l6g54w/the_ai_dopamine_overload_confessions_of_an/
Soft_Ad1142
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6g54w
false
null
t3_1l6g54w
/r/LocalLLaMA/comments/1l6g54w/the_ai_dopamine_overload_confessions_of_an/
false
false
self
1
null
Is VRAM really king? 5070 12gb seems to beat the 5060Ti 16gb in LLMs
1
[removed]
2025-06-08T16:21:11
https://www.reddit.com/r/LocalLLaMA/comments/1l6g40r/is_vram_really_king_5070_12gb_seems_to_beat_the/
gildedtoiletseat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6g40r
false
null
t3_1l6g40r
/r/LocalLLaMA/comments/1l6g40r/is_vram_really_king_5070_12gb_seems_to_beat_the/
false
false
self
1
null
Is VRAM really king? 5070 12gb seems to beat the 5060Ti 16gb in LLMs
1
[removed]
2025-06-08T16:20:01
https://www.reddit.com/r/LocalLLaMA/comments/1l6g31f/is_vram_really_king_5070_12gb_seems_to_beat_the/
gildedtoiletseat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6g31f
false
null
t3_1l6g31f
/r/LocalLLaMA/comments/1l6g31f/is_vram_really_king_5070_12gb_seems_to_beat_the/
false
false
self
1
null
LLM performance in 5070 12 gb vs 5060 Ti 16 gb
1
[removed]
2025-06-08T16:04:26
https://www.reddit.com/r/LocalLLaMA/comments/1l6fppa/llm_performance_in_5070_12_gb_vs_5060_ti_16_gb/
graphicscardcustomer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6fppa
false
null
t3_1l6fppa
/r/LocalLLaMA/comments/1l6fppa/llm_performance_in_5070_12_gb_vs_5060_ti_16_gb/
false
false
self
1
null
Can we all admit that getting into local AI requires an unimaginable amount of knowledge in 2025?
0
I'm not saying that it's right or wrong, just that it requires knowing a lot to crack into it. I'm also not saying that I have a solution to this problem. We see so many posts daily asking which models they should use, what software and such. And those questions lead to... so many more questions that there is no way ...
2025-06-08T15:39:27
https://www.reddit.com/r/LocalLLaMA/comments/1l6f4ei/can_we_all_admit_that_getting_into_local_ai/
valdev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6f4ei
false
null
t3_1l6f4ei
/r/LocalLLaMA/comments/1l6f4ei/can_we_all_admit_that_getting_into_local_ai/
false
false
self
0
null
Croco.cpp and NXS_llama.cpp, forks of KoboldCpp and Llama.cpp
1
[removed]
2025-06-08T14:30:54
https://www.reddit.com/r/LocalLLaMA/comments/1l6dj58/crococpp_and_nxs_llamacpp_forks_of_koboldcpp_and/
Nexesenex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6dj58
false
null
t3_1l6dj58
/r/LocalLLaMA/comments/1l6dj58/crococpp_and_nxs_llamacpp_forks_of_koboldcpp_and/
false
false
self
1
null
Croco.cpp and NXS_llama.cpp, forks of KoboldCpp and Llama.cpp.
1
[removed]
2025-06-08T14:28:30
https://www.reddit.com/r/LocalLLaMA/comments/1l6dh5y/crococpp_and_nxs_llamacpp_forks_of_koboldcpp_and/
Nexesenex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6dh5y
false
null
t3_1l6dh5y
/r/LocalLLaMA/comments/1l6dh5y/crococpp_and_nxs_llamacpp_forks_of_koboldcpp_and/
false
false
self
1
null
Gemma 27B for creative Joycean writing, looking for other suggestions
1
[removed]
2025-06-08T14:19:36
https://www.reddit.com/gallery/1l6d9w4
SkyFeistyLlama8
reddit.com
1970-01-01T00:00:00
0
{}
1l6d9w4
false
null
t3_1l6d9w4
/r/LocalLLaMA/comments/1l6d9w4/gemma_27b_for_creative_joycean_writing_looking/
false
false
https://b.thumbs.redditm…ecJ8MOT57dJs.jpg
1
null
Run LLM locally on Android
1
[removed]
2025-06-08T14:16:57
https://www.reddit.com/r/LocalLLaMA/comments/1l6d7ne/run_llm_locally_on_android/
100daggers_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6d7ne
false
null
t3_1l6d7ne
/r/LocalLLaMA/comments/1l6d7ne/run_llm_locally_on_android/
false
false
self
1
null
Local LLM on Android Qwen3 + Thinking Mode
1
[removed]
2025-06-08T14:14:07
https://www.reddit.com/r/LocalLLaMA/comments/1l6d5c5/local_llm_on_android_qwen3_thinking_mode/
100daggers_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6d5c5
false
null
t3_1l6d5c5
/r/LocalLLaMA/comments/1l6d5c5/local_llm_on_android_qwen3_thinking_mode/
false
false
self
1
null
Local LLM on Android: Qwen3 Support, Thinking Mode, and Faster Qwen2.5 Inference (APK Inside)
1
[removed]
2025-06-08T14:05:26
https://www.reddit.com/r/LocalLLaMA/comments/1l6cy9c/local_llm_on_android_qwen3_support_thinking_mode/
100daggers_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6cy9c
false
null
t3_1l6cy9c
/r/LocalLLaMA/comments/1l6cy9c/local_llm_on_android_qwen3_support_thinking_mode/
false
false
self
1
null
My AI Coding Assistant Insisted I Need RAG for My Chatbot - But I Really Don't?
0
Been pulling my hair out for weeks because of conflicting advice, hoping someone can explain what I'm missing. **The Situation:** Building a chatbot for an AI podcast platform I'm developing. Need it to remember user preferences, past conversations, and about 50k words of creator-defined personality/background info. ...
2025-06-08T13:47:09
https://www.reddit.com/r/LocalLLaMA/comments/1l6cjti/my_ai_coding_assistant_insisted_i_need_rag_for_my/
Necessary-Tap5971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6cjti
false
null
t3_1l6cjti
/r/LocalLLaMA/comments/1l6cjti/my_ai_coding_assistant_insisted_i_need_rag_for_my/
false
false
self
0
null
AI Studio 'App' on iOS
0
2025-06-08T13:35:50
https://www.icloud.com/shortcuts/9cd63478017648cba611378ba372b19d
Accomplished_Mode170
icloud.com
1970-01-01T00:00:00
0
{}
1l6cbbr
false
null
t3_1l6cbbr
/r/LocalLLaMA/comments/1l6cbbr/ai_studio_app_on_ios/
false
false
default
0
null
How do I finetune Devstral with vision support?
0
Hey, so I'm kinda new to the local LLM world, but I managed to get my llama-server up and running locally on Windows with this HF repo: [https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF](https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF) I also managed to finetune an unsloth version of Devstra...
2025-06-08T13:03:27
https://www.reddit.com/r/LocalLLaMA/comments/1l6bn1t/how_do_i_finetune_devstral_with_vision_support/
svnflow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6bn1t
false
null
t3_1l6bn1t
/r/LocalLLaMA/comments/1l6bn1t/how_do_i_finetune_devstral_with_vision_support/
false
false
self
0
{'enabled': False, 'images': [{'id': 'XgEx3bmmG6fsK7CQz6kN8wOwIFrwcdRrTW9Huw1I_SI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WhpQblmsU_MjFdgJ4OfmB9tYyTztW0va_laiAePMCLo.jpg?width=108&crop=smart&auto=webp&s=617e32788586c095b536af93fa0eea66a7434c8e', 'width': 108}, {'height': 116, 'url': 'h...
Is the "I Can Run the 670B Deepseek R1 Locally" the new "Can it Run Crysis" Meme?
1
[removed]
2025-06-08T12:57:37
https://www.reddit.com/r/LocalLLaMA/comments/1l6biut/is_the_i_can_run_the_670b_deepseek_r1_locally_the/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6biut
false
null
t3_1l6biut
/r/LocalLLaMA/comments/1l6biut/is_the_i_can_run_the_670b_deepseek_r1_locally_the/
false
false
self
1
null
Weird interaction between agents xD
1
[removed]
2025-06-08T12:31:45
https://www.reddit.com/gallery/1l6b1ke
mdhv11
reddit.com
1970-01-01T00:00:00
0
{}
1l6b1ke
false
null
t3_1l6b1ke
/r/LocalLLaMA/comments/1l6b1ke/weird_interaction_between_agents_xd/
false
false
https://b.thumbs.redditm…oU8mLe7M1-vQ.jpg
1
null
Gigabyte AI-TOP-500-TRX50
28
Does this setup make any sense? A lot of RAM (768GB DDR5 - Threadripper PRO 7965WX platform), but only one RTX 5090 (32GB VRAM). It sounds strange to me to call this an AI platform. I would expect at least one RTX Pro 6000 with 96GB VRAM.
2025-06-08T12:24:30
https://www.gigabyte.com/us/Gaming-PC/AI-TOP-500-TRX50
Blizado
gigabyte.com
1970-01-01T00:00:00
0
{}
1l6awvn
false
null
t3_1l6awvn
/r/LocalLLaMA/comments/1l6awvn/gigabyte_aitop500trx50/
false
false
default
28
{'enabled': False, 'images': [{'id': 'Wm4QI12rr0yyFV7m7egJZLOjR87QJO_z7Qq_v28fdjI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/zGJOW1jt3CCqwFh7xP6bSoUFxg9bFJkmHI9ZJ2bsrds.jpg?width=108&crop=smart&auto=webp&s=533c25e95708ef5df356c007bb3146e9f3ad0bcf', 'width': 108}, {'height': 216, 'url': '...
Which local model do you think best reasons on topics that are not related to STEM? And for Spanish speakers, what is the best model in that language?
1
[removed]
2025-06-08T11:56:39
https://www.reddit.com/r/LocalLLaMA/comments/1l6aexv/which_local_model_do_you_think_best_reasons_on/
Roubbes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l6aexv
false
null
t3_1l6aexv
/r/LocalLLaMA/comments/1l6aexv/which_local_model_do_you_think_best_reasons_on/
false
false
self
1
null
I Built 50 AI Personalities - Here's What Actually Made Them Feel Human
658
Over the past 6 months, I've been obsessing over what makes AI personalities feel authentic vs robotic. After creating and testing 50 different personas for an AI audio platform I'm developing, here's what actually works. **The Setup:** Each persona had unique voice, background, personality traits, and response patter...
2025-06-08T11:25:12
https://www.reddit.com/r/LocalLLaMA/comments/1l69w7i/i_built_50_ai_personalities_heres_what_actually/
Necessary-Tap5971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l69w7i
false
null
t3_1l69w7i
/r/LocalLLaMA/comments/1l69w7i/i_built_50_ai_personalities_heres_what_actually/
false
false
self
658
null
Locally run coding assistant on Apple M2?
4
I'd like a Github Copilot style coding assistant (preferably for VSCode, but that's not really important) that I could run locally on my 2022 Macbook Air (M2, 16 GB RAM, 10 core GPU). I have a few questions: 1. Is it feasible with this hardware? Deepseek R1 8B on Ollama in the chat mode kinda works okay but a bit too...
2025-06-08T11:24:51
https://www.reddit.com/r/LocalLLaMA/comments/1l69vze/locally_ran_coding_assistant_on_apple_m2/
Defiant-Snow8782
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l69vze
false
null
t3_1l69vze
/r/LocalLLaMA/comments/1l69vze/locally_ran_coding_assistant_on_apple_m2/
false
false
self
4
null
Tech Stack for Minion Voice..
5
I am trying to clone a minion voice and enable my kids to speak to a minion. I just do not know how to clone a voice. I have 1 hour of minions speaking Minionese and can break it into smaller segments. I have: * MacBook * Ollama * Python3 Any suggestions on what I should do to enable the minion voice offline...
2025-06-08T10:15:49
https://www.reddit.com/r/LocalLLaMA/comments/1l68tgx/tech_stack_for_minion_voice/
chiknugcontinuum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l68tgx
false
null
t3_1l68tgx
/r/LocalLLaMA/comments/1l68tgx/tech_stack_for_minion_voice/
false
false
self
5
null
Confirmation that Qwen3-coder is in the works
315
Junyang Lin from Qwen team [mentioned this here](https://youtu.be/b0xlsQ_6wUQ?t=985).
2025-06-08T10:02:00
https://www.reddit.com/r/LocalLLaMA/comments/1l68m1m/confirmation_that_qwen3coder_is_in_works/
nullmove
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l68m1m
false
null
t3_1l68m1m
/r/LocalLLaMA/comments/1l68m1m/confirmation_that_qwen3coder_is_in_works/
false
false
self
315
{'enabled': False, 'images': [{'id': 'k0BFpsKvlGEppc_1fbUMbnxP5ghsEQejN-ic5SxiECM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wKLnxrouIC2D6bNcU8tzgzKecPM0BvGtfujsLPcUGjY.jpg?width=108&crop=smart&auto=webp&s=59f7f1451bb6a872f353a1141e94d6778a782cdd', 'width': 108}, {'height': 162, 'url': 'h...
Help with AI model recommendation
1
[removed]
2025-06-08T09:59:07
https://www.reddit.com/r/LocalLLaMA/comments/1l68ka5/help_with_ai_model_recommendation/
Grouchy-Staff-8361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l68ka5
false
null
t3_1l68ka5
/r/LocalLLaMA/comments/1l68ka5/help_with_ai_model_recommendation/
false
false
self
1
null
What is your sampler order (not sampler settings) for llama.cpp?
23
My current sampler order is `--samplers "dry;top_k;top_p;min_p;temperature"`. I've used it for a while and it seems to work well. I've found most of the inspiration in [this post](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). However, additional samplers have app...
2025-06-08T09:53:34
https://www.reddit.com/r/LocalLLaMA/comments/1l68hjc/what_is_your_sampler_order_not_sampler_settings/
Nindaleth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l68hjc
false
null
t3_1l68hjc
/r/LocalLLaMA/comments/1l68hjc/what_is_your_sampler_order_not_sampler_settings/
false
false
self
23
null
Kokoro.js for German?
1
[removed]
2025-06-08T09:36:15
https://www.reddit.com/r/LocalLLaMA/comments/1l688rt/kokorojs_for_german/
nic_key
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l688rt
false
null
t3_1l688rt
/r/LocalLLaMA/comments/1l688rt/kokorojs_for_german/
false
false
self
1
null
Create 2 and 3-bit GPTQ quantization for Qwen3-235B-A22B?
5
Hi! Maybe there is someone here who has already done such a quantization. Could you share it? Or maybe share a quantization method I could use with vLLM in the future? I plan to use it with 112GB total VRAM.
2025-06-08T09:10:49
https://www.reddit.com/r/LocalLLaMA/comments/1l67vkt/create_2_and_3bit_gptq_quantization_for/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67vkt
false
null
t3_1l67vkt
/r/LocalLLaMA/comments/1l67vkt/create_2_and_3bit_gptq_quantization_for/
false
false
self
5
null
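No ready-made 2/3-bit GPTQ of Qwen3-235B-A22B to point at here, but the general GPTQ recipe is scriptable. A sketch using AutoGPTQ, under the unverified assumption that it handles this MoE architecture; the calibration data and group size are arbitrary placeholder choices:

```python
# Hedged GPTQ quantization sketch (3-bit). Assumes AutoGPTQ supports the
# Qwen3-235B-A22B MoE architecture, which is NOT verified here.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "Qwen/Qwen3-235B-A22B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A real run needs a few hundred calibration samples from a relevant corpus;
# this single short string is only a placeholder.
calibration = [tokenizer("Example calibration text.", return_tensors="pt")]
examples = [{"input_ids": c["input_ids"], "attention_mask": c["attention_mask"]}
            for c in calibration]

quant_config = BaseQuantizeConfig(bits=3, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quant_config)
model.quantize(examples)
model.save_quantized("Qwen3-235B-A22B-GPTQ-3bit")
```

Worth checking before committing GPU time: vLLM's GPTQ kernels have historically been strongest at 4/8-bit, so 2/3-bit loading support there is itself an open question.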
Macbook Air M4: Worth going for 32GB or is bandwidth the bottleneck?
1
[removed]
2025-06-08T08:59:40
https://www.reddit.com/r/LocalLLaMA/comments/1l67ply/macbook_air_m4_worth_going_for_32gb_or_is/
broad_marker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67ply
false
null
t3_1l67ply
/r/LocalLLaMA/comments/1l67ply/macbook_air_m4_worth_going_for_32gb_or_is/
false
false
self
1
null
Need a tutorial on GPUs
0
To understand more about training and inference, I need to learn a bit more about how GPUs work, like stuff about SMs, warps, threads, ... . I'm not interested in GPU programming. Is there any video/course on this that is not too long? (shorter than 10 hours)
2025-06-08T08:57:07
https://www.reddit.com/r/LocalLLaMA/comments/1l67obk/need_a_tutorial_on_gpus/
DunderSunder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67obk
false
null
t3_1l67obk
/r/LocalLLaMA/comments/1l67obk/need_a_tutorial_on_gpus/
false
false
self
0
null
[In Development] Serene Pub, a simpler SillyTavern like roleplay client
28
I've been using Ollama to roleplay for a while now. SillyTavern has been fantastic, but I've had some frustrations with it. I've started developing my own application with the same copy-left license. I am at the point where I want to test the waters and get some feedback and gauge interest. [**Link to the project & s...
2025-06-08T08:44:14
https://www.reddit.com/r/LocalLLaMA/comments/1l67i14/in_development_serene_pub_a_simpler_sillytavern/
doolijb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67i14
false
null
t3_1l67i14
/r/LocalLLaMA/comments/1l67i14/in_development_serene_pub_a_simpler_sillytavern/
false
false
self
28
{'enabled': False, 'images': [{'id': 'Vr-dkzqy7wO0NXweCWge6EzB1AmYJbON5tbAvvWroio', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tKqJpLUoZRvqW1JN6I_V7JsXGqsFzFKrVyfpDrmC_4U.jpg?width=108&crop=smart&auto=webp&s=53eb895cd33bf255dc2f5e7e672def54f65f6339', 'width': 108}, {'height': 108, 'url': 'h...
Any good fine-tuning framework/system?
2
I want to fine-tune a complex AI process that will likely require fine-tuning multiple LLMs to perform different actions. Are there any good gateways, python libraries, or any other setup that you would recommend to collect data, create training dataset, measure performance, etc? Preferably an all-in-one solution?
2025-06-08T08:32:09
https://www.reddit.com/r/LocalLLaMA/comments/1l67c2a/any_good_finetuning_frameworksystem/
No_Heart_159
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67c2a
false
null
t3_1l67c2a
/r/LocalLLaMA/comments/1l67c2a/any_good_finetuning_frameworksystem/
false
false
self
2
null
Rig upgraded to 8x3090
428
About 1 year ago I posted about a [4 x 3090 build](https://www.reddit.com/r/LocalLLaMA/comments/1bqxfc0/another_4x3090_build/). This machine has been great for learning to fine-tune LLMs and produce synthetic datasets. However, even with deepspeed and 8B models, the maximum full fine-tune training context length wa...
2025-06-08T08:29:06
https://i.redd.it/7ios74ratn5f1.jpeg
lolzinventor
i.redd.it
1970-01-01T00:00:00
0
{}
1l67afp
false
null
t3_1l67afp
/r/LocalLLaMA/comments/1l67afp/rig_upgraded_to_8x3090/
false
false
https://b.thumbs.redditm…2mhWAxGpuhWE.jpg
428
{'enabled': True, 'images': [{'id': '-WvJw7ZZ_qSmswW2idiCv8wPrDK2tbRkhHlB5fcKeUU', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/7ios74ratn5f1.jpeg?width=108&crop=smart&auto=webp&s=96236796d070a68723ca116778095a5a79ea0886', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/7ios74ratn5f1.j...
Testing Frontier LLMs on 2025 Chinese Gaokao Math Problems - Fresh Benchmark Results
27
Tested frontier LLMs on yesterday's 2025 Chinese Gaokao (National College Entrance Examination) math problems (74 points total: 8 single-choice, 3 multiple-choice, 3 fill-in-blank). Since these were released June 7th, zero chance of training data contamination. [result](https://preview.redd.it/zj8lzkziwn5f1.png?width=...
2025-06-08T08:16:54
https://www.reddit.com/r/LocalLLaMA/comments/1l67457/testing_frontier_llms_on_2025_chinese_gaokao_math/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67457
false
null
t3_1l67457
/r/LocalLLaMA/comments/1l67457/testing_frontier_llms_on_2025_chinese_gaokao_math/
false
false
https://b.thumbs.redditm…xBF16DR3yH4U.jpg
27
null
Vision support in ChatterUI (albeit very slow)
47
Pre-release here: https://github.com/Vali-98/ChatterUI/releases/tag/v0.8.7-beta3 For the uninitiated, ChatterUI is an LLM chat client which can run models on your device or connect to proprietary/open source APIs. I've been working on getting attachments working in ChatterUI, and thanks to pocketpal's maintainer, ll...
2025-06-08T07:45:42
https://i.redd.it/zm7h1u2frn5f1.png
----Val----
i.redd.it
1970-01-01T00:00:00
0
{}
1l66nmv
false
null
t3_1l66nmv
/r/LocalLLaMA/comments/1l66nmv/vision_support_in_chatterui_albeit_very_slow/
false
false
https://b.thumbs.redditm…JZCH_ab_EVlE.jpg
47
{'enabled': True, 'images': [{'id': 'x4Hh47HuPNwLz-vc5KP7yhyAkHPIcEMd66gu9rRDZYc', 'resolutions': [{'height': 173, 'url': 'https://preview.redd.it/zm7h1u2frn5f1.png?width=108&crop=smart&auto=webp&s=939f9b5cd916ed90d5bf82509fe467e8d5f67966', 'width': 108}, {'height': 346, 'url': 'https://preview.redd.it/zm7h1u2frn5f1.pn...
the new Gemini 2.5 PRO reigns SUPREME
0
https://preview.redd.it/…e full version)?
2025-06-08T07:29:11
https://www.reddit.com/r/LocalLLaMA/comments/1l66ez8/the_new_gemini_25_pro_reigns_supreme/
DistributionOk2434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l66ez8
false
null
t3_1l66ez8
/r/LocalLLaMA/comments/1l66ez8/the_new_gemini_25_pro_reigns_supreme/
false
false
https://b.thumbs.redditm…z2qm7Ry3pv1k.jpg
0
null
the new Gemini 2.5 PRO reigns SUPREME
1
[removed]
2025-06-08T07:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1l66di4/the_new_gemini_25_pro_reigns_supreme/
CandidLeek319
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l66di4
false
null
t3_1l66di4
/r/LocalLLaMA/comments/1l66di4/the_new_gemini_25_pro_reigns_supreme/
false
false
https://a.thumbs.redditm…6_vCNbLQIp34.jpg
1
null
Motorola is integrating on-device local AI into its mobile phones
18
2025-06-08T07:24:07
https://i.redd.it/rok89w2cnn5f1.png
WordyBug
i.redd.it
1970-01-01T00:00:00
0
{}
1l66cbt
false
null
t3_1l66cbt
/r/LocalLLaMA/comments/1l66cbt/motorola_is_integrating_ondevice_local_ai_to_its/
false
false
https://b.thumbs.redditm…KAshObB1hM8g.jpg
18
{'enabled': True, 'images': [{'id': 'yw_NVxHG0MWYH3Ji3pVSUgWfxMEJSw4jYBczqrCEYS0', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/rok89w2cnn5f1.png?width=108&crop=smart&auto=webp&s=3c412268db362d859f2fb72454e8b8bacba81c34', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/rok89w2cnn5f1.pn...
Apple's new research paper on the limitations of "thinking" models
180
2025-06-08T07:22:03
https://machinelearning.apple.com/research/illusion-of-thinking
seasonedcurlies
machinelearning.apple.com
1970-01-01T00:00:00
0
{}
1l66b8a
false
null
t3_1l66b8a
/r/LocalLLaMA/comments/1l66b8a/apples_new_research_paper_on_the_limitations_of/
false
false
default
180
{'enabled': False, 'images': [{'id': 'D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=108&crop=smart&auto=webp&s=cdcbdf7d4e054676a9ea185723b2cca1b298211b', 'width': 108}, {'height': 113, 'url': 'h...
Instead of summarizing, cut filler words
1
[removed]
2025-06-08T07:14:48
https://www.reddit.com/r/LocalLLaMA/comments/1l667i6/instead_of_summarizing_cut_filler_words/
AlphaHusk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l667i6
false
null
t3_1l667i6
/r/LocalLLaMA/comments/1l667i6/instead_of_summarizing_cut_filler_words/
false
false
self
1
null
Best models by size?
36
I am confused about how to find benchmarks that tell me the strongest model for math/coding by size. I want to know which local model is the strongest that can fit in 16GB of RAM (no GPU). I would also like to know the same thing for 32GB. Where should I be looking for this info?
2025-06-08T06:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1l65r2k/best_models_by_size/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l65r2k
false
null
t3_1l65r2k
/r/LocalLLaMA/comments/1l65r2k/best_models_by_size/
false
false
self
36
null
Motorola mobile phones to soon have on-device local LLMs
0
2025-06-08T06:32:12
https://i.redd.it/2ey1d1kxdn5f1.png
WordyBug
i.redd.it
1970-01-01T00:00:00
0
{}
1l65kta
false
null
t3_1l65kta
/r/LocalLLaMA/comments/1l65kta/motorola_mobile_phones_to_soon_have_ondevice/
false
false
default
0
{'enabled': True, 'images': [{'id': '2ey1d1kxdn5f1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/2ey1d1kxdn5f1.png?width=108&crop=smart&auto=webp&s=9c3e49680858036215c08224143222706d18d608', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/2ey1d1kxdn5f1.png?width=216&crop=smart&auto=we...
# [Tool Release] Poor AI – A Self-Generating CLI for AI-Assisted Development
1
[removed]
2025-06-08T06:17:00
https://www.reddit.com/r/LocalLLaMA/comments/1l65ci3/tool_release_poor_ai_a_selfgenerating_cli_for/
DrinkMean4332
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l65ci3
false
null
t3_1l65ci3
/r/LocalLLaMA/comments/1l65ci3/tool_release_poor_ai_a_selfgenerating_cli_for/
false
false
self
1
null
Question: how do you use these beasts to earn money?
1
[removed]
2025-06-08T04:56:16
https://www.reddit.com/r/LocalLLaMA/comments/1l643gg/question_how_do_you_use_these_beasts_to_earn_money/
absolute-calm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l643gg
false
null
t3_1l643gg
/r/LocalLLaMA/comments/1l643gg/question_how_do_you_use_these_beasts_to_earn_money/
false
false
self
1
null