Dataset schema:

| column | dtype | min | max |
|---|---|---|---|
| title | string (lengths) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (lengths) | 0 | 41.5k |
| created | timestamp[ns] (date) | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (lengths) | 0 | 878 |
| author | string (lengths) | 3 | 20 |
| domain | string (lengths) | 0 | 82 |
| edited | timestamp[ns] (date) | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (lengths) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (lengths) | 646 | 1.8k |
| name | string (lengths) | 10 | 10 |
| permalink | string (lengths) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (lengths) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (lengths) | 301 | 5.01k |
title: Cosmos-Reason1: Physical AI Common Sense and Embodied Reasoning Models
score: 33 | ups: 33 | created: 2025-05-24T14:55:18 | author: AaronFeng47 | domain: self.LocalLLaMA
selftext: [https://huggingface.co/nvidia/Cosmos-Reason1-7B](https://huggingface.co/nvidia/Cosmos-Reason1-7B) >Description: >**Cosmos-Reason1 Models**: Physical AI models understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning processes. >The Cosm...
url: https://www.reddit.com/r/LocalLLaMA/comments/1kudhxg/cosmosreason1_physical_ai_common_sense_and/
permalink: /r/LocalLLaMA/comments/1kudhxg/cosmosreason1_physical_ai_common_sense_and/
id: 1kudhxg | name: t3_1kudhxg | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: {'enabled': False, 'images': [{'id': 'PP4v5icJckC1UgqJAi18wLrBKxBkgUnIfsvSvn24T0A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gQs-j5ivrIccG0WclsTVITgKpOZ-GfPXr_VIuJ-bKLQ.jpg?width=108&crop=smart&auto=webp&s=4d41e02ae761c8982350622100fed0d048e42982', 'width': 108}, {'height': 116, 'url': 'h...

title: I compared Claude 4 with Gemini 2.5 Pro
score: 0 | ups: 0 | created: 2025-05-24T14:52:43 | author: Arindam_200 | domain: self.LocalLLaMA
selftext: I’ve been using Claude 4 and Gemini 2.5 Pro side by side for a while now, mostly for writing, coding, and general problem-solving, and decided to write up a full comparison. Here’s what stood out to me from testing both over the past few weeks: **Where Claude 4 leads:** Claude is noticeably better when it comes to s...
url: https://www.reddit.com/r/LocalLLaMA/comments/1kudfwm/i_compared_claude_4_with_gemini_25_pro/
permalink: /r/LocalLLaMA/comments/1kudfwm/i_compared_claude_4_with_gemini_25_pro/
id: 1kudfwm | name: t3_1kudfwm | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: {'enabled': False, 'images': [{'id': '1fS1_tAqNBoatIEfS5_hYdlbWK8oGwV6B1rZTWgjKKc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=108&crop=smart&auto=webp&s=a1ff461fc6337bd53890f0303ed97949b1c99883', 'width': 108}, {'height': 121, 'url': 'h...

title: RamaLama
score: 1 | ups: 1 | created: 2025-05-24T14:21:30 | author: _-noiro-_ | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1kucrcy/ramalama/
permalink: /r/LocalLLaMA/comments/1kucrcy/ramalama/
id: 1kucrcy | name: t3_1kucrcy | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'h...

title: AI newbie building a PC for local AI - 4090 GPU pricing?
score: 1 | ups: 1 | created: 2025-05-24T14:08:19 | author: cookieoutlaw | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1kucha0/ai_newbie_building_a_pc_for_local_ai_4090_gpu/
permalink: /r/LocalLLaMA/comments/1kucha0/ai_newbie_building_a_pc_for_local_ai_4090_gpu/
id: 1kucha0 | name: t3_1kucha0 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: LLM help for recovering deleted data?
score: 3 | ups: 3 | created: 2025-05-24T14:07:41 | author: dreamyrhodes | domain: self.LocalLLaMA
selftext: So recently I had a mishap and lost most of my /home. I am currently in the process of restoring data. Images are simple, I will just browse through them, delete the thumbnail cache crap and move what I wanna keep. MP3s I can rename with a script analyzing their metadata. But the recovery process also collected a few h...
url: https://www.reddit.com/r/LocalLLaMA/comments/1kucgs2/llm_help_for_recovering_deleted_data/
permalink: /r/LocalLLaMA/comments/1kucgs2/llm_help_for_recovering_deleted_data/
id: 1kucgs2 | name: t3_1kucgs2 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: I own an rtx 3060, what card should I add? Budget is 300€
score: 5 | ups: 5 | created: 2025-05-24T14:05:53 | author: legit_split_ | domain: self.LocalLLaMA
selftext: Mostly do basic inference with casual 1080p gaming 300€ budget, some used options: \- 2nd 3060 \- 2080 Ti \- arc A770 or b580 \- rx 6800 or 6700xt I know the 9060 xt is coming out but it would be 349$ new with lower bandwidth than the 3060...
url: https://www.reddit.com/r/LocalLLaMA/comments/1kucfc2/i_own_an_rtx_3060_what_card_should_i_add_budget/
permalink: /r/LocalLLaMA/comments/1kucfc2/i_own_an_rtx_3060_what_card_should_i_add_budget/
id: 1kucfc2 | name: t3_1kucfc2 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Best open-source real time TTS ?
score: 12 | ups: 12 | created: 2025-05-24T14:02:03 | author: Prestigious-Ant-4348 | domain: self.LocalLLaMA
selftext: Hello everyone, I’m building a website that allows users to practice interviews with a virtual examiner. This means I need a real-time, voice-to-voice solution with low latency and reasonable cost. The business model is as follows: for example, a customer pays $10 for a 20-minute mock interview. The interview script ...
url: https://www.reddit.com/r/LocalLLaMA/comments/1kuccaq/best_opensource_real_time_tts/
permalink: /r/LocalLLaMA/comments/1kuccaq/best_opensource_real_time_tts/
id: 1kuccaq | name: t3_1kuccaq | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: cyanheads/pubmed-mcp-server: An MCP server enabling AI agents to intelligently search, retrieve, and analyze biomedical literature from PubMed via NCBI E-utilities. Includes a research agent scaffold. Built on the mcp-ts-template for robust, production-ready performance. STDIO & HTTP
score: 1 | ups: 1 | created: 2025-05-24T13:52:17 | author: cyanheads | domain: github.com
selftext: [removed]
url: https://github.com/cyanheads/pubmed-mcp-server
permalink: /r/LocalLLaMA/comments/1kuc4mk/cyanheadspubmedmcpserver_an_mcp_server_enabling/
id: 1kuc4mk | name: t3_1kuc4mk | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://a.thumbs.redditm…Q7cYnyPypau0.jpg | media: null
preview: {'enabled': False, 'images': [{'id': 'TzG4eCTnjC6R5o6ny86hUVfbyjKAg-tmJY2kmzYBJps', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kflx8RKmi-xpM45J9K_DWgiWBHjOOygzx-SvciUufpM.jpg?width=108&crop=smart&auto=webp&s=d8dfb5b0762899deab0cd0b0608dcc129aa288c4', 'width': 108}, {'height': 108, 'url': 'h...
title: New AI concept: "Memory" without storage - The Persistent Semantic State (PSS)
score: 0 | ups: 0 | created: 2025-05-24T13:11:43 | author: scheitelpunk1337 | domain: self.LocalLLaMA
selftext: I have been working on a theoretical concept for AI systems for the last few months and would like to hear your opinion on it. My idea: What if an AI could "remember" you - but WITHOUT storing anything? Think of it like a guitar string: if you hit the same note over and over again, it will vibrate at that frequency....
url: https://www.reddit.com/r/LocalLLaMA/comments/1kub9xt/new_ai_concept_memory_without_storage_the/
permalink: /r/LocalLLaMA/comments/1kub9xt/new_ai_concept_memory_without_storage_the/
id: 1kub9xt | name: t3_1kub9xt | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: {'enabled': False, 'images': [{'id': '8kulXKlPpKNNTWZ0AXOkUR4Yg4IGeZQTrX8hztyk5CE', 'resolutions': [{'height': 152, 'url': 'https://external-preview.redd.it/kd6p6Hy0HoTWKJCJ4VBUDquH1XOxhncBPM-R93gCf58.jpg?width=108&crop=smart&auto=webp&s=4877194fda8c66ab2239da616e82de19d689d00b', 'width': 108}, {'height': 305, 'url': '...

title: Help with guardrails ai and local ollama model
score: 0 | ups: 0 | created: 2025-05-24T13:10:47 | author: mattyp789 | domain: self.LocalLLaMA
selftext: I am pretty new to LLMs and am struggling a little bit with getting guardrails ai server setup. I am running ollama/mistral and guardrails-lite-server in docker containers locally. I have litellm proxying to the ollama model. Curl http://localhost:8000/guards/profguard shows me that my guard is running. From the ...
url: https://www.reddit.com/r/LocalLLaMA/comments/1kub9ah/help_with_guardrails_ai_and_local_ollama_model/
permalink: /r/LocalLLaMA/comments/1kub9ah/help_with_guardrails_ai_and_local_ollama_model/
id: 1kub9ah | name: t3_1kub9ah | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Gave my AI the ability to journal. Did not expect this level of drama
score: 1 | ups: 1 | created: 2025-05-24T12:26:05 | author: TableNo7866 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1kuaexn/gave_my_ai_the_ability_to_journal_did_not_expect/
permalink: /r/LocalLLaMA/comments/1kuaexn/gave_my_ai_the_ability_to_journal_did_not_expect/
id: 1kuaexn | name: t3_1kuaexn | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: AI discovers it can build its own tools, documents the experience
score: 1 | ups: 1 | created: 2025-05-24T12:22:34 | author: TableNo7866 | domain: medium.com
selftext: [removed]
url: https://medium.com/@galbenbeniste/i-cant-stop-shaking-the-first-ai-journal-68c6a05efb13
permalink: /r/LocalLLaMA/comments/1kuacpw/ai_discovers_it_can_build_its_own_tools_documents/
id: 1kuacpw | name: t3_1kuacpw | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: default | media: null
preview: null

title: On the go native GPU inference and chatting with Gemma 3n E4B on an old S21 Ultra Snapdragon!
score: 46 | ups: 46 | created: 2025-05-24T12:07:15 | author: lets_theorize | domain: i.redd.it
url: https://i.redd.it/elvym2oe0q2f1.png
permalink: /r/LocalLLaMA/comments/1kua2u0/on_the_go_native_gpu_inference_and_chatting_with/
id: 1kua2u0 | name: t3_1kua2u0 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://a.thumbs.redditm…r7GYyA5a3IQ0.jpg | media: null
preview: {'enabled': True, 'images': [{'id': 'OguNnhPh3t0mp53s-LsVKZUmbPXbhVmqLVh2gvhLA8k', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/elvym2oe0q2f1.png?width=108&crop=smart&auto=webp&s=ab6ab2d7e67d2957c54026399fd5e248b1a41123', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/elvym2oe0q2f1.pn...

title: RL Based Sales Conversion - I Just built a PyPI package
score: 5 | ups: 5 | created: 2025-05-24T12:04:13 | author: Nandakishor_ml | domain: i.redd.it
selftext: My idea is to create pure Reinforcement learning that understand the infinite branches of sales conversations. Then predict the conversion probability of each conversation turns, as it progress indefinetly, then use these probabilities to guide the LLM to move towards those branches that leads to conversion. The pipel...
url: https://i.redd.it/mkd4apfqyp2f1.png
permalink: /r/LocalLLaMA/comments/1kua0su/rl_based_sales_conversion_i_just_built_a_pypi/
id: 1kua0su | name: t3_1kua0su | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://b.thumbs.redditm…wZfTUPMU1lwk.jpg | media: null
preview: {'enabled': True, 'images': [{'id': 'KNfFUrRiFtZEANizzyrYXgiqE4Q4GtQnfk2HMW4ThBA', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/mkd4apfqyp2f1.png?width=108&crop=smart&auto=webp&s=4f15490c255921abc085cc5f0d792f9feed6cbd0', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/mkd4apfqyp2f1.png...

title: Whats the next step of ai?
score: 1 | ups: 1 | created: 2025-05-24T11:53:35 | author: Fit-Eggplant-2258 | domain: self.LocalLLaMA
selftext: Yall think the current stuff is gonna hit a plateau at some point? Training huge models with so much cost and required data seems to have a limit. Could something different be the next advancement? Maybe like RL which optimizes through experience over data. Or even different hardware like neuromorphic chips
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku9u2e/whats_the_next_step_of_ai/
permalink: /r/LocalLLaMA/comments/1ku9u2e/whats_the_next_step_of_ai/
id: 1ku9u2e | name: t3_1ku9u2e | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Running Devstral on Codex: How to Manage Context?
score: 1 | ups: 1 | created: 2025-05-24T11:30:31 | author: chibop1 | domain: self.LocalLLaMA
selftext: I'm trying out `codex -p ollama` with devstral, and Codex can communicate with the model properly. I'm wondering how I can add/remove file from context? If I run `codex -f`, it adds all the files including assets in binary. Also how do you set the maximum context size? Thanks!
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku9g6u/running_devstral_on_codex_how_to_manage_context/
permalink: /r/LocalLLaMA/comments/1ku9g6u/running_devstral_on_codex_how_to_manage_context/
id: 1ku9g6u | name: t3_1ku9g6u | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null
title: Modelo para interpretação de texto local 8gb vram
score: 1 | ups: 1 | created: 2025-05-24T11:27:20 | author: ConstructionFit2425 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku9ecm/modelo_para_interpretação_de_texto_local_8gb_vram/
permalink: /r/LocalLLaMA/comments/1ku9ecm/modelo_para_interpretação_de_texto_local_8gb_vram/
id: 1ku9ecm | name: t3_1ku9ecm | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: LLM long-term memory improvement.
score: 74 | ups: 74 | created: 2025-05-24T11:12:20 | author: Dem0lari | domain: self.LocalLLaMA
selftext: Hey everyone, I've been working on a concept for a node-based memory architecture for LLMs, inspired by cognitive maps, biological memory networks, and graph-based data storage. Instead of treating memory as a flat log or embedding space, this system stores contextual knowledge as a web of tagged nodes, connected sem...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku95nk/llm_longterm_memory_improvement/
permalink: /r/LocalLLaMA/comments/1ku95nk/llm_longterm_memory_improvement/
id: 1ku95nk | name: t3_1ku95nk | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: {'enabled': False, 'images': [{'id': 'VefBJ83A6Xj57LHeG1vjXgOGWYFQ_zOlE4-YPsJV6Ig', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cZttGcfgFvMR0DVwetcmnBlPh_3IXJQuiUnVz2On9Rs.jpg?width=108&crop=smart&auto=webp&s=77b01217a69e367e5508df9afa60d459c4a45df6', 'width': 108}, {'height': 108, 'url': 'h...

title: MCP server or Agentic AI open source tool to connect LLM to any codebase
score: 1 | ups: 1 | created: 2025-05-24T10:26:53 | author: Soft-Salamander7514 | domain: self.LocalLLaMA
selftext: Hello, I'm looking for something(framework or MCP server) open-source that I could use to connect llm agents to very large codebases that are able to do large scale edits, even on entire codebase, autonomously, following some specified rules.
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku8grf/mcp_server_or_agentic_ai_open_source_tool_to/
permalink: /r/LocalLLaMA/comments/1ku8grf/mcp_server_or_agentic_ai_open_source_tool_to/
id: 1ku8grf | name: t3_1ku8grf | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: MCP server to connect LLM agents to any database
score: 97 | ups: 97 | created: 2025-05-24T10:10:30 | author: RaeudigerRaffi | domain: self.LocalLLaMA
selftext: Hello everyone, my startup sadly failed, so I decided to convert it to an open source project since we actually built alot of internal tools. The result is todays release [Turbular](https://github.com/raeudigerRaeffi/turbular). Turbular is an MCP server under the MIT license that allows you to connect your LLM agent to...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku8861/mcp_server_to_connect_llm_agents_to_any_database/
permalink: /r/LocalLLaMA/comments/1ku8861/mcp_server_to_connect_llm_agents_to_any_database/
id: 1ku8861 | name: t3_1ku8861 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: {'enabled': False, 'images': [{'id': 'f1ME-ENCNrqGIcLUAz8m-0FMvdaMiGgVwpWyXXccdkI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YQdflvu_AZ6C2EtS_BweqJNtVEWuixCxn7r4cZ9BGHg.jpg?width=108&crop=smart&auto=webp&s=da8201d3dbd45b924df6425c1d84e99507899e3b', 'width': 108}, {'height': 108, 'url': 'h...

title: I bought a water-cooled modified RTX 4090D with 48GB VRAM. Is there anything you'd like me to test?
score: 1 | ups: 1 | created: 2025-05-24T09:42:27 | author: Tiredwanttosleep | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku7t9n/i_bought_a_watercooled_modified_rtx_4090d_with/
permalink: /r/LocalLLaMA/comments/1ku7t9n/i_bought_a_watercooled_modified_rtx_4090d_with/
id: 1ku7t9n | name: t3_1ku7t9n | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://b.thumbs.redditm…tmH7nevrfwqs.jpg | media: null
preview: null

title: AMD GPU support
score: 10 | ups: 10 | created: 2025-05-24T09:36:49 | author: Fade_Yeti | domain: self.LocalLLaMA
selftext: Hi all. I am looking to upgrade the GPU in my server with something with more than 8GB VRAM. How is AMD in the space at the moment in regards to support on linux? Here are the 2 options: Radeon RX 7800 XT 16GB GeForce RTX 4060 Ti 16GB GeForce RTX 5060 Ti OC 16G Any advice would be greatly appreciated
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku7qe6/amd_gpu_support/
permalink: /r/LocalLLaMA/comments/1ku7qe6/amd_gpu_support/
id: 1ku7qe6 | name: t3_1ku7qe6 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Quantum AI ML Agent Science Fair Project 2025
score: 0 | ups: 0 | created: 2025-05-24T09:30:28 | author: Financial_Pick8394 | domain: /r/LocalLLaMA/comments/1ku7n3p/quantum_ai_ml_agent_science_fair_project_2025/
url: https://v.redd.it/7r7yrfn14p2f1
permalink: /r/LocalLLaMA/comments/1ku7n3p/quantum_ai_ml_agent_science_fair_project_2025/
id: 1ku7n3p | name: t3_1ku7n3p | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://external-preview…08fffe444c3a6f92
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7r7yrfn14p2f1/DASHPlaylist.mpd?a=1750800632%2CNzYwYmI1MTNmMmY3NmE4MjQwNjllZjQ1OGI2YmM0YjZlNWE3YTgyYjBlYzZiYzY2OWViZmZiNDgzN2M0NzE1NQ%3D%3D&v=1&f=sd', 'duration': 268, 'fallback_url': 'https://v.redd.it/7r7yrfn14p2f1/DASH_1080.mp4?source=fallback', '...
preview: {'enabled': False, 'images': [{'id': 'NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY.png?width=108&crop=smart&format=pjpg&auto=webp&s=209ea4ba1be1c65246c93344a652f1c7f5bd...

title: I bought a water-cooled modified RTX 4090D with 48GB VRAM. Is there anything you'd like me to test?
score: 1 | ups: 1 | created: 2025-05-24T09:28:29 | author: Tiredwanttosleep | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku7m3s/i_bought_a_watercooled_modified_rtx_4090d_with/
permalink: /r/LocalLLaMA/comments/1ku7m3s/i_bought_a_watercooled_modified_rtx_4090d_with/
id: 1ku7m3s | name: t3_1ku7m3s | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://b.thumbs.redditm…z1KdSqlxIxFs.jpg | media: null
preview: null
title: I'm just gonna leave this here
score: 1 | ups: 1 | created: 2025-05-24T09:24:41 | author: Several-System1535 | domain: i.redd.it
url: https://i.redd.it/0i78z8857p2f1.jpeg
permalink: /r/LocalLLaMA/comments/1ku7k6u/im_just_gonna_leave_this_here/
id: 1ku7k6u | name: t3_1ku7k6u | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://b.thumbs.redditm…zBy1GjuuaaMA.jpg | media: null
preview: {'enabled': True, 'images': [{'id': '61hf7-gqFOlVUauTWmNm5iE3SmNqeLCZYm7LrNZgsHM', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/0i78z8857p2f1.jpeg?width=108&crop=smart&auto=webp&s=8e48aed7be63cfeb26c6648b75625ad9b85a9de7', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/0i78z8857p2f1.jp...

title: Your personal Turing tests
score: 1 | ups: 1 | created: 2025-05-24T08:42:28 | author: redalvi | domain: self.LocalLLaMA
selftext: Reading this: https://www.reddit.com/r/LocalLLaMA/comments/1j4x8sq/new_qwq_is_beating_any_distil_deepseek_model_in/?sort=new I asked myself: what are your benchmark questions to assess the quality level of a model? Mi top 3 are: 1 There is a rooster that builds a nest at the top of a large tree at a height of 10 mete...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku6yvc/your_personal_turing_tests/
permalink: /r/LocalLLaMA/comments/1ku6yvc/your_personal_turing_tests/
id: 1ku6yvc | name: t3_1ku6yvc | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: How much VRAM would even a smaller model take to get 1 million context model like Gemini 2.5 flash/pro?
score: 121 | ups: 121 | created: 2025-05-24T08:38:08 | author: TumbleweedDeep825 | domain: self.LocalLLaMA
selftext: Trying to convince myself not to waste money on a localLLM setup that I don't need since gemini 2.5 flash is cheaper and probably faster than anything I could build. Let's say 1 million context is impossible. What about 200k context?
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku6wol/how_much_vram_would_even_a_smaller_model_take_to/
permalink: /r/LocalLLaMA/comments/1ku6wol/how_much_vram_would_even_a_smaller_model_take_to/
id: 1ku6wol | name: t3_1ku6wol | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: What kind of pipeline you have for feeding the LLM output to improve prompt?
score: 1 | ups: 1 | created: 2025-05-24T08:31:47 | author: raxrb | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku6tky/what_kind_of_pipeline_you_have_for_feeding_the/
permalink: /r/LocalLLaMA/comments/1ku6tky/what_kind_of_pipeline_you_have_for_feeding_the/
id: 1ku6tky | name: t3_1ku6tky | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Best local model for code autocompletion
score: 1 | ups: 1 | created: 2025-05-24T07:53:25 | author: Educational-Shoe9300 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku6a2b/best_local_model_for_code_autocompletion/
permalink: /r/LocalLLaMA/comments/1ku6a2b/best_local_model_for_code_autocompletion/
id: 1ku6a2b | name: t3_1ku6a2b | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Hardware requirements
score: 1 | ups: 1 | created: 2025-05-24T07:45:38 | author: PlantainRegular9603 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku6622/hardware_requirements/
permalink: /r/LocalLLaMA/comments/1ku6622/hardware_requirements/
id: 1ku6622 | name: t3_1ku6622 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: 🎙️ Offline Speech-to-Text with NVIDIA Parakeet-TDT 0.6B v2
score: 1 | ups: 1 | created: 2025-05-24T07:36:28 | author: srireddit2020 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku61ho/offline_speechtotext_with_nvidia_parakeettdt_06b/
permalink: /r/LocalLLaMA/comments/1ku61ho/offline_speechtotext_with_nvidia_parakeettdt_06b/
id: 1ku61ho | name: t3_1ku61ho | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://b.thumbs.redditm…DdByBPCh8DxQ.jpg | media: null
preview: {'enabled': False, 'images': [{'id': 'PrxhDh6SmcLcUZ54sXLyejHndv-QociEgKr1_efW9FE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=108&crop=smart&auto=webp&s=4d30f91364c95fc36334e172e3ca8303d977ae80', 'width': 108}, {'height': 144, 'url': 'h...

title: [Devstral] Why is it responding in non-'merica letters?
score: 0 | ups: 0 | created: 2025-05-24T06:58:39 | author: LsDmT | domain: self.LocalLLaMA
selftext: No but really.. I have no idea why this is happening Loading Chat Completions Adapter: C:\Users\ADMINU~1\AppData\Local\Temp\_MEI492322\kcpp_adapters\AutoGuess.json Chat Completions Adapter Loaded Auto Recommended GPU Layers: 25 Initializing dynamic library: koboldcpp_cublas.dll ========== Names...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku5htu/devstral_why_is_it_responding_in_nonmerica_letters/
permalink: /r/LocalLLaMA/comments/1ku5htu/devstral_why_is_it_responding_in_nonmerica_letters/
id: 1ku5htu | name: t3_1ku5htu | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null
title: Prompt Debugging
score: 8 | ups: 8 | created: 2025-05-24T06:52:49 | author: Feeling-Currency-360 | domain: self.LocalLLaMA
selftext: Hi all I have this idea and I wonder if it's possible, I think it's possible but just want to gather some community feedback. We all know that transformers can have attention issues where some tokens get over-attended to while others are essentially ignored. This can lead to frustrating situations where our prompts d...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku5etr/prompt_debugging/
permalink: /r/LocalLLaMA/comments/1ku5etr/prompt_debugging/
id: 1ku5etr | name: t3_1ku5etr | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: What Models for C/C++?
score: 22 | ups: 22 | created: 2025-05-24T06:48:07 | author: Aroochacha | domain: self.LocalLLaMA
selftext: I've been using unsloth/Qwen2.5-Coder-32B-Instruct-128K-GGUF (int 8.) Worked great for small stuff (one header/.c implementation) moreover it hallucinated when I had it evaluate a kernel api I wrote. (6 files.) What are people using? I am curious if any model that is good at C is also good at shader code. I am runni...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku5cfe/what_models_for_cc/
permalink: /r/LocalLLaMA/comments/1ku5cfe/what_models_for_cc/
id: 1ku5cfe | name: t3_1ku5cfe | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Claude 4 first impressions: Anthropic’s latest model actually matters (hands-on)
score: 36 | ups: 36 | created: 2025-05-24T06:27:36 | author: West-Chocolate2977 | domain: self.LocalLLaMA
selftext: Anthropic recently unveiled Claude 4 (Opus and Sonnet), achieving record-breaking 72.7% performance on SWE-bench Verified and surpassing OpenAI’s latest models. Benchmarks aside, I wanted to see how Claude 4 holds up under real-world software engineering tasks. I spent the last 24 hours putting it through intensive te...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku51hg/claude_4_first_impressions_anthropics_latest/
permalink: /r/LocalLLaMA/comments/1ku51hg/claude_4_first_impressions_anthropics_latest/
id: 1ku51hg | name: t3_1ku51hg | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Gmkt3c evo-x2 or custom build
score: 1 | ups: 1 | created: 2025-05-24T05:39:07 | author: BidReject | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku4bbo/gmkt3c_evox2_or_custom_build/
permalink: /r/LocalLLaMA/comments/1ku4bbo/gmkt3c_evox2_or_custom_build/
id: 1ku4bbo | name: t3_1ku4bbo | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Open source model which good at tool calling?
score: 1 | ups: 1 | created: 2025-05-24T05:38:49 | author: Superb_Practice_4544 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku4b68/open_source_model_which_good_at_tool_calling/
permalink: /r/LocalLLaMA/comments/1ku4b68/open_source_model_which_good_at_tool_calling/
id: 1ku4b68 | name: t3_1ku4b68 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: I put together a memory protocol after ChatGPT slowly dropped mine
score: 1 | ups: 1 | created: 2025-05-24T04:57:16 | author: konig-ophion | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku3nne/i_put_together_a_memory_protocol_after_chatgpt/
permalink: /r/LocalLLaMA/comments/1ku3nne/i_put_together_a_memory_protocol_after_chatgpt/
id: 1ku3nne | name: t3_1ku3nne | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Why cant i use my system ram in LMStudio OpenSuse Tumbleweed
score: 1 | ups: 1 | created: 2025-05-24T04:48:12 | author: nanomax55 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku3iif/why_cant_i_use_my_system_ram_in_lmstudio_opensuse/
permalink: /r/LocalLLaMA/comments/1ku3iif/why_cant_i_use_my_system_ram_in_lmstudio_opensuse/
id: 1ku3iif | name: t3_1ku3iif | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: LM Studio 0.3.16 Released
score: 1 | ups: 1 | created: 2025-05-24T02:46:27 | author: Hanthunius | domain: lmstudio.ai
selftext: [removed]
url: https://lmstudio.ai/blog/lmstudio-v0.3.16
permalink: /r/LocalLLaMA/comments/1ku1fgi/lm_studio_0316_released/
id: 1ku1fgi | name: t3_1ku1fgi | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://a.thumbs.redditm…Ymzf5I1N0qm8.jpg | media: null
preview: {'enabled': False, 'images': [{'id': 'v3UIfCJZg3iZ4-uyLENvEYYuJNQvOgQfGclLgEQrP88', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=108&crop=smart&auto=webp&s=2650eb1c7472b5654084865b25cfc3e3c40fe8d3', 'width': 108}, {'height': 113, 'url': 'h...
title: A Privacy-Focused Perplexity That Runs Locally on Your Phone
score: 68 | ups: 68 | created: 2025-05-24T02:28:43 | author: Ssjultrainstnict | domain: self.LocalLLaMA
selftext: Hey r/LocalLlama! 👋 I wanted to share **MyDeviceAI** \- a completely private alternative to Perplexity that runs entirely on your device. If you're tired of your search queries being sent to external servers and want the power of AI search without the privacy trade-offs, this might be exactly what you're looking for....
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku1444/a_privacyfocused_perplexity_that_runs_locally_on/
permalink: /r/LocalLLaMA/comments/1ku1444/a_privacyfocused_perplexity_that_runs_locally_on/
id: 1ku1444 | name: t3_1ku1444 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: {'enabled': False, 'images': [{'id': '_7vv-xzI257bN17gmsRQB9o502_chuq76bhLZhNoV3c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H2OOCv1bv050E4CBNwcheCR0p5galvx3UpT4d2t0NLs.jpg?width=108&crop=smart&auto=webp&s=54dd364b3e06c51c826ecdb04d4d91b2127501a6', 'width': 108}, {'height': 108, 'url': 'h...

title: LM Studio 0.3.16 released.
score: 1 | ups: 1 | created: 2025-05-24T02:28:09 | author: Hanthunius | domain: lmstudio.ai
url: https://lmstudio.ai/blog/lmstudio-v0.3.16
permalink: /r/LocalLLaMA/comments/1ku13ra/lm_studio_0316_released/
id: 1ku13ra | name: t3_1ku13ra | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://a.thumbs.redditm…Ymzf5I1N0qm8.jpg | media: null
preview: {'enabled': False, 'images': [{'id': 'v3UIfCJZg3iZ4-uyLENvEYYuJNQvOgQfGclLgEQrP88', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=108&crop=smart&auto=webp&s=2650eb1c7472b5654084865b25cfc3e3c40fe8d3', 'width': 108}, {'height': 113, 'url': 'h...

title: VlLAMP
score: 1 | ups: 1 | created: 2025-05-24T02:10:23 | author: orangesk14 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ku0sbv/vllamp/
permalink: /r/LocalLLaMA/comments/1ku0sbv/vllamp/
id: 1ku0sbv | name: t3_1ku0sbv | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://b.thumbs.redditm…Pfc-fvoTCo8Y.jpg | media: null
preview: null

title: I'm Building an AI Interview Prep Tool to Get Real Feedback on Your Answers - Using Ollama and Multi Agents using Agno
score: 3 | ups: 3 | created: 2025-05-24T01:24:19 | author: Solid_Woodpecker3635 | domain: v.redd.it
selftext: I'm developing an AI-powered interview preparation tool because I know how tough it can be to get good, specific feedback when practising for technical interviews. The idea is to use local Large Language Models (via Ollama) to: 1. Analyse your resume and extract key skills. 2. Generate dynamic interview questions bas...
url: https://v.redd.it/1y00f0j9tm2f1
permalink: /r/LocalLLaMA/comments/1ktzxni/im_building_an_ai_interview_prep_tool_to_get_real/
id: 1ktzxni | name: t3_1ktzxni | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://external-preview…590af62c296fc76f
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1y00f0j9tm2f1/DASHPlaylist.mpd?a=1750641875%2CY2EyNWI4NDlhYjQ0NTliNDkwZjAzODkzNDFhZmJjYjkzYjcxZjUwYmZiN2NmMmNjMDhmMDE0ZjE5Y2QyYWI2Mw%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/1y00f0j9tm2f1/DASH_1080.mp4?source=fallback', 'h...
preview: {'enabled': False, 'images': [{'id': 'cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv.png?width=108&crop=smart&format=pjpg&auto=webp&s=40b5a8f4c4e9994a7bd08755a70ac4c353aec...

title: Ollama finally acknowledged llama.cpp officially
score: 499 | ups: 499 | created: 2025-05-24T01:22:35 | author: simracerman | domain: self.LocalLLaMA
selftext: In the 0.7.1 release, they introduce the capabilities of their multimodal engine. At the end in the [acknowledgments section](https://imgur.com/a/zKMizcr) they thanked the GGML project. https://ollama.com/blog/multimodal-models
url: https://www.reddit.com/r/LocalLLaMA/comments/1ktzwgq/ollama_finally_acknowledged_llamacpp_officially/
permalink: /r/LocalLLaMA/comments/1ktzwgq/ollama_finally_acknowledged_llamacpp_officially/
id: 1ktzwgq | name: t3_1ktzwgq | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: {'enabled': False, 'images': [{'id': 'NjutXkTOw_QxRIOA_mt2FgoZ25vP0-xQ21PVJHjT8_g', 'resolutions': [{'height': 175, 'url': 'https://external-preview.redd.it/ipsNerqPrK-UMqsT56IH88AONqGCIPXw8JcLI4Nrf_Q.jpg?width=108&crop=smart&auto=webp&s=50daed24a74c7e1aa86bcde3396b47f4f450407a', 'width': 108}, {'height': 350, 'url': '...

title: Ollama Qwen2.5-VL 7B & OCR
score: 1 | ups: 1 | created: 2025-05-24T00:41:58 | author: PleasantCandidate785 | domain: self.LocalLLaMA
selftext: Started working with data extraction from scanned documents today using Open WebUI, Ollama and Qwen2.5-VL 7B. I had some shockingly good initial results, but when I tried to get the model to extract more data it started loosing detail that it had previously reported correctly. One issue was that the images I am deali...
url: https://www.reddit.com/r/LocalLLaMA/comments/1ktz4ss/ollama_qwen25vl_7b_ocr/
permalink: /r/LocalLLaMA/comments/1ktz4ss/ollama_qwen25vl_7b_ocr/
id: 1ktz4ss | name: t3_1ktz4ss | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: New best Local Model?
score: 0 | ups: 0 | created: 2025-05-24T00:11:16 | author: dRraMaticc | domain: i.redd.it
selftext: https://www.sarvam.ai/blogs/sarvam-m Matches or beats Gemma3 27b supposedly
url: https://i.redd.it/o0zqv39ogm2f1.png
permalink: /r/LocalLLaMA/comments/1ktyjkm/new_best_local_model/
id: 1ktyjkm | name: t3_1ktyjkm | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://a.thumbs.redditm…K8DU1EgKMX94.jpg | media: null
preview: {'enabled': True, 'images': [{'id': '7MhzmUFVkK4m-k5a3ZErVkjWNORjVKkTLfrG6825Hkc', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/o0zqv39ogm2f1.png?width=108&crop=smart&auto=webp&s=5f371f69fddcaa4ccf1d9b1cb0760a6f8f7c6f48', 'width': 108}, {'height': 252, 'url': 'https://preview.redd.it/o0zqv39ogm2f1.pn...

title: Want to know your reviews about this 14B model.
score: 1 | ups: 1 | created: 2025-05-24T00:07:30 | author: pinpann | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ktygug/want_to_know_your_reviews_about_this_14b_model/
permalink: /r/LocalLLaMA/comments/1ktygug/want_to_know_your_reviews_about_this_14b_model/
id: 1ktygug | name: t3_1ktygug | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null
title: Anyone else prefering non thinking models ?
score: 145 | ups: 145 | created: 2025-05-23T23:50:26 | author: StandardLovers | domain: self.LocalLLaMA
selftext: So far Ive experienced non CoT models to have more curiosity and asking follow up questions. Like gemma3 or qwen2.5 70b. Tell them about something and they ask follow up questions, i think CoT models ask them selves all the questions and end up very confident. I also understand the strength of CoT models for problem so...
url: https://www.reddit.com/r/LocalLLaMA/comments/1kty4mh/anyone_else_prefering_non_thinking_models/
permalink: /r/LocalLLaMA/comments/1kty4mh/anyone_else_prefering_non_thinking_models/
id: 1kty4mh | name: t3_1kty4mh | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Run llama3.3 70b (8 bit) on 40G vram - 10 tokens/sec ?
score: 1 | ups: 1 | created: 2025-05-23T23:25:16 | author: Adventurous_Disk8047 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ktxmif/run_llama33_70b_8_bit_on_40g_vram_10_tokenssec/
permalink: /r/LocalLLaMA/comments/1ktxmif/run_llama33_70b_8_bit_on_40g_vram_10_tokenssec/
id: 1ktxmif | name: t3_1ktxmif | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: run LLaMA 3.3 70B (8-bit) on 40G vram — how to hit 10+ tokens/sec?
score: 1 | ups: 1 | created: 2025-05-23T23:21:52 | author: Adventurous_Disk8047 | domain: self.LocalLLaMA
selftext: [removed]
url: https://www.reddit.com/r/LocalLLaMA/comments/1ktxk14/run_llama_33_70b_8bit_on_40g_vram_how_to_hit_10/
permalink: /r/LocalLLaMA/comments/1ktxk14/run_llama_33_70b_8bit_on_40g_vram_how_to_hit_10/
id: 1ktxk14 | name: t3_1ktxk14 | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: self | media: null
preview: null

title: Guys! I managed to build a 100% fully local voice AI with Ollama that can have full conversations, control all my smart devices AND now has both short term + long term memory. 🤘
score: 1,777 | ups: 1,777 | created: 2025-05-23T22:56:42 | author: RoyalCities | domain: /r/LocalLLaMA/comments/1ktx15j/guys_i_managed_to_build_a_100_fully_local_voice/
selftext: I found out recently that Amazon/Alexa is going to use ALL users vocal data with ZERO opt outs for their new Alexa+ service so I decided to build my own that is 1000x better and runs fully local. The stack uses Home Assistant directly tied into Ollama. The long and short term memory is a custom automation design that...
url: https://v.redd.it/iigum5tb3m2f1
permalink: /r/LocalLLaMA/comments/1ktx15j/guys_i_managed_to_build_a_100_fully_local_voice/
id: 1ktx15j | name: t3_1ktx15j | edited: 1970-01-01T00:00:00 | gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
thumbnail: https://external-preview…36b9ecf60dd06747
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/iigum5tb3m2f1/DASHPlaylist.mpd?a=1750762607%2CNDc4YmY4NTY4MTA3OGY0MjZiNGIzMmIzYzgxODQwYTIwY2ZhMjUyM2E3ZWFiN2RkMDVmNTI5ZTZlZDQyOTRkZQ%3D%3D&v=1&f=sd', 'duration': 124, 'fallback_url': 'https://v.redd.it/iigum5tb3m2f1/DASH_1080.mp4?source=fallback', '...
preview: {'enabled': False, 'images': [{'id': 'b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=d6de92e50c97ea5fa95649d6c083258792e8...
AM5 or TRX4 for local LLMs?
11
Hello all, I am just now dipping my toes into local LLMs and want to run LLaMa 70B locally. I had some questions regarding the hardware side of things before I start spending more money. My main concern is whether to go with the AM5 platform or TRX4 for local inferencing and minor fine-tuning on smaller models here and...
2025-05-23T21:58:28
https://www.reddit.com/r/LocalLLaMA/comments/1ktvs5j/am5_or_trx4_for_local_llms/
Ponce_DeLeon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktvs5j
false
null
t3_1ktvs5j
/r/LocalLLaMA/comments/1ktvs5j/am5_or_trx4_for_local_llms/
false
false
self
11
null
Building a new server, looking at using two AMD MI60 (32gb VRAM) GPU’s. Will it be sufficient/effective for my use case?
4
I'm putting together my new build, I already purchased a Darkrock Classico Max case (as I use my server for Plex and wanted a lot of space for drives). I'm currently landing on the following for the rest of the specs: CPU: I9-12900K RAM: 64GB DDR5 MB: MSI PRO Z790-P WIFI ATX LGA1700 Motherboard Storage: 2TB crucial ...
2025-05-23T21:48:07
https://www.reddit.com/r/LocalLLaMA/comments/1ktvjw4/building_a_new_server_looking_at_using_two_amd/
FantasyMaster85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktvjw4
false
null
t3_1ktvjw4
/r/LocalLLaMA/comments/1ktvjw4/building_a_new_server_looking_at_using_two_amd/
false
false
self
4
null
Hiring etiquette
1
[removed]
2025-05-23T21:17:00
https://www.reddit.com/r/LocalLLaMA/comments/1ktuub3/hiring_etiquette/
sunnysing_73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktuub3
false
null
t3_1ktuub3
/r/LocalLLaMA/comments/1ktuub3/hiring_etiquette/
false
false
self
1
null
Why does LM Studio have such small context ???
0
I ask like 2 questions for coding or 1 conversation of 3 msg's and context is 100% full, why can't we have epic length convos ??
2025-05-23T21:09:59
https://www.reddit.com/r/LocalLLaMA/comments/1ktuodr/why_does_lm_studio_have_such_small_context/
intimate_sniffer69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktuodr
false
null
t3_1ktuodr
/r/LocalLLaMA/comments/1ktuodr/why_does_lm_studio_have_such_small_context/
false
false
self
0
null
Best local coding model right now?
65
Hi! I was very active here about a year ago, but I've been using Claude a lot the past few months. I do like claude a lot, but it's not magic, and smaller models are actually quite a lot nicer in the sense that I have far, far more control. I have a 7900xtx, and I was eyeing gemma 27b for local coding support? A...
2025-05-23T20:57:09
https://www.reddit.com/r/LocalLLaMA/comments/1ktudaj/best_local_coding_model_right_now/
Combinatorilliance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktudaj
false
null
t3_1ktudaj
/r/LocalLLaMA/comments/1ktudaj/best_local_coding_model_right_now/
false
false
self
65
null
How do I get started? Hardware?
1
[removed]
2025-05-23T20:33:15
https://www.reddit.com/r/LocalLLaMA/comments/1ktttk8/how_do_i_get_started_hardware/
Grand-Departure3485
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktttk8
false
null
t3_1ktttk8
/r/LocalLLaMA/comments/1ktttk8/how_do_i_get_started_hardware/
false
false
self
1
null
What's the limits of vibe coding?
0
First, the link of my project: [https://github.com/charmandercha/TextGradGUI](https://github.com/charmandercha/TextGradGUI) Original repository: [https://github.com/zou-group/textgrad](https://github.com/zou-group/textgrad) Nature article about TextGrad: [https://www.nature.com/articles/s41586-025-08661-4](https://...
2025-05-23T20:15:34
https://www.reddit.com/r/LocalLLaMA/comments/1kttes0/whats_the_limits_of_vibe_coding/
charmander_cha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kttes0
false
null
t3_1kttes0
/r/LocalLLaMA/comments/1kttes0/whats_the_limits_of_vibe_coding/
false
false
self
0
{'enabled': False, 'images': [{'id': 'lRy20nCs27CBlbBYfH6rHMv64-unttvihTyh4gbBJ3s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M_RcSW5nZuO43ig207iOGZdj-pAfq2IcsO2O3iagBDs.jpg?width=108&crop=smart&auto=webp&s=c9254c012ebf9234c241f157a5de7a2eb192a9b6', 'width': 108}, {'height': 108, 'url': 'h...
Need Help! What are the Best and Most Stable Versions of Gemma-3, Qwen-3, QwQ-32B, GLM4, and Mistral Small?
1
[removed]
2025-05-23T20:13:08
https://www.reddit.com/r/LocalLLaMA/comments/1kttcqy/need_help_what_are_the_best_and_most_stable/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kttcqy
false
null
t3_1kttcqy
/r/LocalLLaMA/comments/1kttcqy/need_help_what_are_the_best_and_most_stable/
false
false
self
1
null
Google Veo 3 Computation Usage
9
Are there any assumptions about what google veo 3 may cost in computation? I just want to see if there is a chance of the model becoming locally available, or how the price may develop over time.
2025-05-23T20:02:24
https://www.reddit.com/r/LocalLLaMA/comments/1ktt3i8/google_veo_3_computation_usage/
Spiritual-Neat889
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktt3i8
false
null
t3_1ktt3i8
/r/LocalLLaMA/comments/1ktt3i8/google_veo_3_computation_usage/
false
false
self
9
null
A survey on AI-generated content.
1
[removed]
2025-05-23T20:02:08
https://www.reddit.com/r/LocalLLaMA/comments/1ktt3av/a_survey_on_aigenerated_content/
Goddamn_Lizard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktt3av
false
null
t3_1ktt3av
/r/LocalLLaMA/comments/1ktt3av/a_survey_on_aigenerated_content/
false
false
self
1
null
AMD Ryzen AI Max+ 395 vs M4 Max (?)
1
[removed]
2025-05-23T20:01:20
https://www.reddit.com/r/LocalLLaMA/comments/1ktt2kv/amd_ryzen_ai_max_395_vs_m4_max/
c7abe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktt2kv
false
null
t3_1ktt2kv
/r/LocalLLaMA/comments/1ktt2kv/amd_ryzen_ai_max_395_vs_m4_max/
false
false
self
1
null
LLama.cpp with smolVLM 500M very slow on windows
4
I recently downloaded llama.cpp on a mac M1 with 8gb ram, and with smolVLM 500M I get instant replies. I wanted to try on my windows machine with 32gb ram and an i7-13700H, but it's so slow, almost 2 minutes to get a response. Do you guys have any idea why? I tried GPU mode (4070) but it's still super slow; i tried many different bu...
2025-05-23T19:52:31
https://www.reddit.com/r/LocalLLaMA/comments/1ktsvaj/llamacpp_with_smolvlm_500m_very_slow_on_windows/
firyox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktsvaj
false
null
t3_1ktsvaj
/r/LocalLLaMA/comments/1ktsvaj/llamacpp_with_smolvlm_500m_very_slow_on_windows/
false
false
self
4
null
What models are you training right now and what compute are you using? (Parody of PCMR post)
1
2025-05-23T19:50:58
https://i.redd.it/p8khpzt64l2f1.png
Avelina9X
i.redd.it
1970-01-01T00:00:00
0
{}
1ktstzh
false
null
t3_1ktstzh
/r/LocalLLaMA/comments/1ktstzh/what_models_are_you_training_right_now_and_what/
false
false
https://b.thumbs.redditm…7ITxf3LRwIsM.jpg
1
{'enabled': True, 'images': [{'id': 'lXEPKihxhqD9mH7NAZaEAIwBB4wicZBOSH-MtoLIHPc', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/p8khpzt64l2f1.png?width=108&crop=smart&auto=webp&s=cbb1c62c86da82999bfbd50980fc3a184efe0532', 'width': 108}, {'height': 75, 'url': 'https://preview.redd.it/p8khpzt64l2f1.png?...
Qwen3-14B vs Phi-14B-Reasoning (+Plus) - Practical Benchmark
1
[removed]
2025-05-23T19:49:40
https://www.reddit.com/r/LocalLLaMA/comments/1ktssw0/qwen314b_vs_phi14breasoning_plus_practical/
qki_machine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktssw0
false
null
t3_1ktssw0
/r/LocalLLaMA/comments/1ktssw0/qwen314b_vs_phi14breasoning_plus_practical/
false
false
self
1
{'enabled': False, 'images': [{'id': 'eUfo1BVRooW7fNveoRZvhq_q_xoD7GX4HzFdm3a_BoU', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=108&crop=smart&auto=webp&s=b1b3c91f325e420cda1518193c5a310cc6393e64', 'width': 108}, {'height': 144, 'url': 'h...
Cosyvoice 2 vs Dia 1.6b - which one is better overall?
16
Did anyone get to test both tts models? If yes, which sounds more realistic from your POV? Both models are very close, but I find CosyVoice slightly ahead due to its zero-shot capabilities; however, one downside is that you may need to use specific models for different tasks (e.g., zero-shot, cross-lingual). [https:/...
2025-05-23T19:48:21
https://www.reddit.com/r/LocalLLaMA/comments/1ktsrqv/cosyvoice_2_vs_dia_16b_which_one_is_better_overall/
Xodnil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktsrqv
false
null
t3_1ktsrqv
/r/LocalLLaMA/comments/1ktsrqv/cosyvoice_2_vs_dia_16b_which_one_is_better_overall/
false
false
self
16
{'enabled': False, 'images': [{'id': '5tzSDS7Cu7WmpF2f03uv3UBNPUJ-K-LnJ5_5ie1ZNf8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=108&crop=smart&auto=webp&s=019cb7fa7296091ebede8514e483a64e95a1a184', 'width': 108}, {'height': 108, 'url': 'h...
Best Vibe Code tools (like Cursor) but are free and use your own local LLM?
146
I've seen Cursor and how it works, and it looks pretty cool, but I rather use my own local hosted LLMs and not pay a usage fee to a 3rd party company. Does anybody know of any good Vibe Coding tools, as good or better than Cursor, that run on your own local LLMs? Thanks!
2025-05-23T19:46:58
https://www.reddit.com/r/LocalLLaMA/comments/1ktsqit/best_vibe_code_tools_like_cursor_but_are_free_and/
StartupTim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktsqit
false
null
t3_1ktsqit
/r/LocalLLaMA/comments/1ktsqit/best_vibe_code_tools_like_cursor_but_are_free_and/
false
false
self
146
null
Upgraded from Ryzen 5 5600X to Ryzen 7 5700X3D, should I return it and get a Ryzen 7 5800X?
0
I have an RTX 4080 super (16gb) and I think qwen3-30b and 235b benefit from a faster CPU. As I've just upgraded to the Ryzen 7 5700X3D (3 GHZ), I wonder if I should return it and get the Ryzen 7 5800X (3.8 GHZ) instead (it's also about 30% cheaper)?
2025-05-23T19:46:15
https://www.reddit.com/r/LocalLLaMA/comments/1ktspy3/upgraded_from_ryzen_5_5600x_to_ryzen_7_5700x3d/
relmny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktspy3
false
null
t3_1ktspy3
/r/LocalLLaMA/comments/1ktspy3/upgraded_from_ryzen_5_5600x_to_ryzen_7_5700x3d/
false
false
self
0
null
LLM Judges Are Unreliable
13
2025-05-23T19:42:52
https://www.cip.org/blog/llm-judges-are-unreliable
IAmJoal
cip.org
1970-01-01T00:00:00
0
{}
1ktsn47
false
null
t3_1ktsn47
/r/LocalLLaMA/comments/1ktsn47/llm_judges_are_unreliable/
false
false
https://b.thumbs.redditm…DYxbtHB5hf8o.jpg
13
{'enabled': False, 'images': [{'id': '3z-maw8FO4zejvbJ3nADc4HJKO-ze8aL2QygE3TvCNk', 'resolutions': [{'height': 130, 'url': 'https://external-preview.redd.it/OWRcxhrGo0kYfJho1U-hyTxNFsBI00slZlSMEMSNpWo.jpg?width=108&crop=smart&auto=webp&s=a23ec551508874bcae48faccf783a80637639ce2', 'width': 108}, {'height': 261, 'url': '...
Trying to fine tune llama 3.2 3B on a custom data set for a random college to see how it goes ....but results are not as expected....new trained model can't seem to answer based on the new data.
1
[removed]
2025-05-23T19:32:03
https://www.reddit.com/r/LocalLLaMA/comments/1ktse6o/trying_to_fine_tune_llama_32_3b_on_a_custom_data/
Adorable-Device-2732
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktse6o
false
null
t3_1ktse6o
/r/LocalLLaMA/comments/1ktse6o/trying_to_fine_tune_llama_32_3b_on_a_custom_data/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': '...
Having trouble getting to 1-2req/s with vllm and Qwen3 30B-A3B
0
Hey everyone, I'm currently renting a single H100 GPU. The machine specs are: GPU: H100 SXM, GPU RAM: 80GB, CPU: Intel Xeon Platinum 8480. I run vllm with this setup behind nginx to monitor the HTTP connections: VLLM_DEBUG_LOG_API_SERVER_RESPONSE=TRUE nohup /home/ubuntu/.local/bin/vllm serve \ Qwen/...
2025-05-23T18:46:15
https://www.reddit.com/r/LocalLLaMA/comments/1ktrair/having_trouble_getting_to_12reqs_with_vllm_and/
bndrz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktrair
false
null
t3_1ktrair
/r/LocalLLaMA/comments/1ktrair/having_trouble_getting_to_12reqs_with_vllm_and/
false
false
self
0
null
"Sarvam-M, a 24B open-weights hybrid model built on top of Mistral Small" can't they just say they have fine tuned mistral small or it's kind of wrapper?
47
2025-05-23T18:25:47
https://www.sarvam.ai/blogs/sarvam-m
WriedGuy
sarvam.ai
1970-01-01T00:00:00
0
{}
1ktqsog
false
null
t3_1ktqsog
/r/LocalLLaMA/comments/1ktqsog/sarvamm_a_24b_openweights_hybrid_model_built_on/
false
false
https://b.thumbs.redditm…TNTaz70-2-xI.jpg
47
{'enabled': False, 'images': [{'id': 'GJj34UkCJhlet2aj5Tqvrf3zg71S1UOaEIAXAb_MDNE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WzIS2JjZo2kST66vrrt4qEsayLue07AZ01pMdBT5Wtc.jpg?width=108&crop=smart&auto=webp&s=abc6f04994a76bada9ad0e899c037e74f6ccc9ee', 'width': 108}, {'height': 113, 'url': 'h...
Tested Qwen3 all models on CPU (i5-10210U), RTX 3060 12GB, and RTX 3090 24GB
31
Qwen3 Model Testing Results (CPU + GPU) Model | Hardware | Load | Answer | Speed (t/s) \------------------|--------------------------------------------|--------------------|---------------------|------------ Qwen3-0.6B | Laptop (i5-10...
2025-05-23T18:12:05
https://www.reddit.com/r/LocalLLaMA/comments/1ktqgk0/tested_qwen3_all_models_on_cpu_i510210u_rtx_3060/
1BlueSpork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktqgk0
false
null
t3_1ktqgk0
/r/LocalLLaMA/comments/1ktqgk0/tested_qwen3_all_models_on_cpu_i510210u_rtx_3060/
false
false
self
31
{'enabled': False, 'images': [{'id': 'BnFEKSY4o57rP5nuVGxcnegc1wJJ3I7TvBcV8XhuEmk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ohd1ZF3MWwuIouVfDZ6jLtoVW-JZeSUwVWNpMvCmnXM.jpg?width=108&crop=smart&auto=webp&s=6a816275dfb13b62e8e2b2c9fe853be582055967', 'width': 108}, {'height': 162, 'url': 'h...
Understanding ternary quantization TQ2_0 and TQ1_0 in llama.cpp
1
[removed]
2025-05-23T18:01:38
https://www.reddit.com/r/LocalLLaMA/comments/1ktq7i6/understanding_ternary_quantization_tq2_0_and_tq1/
datashri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktq7i6
false
null
t3_1ktq7i6
/r/LocalLLaMA/comments/1ktq7i6/understanding_ternary_quantization_tq2_0_and_tq1/
false
false
self
1
null
Kanana 1.5 2.1B/8B, English/Korean bilingual by kakaocorp
7
2025-05-23T17:59:17
https://huggingface.co/collections/kakaocorp/kanana-15-682d75c83b5f51f4219a17fb
nananashi3
huggingface.co
1970-01-01T00:00:00
0
{}
1ktq54n
false
null
t3_1ktq54n
/r/LocalLLaMA/comments/1ktq54n/kanana_15_21b8b_englishkorean_bilingual_by/
false
false
https://a.thumbs.redditm…kIMVX4DXQB18.jpg
7
{'enabled': False, 'images': [{'id': 'NrT1Tg68IvhApW7FYuv-a8KpW1xoE2VvYWGtQbOwVWw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q9Aj2m2UPlli1ePF9mPrz1K16Bxc0hfngw0SCUO5UfI.jpg?width=108&crop=smart&auto=webp&s=3b7ee38a6d1322dc68dbf850a0031028e298c9ad', 'width': 108}, {'height': 116, 'url': 'h...
Anyone on Oahu want to let me borrow an RTX 6000 Pro to benchmark against this dual 5090 rig?
87
Sits on my office desk for running very large context prompts (50K words) with QwQ 32B. Gotta be offline because they have a lot of P.I.I. Had it in a Mechanic Master c34plus (25L) but CPU fans (Scythe Grand Tornado 3,000rpm) kept ramping up because two 5090s were blasting the radiator in a confined space, and could ...
2025-05-23T17:52:15
https://www.reddit.com/gallery/1ktpz29
Special-Wolverine
reddit.com
1970-01-01T00:00:00
0
{}
1ktpz29
false
null
t3_1ktpz29
/r/LocalLLaMA/comments/1ktpz29/anyone_on_oahu_want_to_let_me_borrow_an_rtx_6000/
false
false
https://b.thumbs.redditm…YXWlOm95gUJw.jpg
87
null
Comparing Small Language Models on RAG Tasks with SLM RAG Arena
1
[removed]
2025-05-23T17:46:40
https://www.reddit.com/r/LocalLLaMA/comments/1ktpu54/comparing_small_language_models_on_rag_tasks_with/
unseenmarscai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktpu54
false
null
t3_1ktpu54
/r/LocalLLaMA/comments/1ktpu54/comparing_small_language_models_on_rag_tasks_with/
false
false
https://b.thumbs.redditm…p7yESVL492nE.jpg
1
null
SLM RAG Arena: Compare and Find Good Sub-5B Models for RAG
1
[removed]
2025-05-23T17:36:17
https://i.redd.it/vrc33zzphk2f1.png
unseenmarscai
i.redd.it
1970-01-01T00:00:00
0
{}
1ktpl17
false
null
t3_1ktpl17
/r/LocalLLaMA/comments/1ktpl17/slm_rag_arena_compare_and_find_good_sub5b_models/
false
false
https://b.thumbs.redditm…bH3QVXc_L8_I.jpg
1
{'enabled': True, 'images': [{'id': 'NmBaB7GMmRd_4LCUy7g8mkM-EkV72xawHFBQJRLB4Eg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vrc33zzphk2f1.png?width=108&crop=smart&auto=webp&s=737ddcfd3d0fc814c67a712afbceaffd23903041', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vrc33zzphk2f1.png...
SLM RAG Arena - Compare and Find Good Sub-5B Models for RAG
1
[removed]
2025-05-23T17:31:27
https://www.reddit.com/r/LocalLLaMA/comments/1ktpgpe/slm_rag_arena_compare_and_find_good_sub5b_models/
No_Salamander1882
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktpgpe
false
null
t3_1ktpgpe
/r/LocalLLaMA/comments/1ktpgpe/slm_rag_arena_compare_and_find_good_sub5b_models/
false
false
https://b.thumbs.redditm…bdhCDZc-TuCc.jpg
1
{'enabled': False, 'images': [{'id': '3T2rZ5JEPyEbxb2lh4vzMqmNAiDyv7lVg3dWa-ileyc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=108&crop=smart&auto=webp&s=78bd57a9198a127549f20efee3faa66623e200d7', 'width': 108}, {'height': 116, 'url': 'h...
SLM RAG Arena - Compare and Find Good Sub-5B Models for RAG
1
[removed]
2025-05-23T17:28:13
https://i.redd.it/bb0y3vtegk2f1.png
unseenmarscai
i.redd.it
1970-01-01T00:00:00
0
{}
1ktpduj
false
null
t3_1ktpduj
/r/LocalLLaMA/comments/1ktpduj/slm_rag_arena_compare_and_find_good_sub5b_models/
false
false
https://b.thumbs.redditm…SVumCmn-ktMU.jpg
1
{'enabled': True, 'images': [{'id': 'fcHPsHhRwO8U3PrPTckzvN2tQXdzRQjpdecuIfjDvfI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bb0y3vtegk2f1.png?width=108&crop=smart&auto=webp&s=a0aa7222432b91615c5184de314d7b62b49a7335', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/bb0y3vtegk2f1.png...
SLM RAG Arena - Compare and Find The Best Sub-5B Models for RAG
1
[removed]
2025-05-23T17:24:07
https://i.redd.it/o8lcrzwefk2f1.png
unseenmarscai
i.redd.it
1970-01-01T00:00:00
0
{}
1ktpa1p
false
null
t3_1ktpa1p
/r/LocalLLaMA/comments/1ktpa1p/slm_rag_arena_compare_and_find_the_best_sub5b/
false
false
https://a.thumbs.redditm…7z0ALrsO9wE4.jpg
1
{'enabled': True, 'images': [{'id': 'Wiy8bgTQIsEOBtN3IGjgdD5PhQH5CdrnsaKRkpLuy2c', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/o8lcrzwefk2f1.png?width=108&crop=smart&auto=webp&s=1dec5ee96a287ff88885638f5342b5e566216f0d', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/o8lcrzwefk2f1.png...
SLM RAG Arena - Compare and Find The Best Sub-5B Models for RAG
1
[removed]
2025-05-23T17:16:28
https://www.reddit.com/r/LocalLLaMA/comments/1ktp2x5/slm_rag_arena_compare_and_find_the_best_sub5b/
unseenmarscai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktp2x5
false
null
t3_1ktp2x5
/r/LocalLLaMA/comments/1ktp2x5/slm_rag_arena_compare_and_find_the_best_sub5b/
false
false
https://a.thumbs.redditm…hpSo2CnTkjs0.jpg
1
{'enabled': False, 'images': [{'id': 'wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=108&crop=smart&auto=webp&s=ed166e32fba7f49963d6e6f4ebb00f2f43107edd', 'width': 108}, {'height': 144, 'url': 'h...
SLM RAG Arena - Compare and Find The Best Sub-5B Models for RAG
1
[removed]
2025-05-23T17:10:53
https://i.redd.it/0m03gj5edk2f1.png
Dazzling-Cap744
i.redd.it
1970-01-01T00:00:00
0
{}
1ktoxs3
false
null
t3_1ktoxs3
/r/LocalLLaMA/comments/1ktoxs3/slm_rag_arena_compare_and_find_the_best_sub5b/
false
false
https://b.thumbs.redditm…lUvTAa0K4PKU.jpg
1
{'enabled': True, 'images': [{'id': '-tBDynN3aOYvaGDf87ECfSf_gSqwScy_y6cLEPEQwI4', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/0m03gj5edk2f1.png?width=108&crop=smart&auto=webp&s=2c2c84b53b44a68c2cdca664c9343ebebb59efc2', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/0m03gj5edk2f1.png...
SLM RAG Arena - Compare and Find The Best Sub-5B Models for RAG
1
[removed]
2025-05-23T16:59:48
https://i.redd.it/ikvqvvaaak2f1.png
unseenmarscai
i.redd.it
1970-01-01T00:00:00
0
{}
1ktonl6
false
null
t3_1ktonl6
/r/LocalLLaMA/comments/1ktonl6/slm_rag_arena_compare_and_find_the_best_sub5b/
false
false
https://a.thumbs.redditm…7GXjPuXpz8V0.jpg
1
{'enabled': True, 'images': [{'id': '55U7Y6g4TiBPTPu9wNpkYhC4p_9LUOfeCIvByvx4dvk', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ikvqvvaaak2f1.png?width=108&crop=smart&auto=webp&s=4c200607742f9f8a42ad682ce0c5210eeacd57a4', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ikvqvvaaak2f1.png...
So what are some cool projects you guys are running on you local llms?
58
Trying to find good ideas to implement on my setup, or maybe get some inspiration to do something on my own
2025-05-23T16:55:33
https://www.reddit.com/r/LocalLLaMA/comments/1ktojxe/so_what_are_some_cool_projects_you_guys_are/
itzikhan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktojxe
false
null
t3_1ktojxe
/r/LocalLLaMA/comments/1ktojxe/so_what_are_some_cool_projects_you_guys_are/
false
false
self
58
null
LLMI system I (not my money) got for our group
180
2025-05-23T16:52:23
https://i.redd.it/lgjexuw8ak2f1.jpeg
SandboChang
i.redd.it
1970-01-01T00:00:00
0
{}
1ktoh78
false
null
t3_1ktoh78
/r/LocalLLaMA/comments/1ktoh78/llmi_system_i_not_my_money_got_for_our_group/
false
false
https://b.thumbs.redditm…D_R6Ax4gyf7o.jpg
180
{'enabled': True, 'images': [{'id': 'Y9oM7DtsJUSL1S_CXDfvsrN56xHFNxKI0_W5nDUOHOY', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/lgjexuw8ak2f1.jpeg?width=108&crop=smart&auto=webp&s=4e7502705e0b589d6e33a689210490d1546b1048', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/lgjexuw8ak2f1.jp...
LLMI system I (not my money) got for the group
1
[removed]
2025-05-23T16:50:04
https://i.redd.it/xlu1hsfj3k2f1.jpeg
SandboChang
i.redd.it
1970-01-01T00:00:00
0
{}
1ktof5c
false
null
t3_1ktof5c
/r/LocalLLaMA/comments/1ktof5c/llmi_system_i_not_my_money_got_for_the_group/
false
false
https://b.thumbs.redditm…Iv6mZJ0ye8pg.jpg
1
{'enabled': True, 'images': [{'id': 'zFAVAcLpeVKajxvMYo0NM4QH2u1z5gaf3i4_RnKaJcE', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/xlu1hsfj3k2f1.jpeg?width=108&crop=smart&auto=webp&s=7cd906e4c120d560993394c54387518fba3c89ee', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/xlu1hsfj3k2f1.jp...
Spatial Reasoning is Hot 🔥🔥🔥🔥🔥🔥
21
Notice the recent uptick in google search interest around "spatial reasoning." And now we have a fantastic new benchmark to better measure these capabilities. **SpatialScore:** [https://haoningwu3639.github.io/SpatialScore/](https://haoningwu3639.github.io/SpatialScore/) The **SpatialScore** benchmarks offer a co...
2025-05-23T15:26:50
https://www.reddit.com/gallery/1ktmdpo
remyxai
reddit.com
1970-01-01T00:00:00
0
{}
1ktmdpo
false
null
t3_1ktmdpo
/r/LocalLLaMA/comments/1ktmdpo/spatial_reasoning_is_hot/
false
false
https://b.thumbs.redditm…tup0fAClRAOg.jpg
21
null
What model should I choose?
5
I study in the medical field and I cannot stomach hours of searching through books anymore. So I would like to run an AI that will take books (they will be both in Russian and English) as context and spew out answers to questions while also providing references, so that I can check, memorise and take notes. I don't mind the waiting of 3...
2025-05-23T15:13:45
https://www.reddit.com/r/LocalLLaMA/comments/1ktm248/what_model_should_i_choose/
Abject_Personality53
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktm248
false
null
t3_1ktm248
/r/LocalLLaMA/comments/1ktm248/what_model_should_i_choose/
false
false
self
5
null
Sarvam-M a 24B open-weights hybrid reasoning model
6
Model Link: [https://huggingface.co/sarvamai/sarvam-m](https://huggingface.co/sarvamai/sarvam-m) Model Info: It's a two-stage post-trained version of Mistral 24B using SFT and GRPO. It's a hybrid reasoning model, which means that both reasoning and non-reasoning modes are fitted in the same model. You can choose when to reas...
2025-05-23T15:13:11
https://i.redd.it/8gk7kugnsj2f1.png
RealKingNish
i.redd.it
1970-01-01T00:00:00
0
{}
1ktm1n7
false
null
t3_1ktm1n7
/r/LocalLLaMA/comments/1ktm1n7/sarvamm_a_24b_openweights_hybrid_reasoning_model/
false
false
https://b.thumbs.redditm…zkLqrHDtvdlA.jpg
6
{'enabled': True, 'images': [{'id': 'DmMsBRPNbi849LghsoO44o2QAnMJPTmgh7bjxAlSNrE', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/8gk7kugnsj2f1.png?width=108&crop=smart&auto=webp&s=e758ae8dd0759d6b6eaaa31b4cdaf08d467f7ba4', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/8gk7kugnsj2f1.pn...
AI becoming too sycophantic? Noticed Gemini 2.5 praising me instead of solving the issue
98
Hello there, I get the feeling that the trend of making AI more inclined towards flattery and overly focused on a user's feelings is somehow degrading its ability to actually solve problems. Is it just me? For instance, I've recently noticed that Gemini 2.5, instead of giving a direct solution, will spend time praising...
2025-05-23T15:11:54
https://www.reddit.com/r/LocalLLaMA/comments/1ktm0hd/ai_becoming_too_sycophantic_noticed_gemini_25/
Rrraptr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktm0hd
false
null
t3_1ktm0hd
/r/LocalLLaMA/comments/1ktm0hd/ai_becoming_too_sycophantic_noticed_gemini_25/
false
false
self
98
null
96GB VRAM! What should run first?
1,462
I had to make a fake company domain name to order this from a supplier. They wouldn’t even give me a quote with my Gmail address. I got the card though!
2025-05-23T15:10:20
https://i.redd.it/co0zhh06sj2f1.jpeg
Mother_Occasion_8076
i.redd.it
1970-01-01T00:00:00
0
{}
1ktlz3w
false
null
t3_1ktlz3w
/r/LocalLLaMA/comments/1ktlz3w/96gb_vram_what_should_run_first/
false
false
https://b.thumbs.redditm…L7CEw3G-_KrM.jpg
1,462
{'enabled': True, 'images': [{'id': 'uU6dM4WijM_cYbJ_ExiJXAu9rQhwKqr0Nz3u14SWZ3E', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/co0zhh06sj2f1.jpeg?width=108&crop=smart&auto=webp&s=a35164fe77c202ec5b589dfe668feb1e80c255c0', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/co0zhh06sj2f1.jp...
Strategies for aligning embedded text in PDF into a logical order
2
So I have some PDFs which have text information embedded and these are essentially bank statements with items in rows with amounts. However, if you try to select them in a PDF viewer, the text is everywhere as the embedded text is not in any sane order. This is massively frustrating since the accurate embedded text is...
2025-05-23T14:46:56
https://www.reddit.com/r/LocalLLaMA/comments/1ktleg0/strategies_for_aligning_embedded_text_in_pdf_into/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktleg0
false
null
t3_1ktleg0
/r/LocalLLaMA/comments/1ktleg0/strategies_for_aligning_embedded_text_in_pdf_into/
false
false
self
2
null
What model should I choose?
1
[removed]
2025-05-23T14:14:02
https://www.reddit.com/r/LocalLLaMA/comments/1ktkmqh/what_model_should_i_choose/
Abject_Personality53
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktkmqh
false
null
t3_1ktkmqh
/r/LocalLLaMA/comments/1ktkmqh/what_model_should_i_choose/
false
false
self
1
null
Unmute by Kyutai: Make LLMs listen and speak
188
Seems nicely polished and apparently works with any LLM. Open-source in the coming weeks. Demo uses Gemma 3 12B as base LLM (demo link in the blog post, reddit seems to auto-delete my post if I include it here). If any Kyutai dev happens to lurk here, would love to hear about the memory requirements of the TTS & STT ...
2025-05-23T14:12:46
https://kyutai.org/2025/05/22/unmute.html
rerri
kyutai.org
1970-01-01T00:00:00
0
{}
1ktklo5
false
null
t3_1ktklo5
/r/LocalLLaMA/comments/1ktklo5/unmute_by_kyutai_make_llms_listen_and_speak/
false
false
default
188
null
Claude 4 (Sonnet) isn't great for document understanding tasks: some surprising results
112
Finished benchmarking Claude 4 (Sonnet) across a range of document understanding tasks, and the results are… not that good. It's currently **ranked 7th overall** on the leaderboard. Key takeaways: * Weak performance in OCR – Claude 4 lags behind even smaller models like GPT-4.1-nano and InternVL3-38B-Instruct. * Rota...
2025-05-23T14:08:12
https://www.reddit.com/r/LocalLLaMA/comments/1ktkhp8/claude_4_sonnet_isnt_great_for_document/
SouvikMandal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktkhp8
false
null
t3_1ktkhp8
/r/LocalLLaMA/comments/1ktkhp8/claude_4_sonnet_isnt_great_for_document/
false
false
https://b.thumbs.redditm…ut3-YAYKWjzQ.jpg
112
null
It never ends with these people, no matter how far you go
0
2025-05-23T14:08:11
https://i.redd.it/3z82k151hj2f1.png
baobabKoodaa
i.redd.it
1970-01-01T00:00:00
0
{}
1ktkhof
false
null
t3_1ktkhof
/r/LocalLLaMA/comments/1ktkhof/it_never_ends_with_these_people_no_matter_how_far/
false
false
https://b.thumbs.redditm…eVwFLwBl3jJw.jpg
0
{'enabled': True, 'images': [{'id': 'Fs6wKucI7XQXvgSK7-pb0m7HMZvAcyIEwKkAqmqfGQ0', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/3z82k151hj2f1.png?width=108&crop=smart&auto=webp&s=73022bcd93e86066cf43a993fb8cbdc002a73589', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/3z82k151hj2f1.pn...
Unmute by Kyutai: Make LLMs listen and speak
1
[removed]
2025-05-23T14:06:54
https://kyutai.org/2025/05/22/unmute.html
rerri
kyutai.org
1970-01-01T00:00:00
0
{}
1ktkgl1
false
null
t3_1ktkgl1
/r/LocalLLaMA/comments/1ktkgl1/unmute_by_kyutai_make_llms_listen_and_speak/
false
false
default
1
null
Question for RAG LLMs and Qwen3 benchmark
1
[removed]
2025-05-23T13:57:41
https://www.reddit.com/r/LocalLLaMA/comments/1ktk8lh/question_for_rag_llms_and_qwen3_benchmark/
SK33LA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktk8lh
false
null
t3_1ktk8lh
/r/LocalLLaMA/comments/1ktk8lh/question_for_rag_llms_and_qwen3_benchmark/
false
false
self
1
null