Schema (Hugging Face dataset-viewer column summary: name, dtype, observed min/max):

| column | dtype | min | max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | n/a | n/a |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | n/a | n/a |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | n/a | n/a |
| stickied | bool (2 classes) | n/a | n/a |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
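The rows below repeat this schema once per post. As a minimal sketch (plain Python, no Reddit API involved), here is the first row rebuilt as a dict, illustrating two invariants visible throughout the dump: `name` is the 7-character `id` with Reddit's `t3_` fullname prefix for link posts, and `permalink` embeds the same id. Values are copied verbatim from the first record.

```python
# The first data row below, rebuilt as a plain dict (subset of columns).
# This is an illustration of the schema, not a live API response.
post = {
    "title": "Hunyuan-A13B released",
    "score": 541,
    "author": "kristaller486",
    "domain": "huggingface.co",
    "id": "1llndut",      # 7-character post id (schema: length 7, fixed)
    "name": "t3_1llndut",  # "fullname": t3_ (link kind) plus the id
    "permalink": "/r/LocalLLaMA/comments/1llndut/hunyuana13b_released/",
    "ups": 541,
}

# In this dump, `name` is always the id with the t3_ prefix ...
assert post["name"] == "t3_" + post["id"]
# ... and the permalink embeds the same id.
assert post["id"] in post["permalink"]
print("ok")
```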

title: Hunyuan-A13B released
score: 541
selftext: From HF repo: >Model Introduction >With the rapid advancement of artificial intelligence technology, large language models (LLMs) have achieved remarkable progress in natural language processing, computer vision, and scientific tasks. However, as model scales continue to expand, optimizing resource consumption while ...
created: 2025-06-27T06:59:21
url: https://huggingface.co/tencent/Hunyuan-A13B-Instruct
author: kristaller486
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llndut
locked: false
media: null
name: t3_1llndut
permalink: /r/LocalLLaMA/comments/1llndut/hunyuana13b_released/
spoiler: false
stickied: false
thumbnail: https://external-preview…e81c3d25e624896c
ups: 541
preview: {'enabled': False, 'images': [{'id': 'B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=108&crop=smart&auto=webp&s=07fddabe91e442028f9a3c3afd189223a7d91fce', 'width': 108}, {'height': 116, 'url': 'h...

title: Gemma 3n Multimodal Input: Text, Audio, Image, and Video?
score: 11
selftext: Regardless of the API, what is the “most multimodal” Gemma2n can be made to operate? The docs say Gemma 3n input supports: 1. text + audio 2. text+ image The release mentions “video”, can it input: 3. True video (t+v+a) 4. Text + video (or imgseq) + audio 5. Running 1+2 and sharing some weights Or another combo? I...
created: 2025-06-27T06:45:49
url: https://ai.google.dev/gemma/docs/core/huggingface_inference#audio
author: doomdayx
domain: ai.google.dev
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lln6ar
locked: false
media: null
name: t3_1lln6ar
permalink: /r/LocalLLaMA/comments/1lln6ar/gemma_3n_multimodal_input_text_audio_image_and/
spoiler: false
stickied: false
thumbnail: https://external-preview…4a6f94955cdab7ea
ups: 11
preview: {'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'h...

title: Reverse Engineering Gemma 3n
score: 61
created: 2025-06-27T06:45:04
url: https://github.com/antimatter15/reverse-engineering-gemma-3n
author: AppearanceHeavy6724
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lln5uj
locked: false
media: null
name: t3_1lln5uj
permalink: /r/LocalLLaMA/comments/1lln5uj/reverse_engineering_gemma_3n/
spoiler: false
stickied: false
thumbnail: default
ups: 61
preview: {'enabled': False, 'images': [{'id': 'VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VWUjHfeZBfEe00CQ4OXN4N4xfnF0YI65AE8Jt2eK1GQ.png?width=108&crop=smart&auto=webp&s=89fbb0342ed3a531d420566c59fa6176e8cf82f3', 'width': 108}, {'height': 108, 'url': 'h...

title: What is the best under-12B local model for text polishing, proofreading, and grammar checking?
score: 0
selftext: Hi, I'm looking for some suggestions for local LLMs. I'm dealing with some internal documents of the organization I work with, and I want to improve its quality. Since the documents shouldn't be shared externally, I have to use local models. I've searched the internet and it seems there are some models performing rel...
created: 2025-06-27T06:23:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1llmu12/what_is_the_best_under12b_local_model_for_text/
author: pitchblackfriday
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llmu12
locked: false
media: null
name: t3_1llmu12
permalink: /r/LocalLLaMA/comments/1llmu12/what_is_the_best_under12b_local_model_for_text/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: FYI to everyone: RTX 3090 prices crashed and are back to baseline. You can finally get $600something 3090s again in the USA.
score: 198
selftext: If you've been priced out by the spike to $1000+ recently for the past ~3 months, the prices finally dropped to baseline recently. You can get a $650-750 Nvidia 3090 fairly easily now, instead of being nearly impossible. Future pricing is unpredictable- if we follow expected deprecation trends, the 3090 should be a...
created: 2025-06-27T06:20:23
url: https://www.reddit.com/r/LocalLLaMA/comments/1llms46/fyi_to_everyone_rtx_3090_prices_crashed_and_are/
author: DepthHour1669
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llms46
locked: false
media: null
name: t3_1llms46
permalink: /r/LocalLLaMA/comments/1llms46/fyi_to_everyone_rtx_3090_prices_crashed_and_are/
spoiler: false
stickied: false
thumbnail: self
ups: 198
preview: null

title: General opinions on Gemma 3n Speech-to-Text (STT)?
score: 12
selftext: Hi everyone, Gemma 3n's release just happened, and to some of us a good STT model is something we have been longing for a long time. It will take even longer until we can dictate into LMstudio or similar, but I wanted to create this post to discuss your findings with regards to Gemma 3n's STT abilities. What are your...
created: 2025-06-27T06:02:01
url: https://www.reddit.com/r/LocalLLaMA/comments/1llmhof/general_opinions_on_gemma_3n_speechtotext_stt/
author: Karim_acing_it
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llmhof
locked: false
media: null
name: t3_1llmhof
permalink: /r/LocalLLaMA/comments/1llmhof/general_opinions_on_gemma_3n_speechtotext_stt/
spoiler: false
stickied: false
thumbnail: self
ups: 12
preview: null

title: New LLM looking for input on license
score: 0
selftext: Working on my llm. How is this for a license what should I change? # EchoChAI Non-Commercial License v1.1 **Copyright © Echo Chai LTD, 2025** --- ## 1. Definitions **“Model”** refers to the artificial intelligence model named **EchoChAI**, including its architecture, weights, training data (where applicable), sou...
created: 2025-06-27T05:52:34
url: https://www.reddit.com/r/LocalLLaMA/comments/1llmc8b/new_llm_looking_for_input_on_license/
author: nntb
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llmc8b
locked: false
media: null
name: t3_1llmc8b
permalink: /r/LocalLLaMA/comments/1llmc8b/new_llm_looking_for_input_on_license/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Best model for HTML?
score: 3
selftext: I've been using ChatGPT which has been great but I'm on the free version which runs out of tokens quickly. I have a 5090, which model is the best for coding websites? I tried Qwen 3 32B but it's not good.
created: 2025-06-27T05:51:09
url: https://www.reddit.com/r/LocalLLaMA/comments/1llmbg3/best_model_for_html/
author: Nomski88
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llmbg3
locked: false
media: null
name: t3_1llmbg3
permalink: /r/LocalLLaMA/comments/1llmbg3/best_model_for_html/
spoiler: false
stickied: false
thumbnail: self
ups: 3
preview: null

title: Update on memX: a shared memory for LLM agents
score: 17
selftext: A few days ago I shared a project I was working on: [https://www.reddit.com/r/LocalLLaMA/comments/1lehbra/built\_memx\_a\_shared\_memory\_backend\_for\_llm\_agents/](https://www.reddit.com/r/LocalLLaMA/comments/1lehbra/built_memx_a_shared_memory_backend_for_llm_agents/) I have made significant progress and now, you gu...
created: 2025-06-27T05:27:43
url: https://www.reddit.com/r/LocalLLaMA/comments/1lllxey/update_on_memx_a_shared_memory_for_llm_agents/
author: Temporary-Tap-7323
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lllxey
locked: false
media: null
name: t3_1lllxey
permalink: /r/LocalLLaMA/comments/1lllxey/update_on_memx_a_shared_memory_for_llm_agents/
spoiler: false
stickied: false
thumbnail: self
ups: 17
preview: null

title: The performance of NetEase's new Open-Source mathematical model Confucius3-Math
score: 34
selftext: https://arxiv.org/abs/2506.18330
created: 2025-06-27T05:21:56
url: https://www.reddit.com/gallery/1llltv5
author: Fun-Doctor6855
domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llltv5
locked: false
media: null
name: t3_1llltv5
permalink: /r/LocalLLaMA/comments/1llltv5/the_performance_of_neteases_new_opensource/
spoiler: false
stickied: false
thumbnail: https://external-preview…e5a28c4b99bca027
ups: 34
preview: {'enabled': True, 'images': [{'id': 'dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag', 'resolutions': [{'height': 48, 'url': 'https://external-preview.redd.it/dEIhtoYICYZI8SaYhg6vcNm2oKuH_uj_36i_H0fDXag.png?width=108&crop=smart&auto=webp&s=9e0f5dff4c69a4255e20a711f400abcfb79a89c4', 'width': 108}, {'height': 96, 'url': 'htt...

title: China's NetEase Releases Open- Source Mathematical Model: Confucius3-Math
score: 28
selftext: Official Demon:https://confucius.youdao.com/ GitHub:https://github.com/netease-youdao/Confucius3-Math Huggingface:https://huggingface.co/netease-youdao/Confucius3-Math
created: 2025-06-27T05:18:47
url: https://github.com/netease-youdao/Confucius3-Math
author: Fun-Doctor6855
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lllry7
locked: false
media: null
name: t3_1lllry7
permalink: /r/LocalLLaMA/comments/1lllry7/chinas_netease_releases_open_source_mathematical/
spoiler: false
stickied: false
thumbnail: https://external-preview…614918a1443738b6
ups: 28
preview: {'enabled': False, 'images': [{'id': 'aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aBHy0lrTBGIAmO4wnEIfas529xQDjo-nRzz7hjQEXBI.png?width=108&crop=smart&auto=webp&s=9445516cb87f032059fb2e8430a8cfb5cd59b48e', 'width': 108}, {'height': 108, 'url': 'h...

title: I built a document workflow system using VLMs: processes complex docs end-to-end (runs locally!!)
score: 7
selftext: Hey r/LocalLLaMA We're building Morphik: a multimodal search layer for AI applications that works super well with complex documents. (runs locally :)) Our users kept using our search API in creative ways to build document workflows and we realized they needed proper workflow automation, not just search queries. So w...
created: 2025-06-27T05:15:42
url: https://www.reddit.com/r/LocalLLaMA/comments/1lllpzt/i_built_a_document_workflow_system_using_vlms/
author: yes-no-maybe_idk
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lllpzt
locked: false
media: null
name: t3_1lllpzt
permalink: /r/LocalLLaMA/comments/1lllpzt/i_built_a_document_workflow_system_using_vlms/
spoiler: false
stickied: false
thumbnail: self
ups: 7
preview: null

title: Local coding AI agent?
score: 3
selftext: Hi, I'm looking for a decent coding agent that can run with local models and is open-source. I've not found anything yet. I've mostly have been using Tabby, which is alright, but I recently learned that the coding agent they're working on does not seem to have the ability to use a fully local stack.
created: 2025-06-27T05:01:01
url: https://www.reddit.com/r/LocalLLaMA/comments/1lllgy8/local_coding_ai_agent/
author: spaceman_
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lllgy8
locked: false
media: null
name: t3_1lllgy8
permalink: /r/LocalLLaMA/comments/1lllgy8/local_coding_ai_agent/
spoiler: false
stickied: false
thumbnail: self
ups: 3
preview: null

title: Question about agent mode like GitHub copilot.
score: 2
selftext: Hello, I’m new to this whole AI coding thing and I was wondering if there’s a way to run some model locally that would allow something like github copilot’s agent mode?
created: 2025-06-27T04:29:14
url: https://www.reddit.com/r/LocalLLaMA/comments/1llkwvn/question_about_agent_mode_like_github_copilot/
author: Straight_Caramel7725
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llkwvn
locked: false
media: null
name: t3_1llkwvn
permalink: /r/LocalLLaMA/comments/1llkwvn/question_about_agent_mode_like_github_copilot/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null

title: LLAMA Models, Perplexity, Claude, Deepseek - welcome to my ai party!
score: 0
selftext: Today I added some AI friends to my Custom Framework that has emerged from 15,000 directed conversations from a non-coder. Welcome to the chaos! 🥳🥳🥳🥳🥳🎉
created: 2025-06-27T03:59:08
url: https://www.youtube.com/watch?v=4RCHW84Oo-4
author: Silly_Classic1005
domain: youtube.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llkcwp
locked: false
media: {'oembed': {'author_name': 'James OKelly', 'author_url': 'https://www.youtube.com/@jjkmusicbot', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/4RCHW84Oo-4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco...
name: t3_1llkcwp
permalink: /r/LocalLLaMA/comments/1llkcwp/llama_models_perplexity_claude_deepseek_welcome/
spoiler: false
stickied: false
thumbnail: default
ups: 0
preview: null

title: POLL Do you like the subreddit twitter?
score: 8
selftext: Thought it’d be good to get a sample from you guys because i’m fairly conflicted on it. [View Poll](https://www.reddit.com/poll/1lljn6h)
created: 2025-06-27T03:19:53
url: https://www.reddit.com/r/LocalLLaMA/comments/1lljn6h/poll_do_you_like_the_subreddit_twitter/
author: Capable-Ad-7494
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lljn6h
locked: false
media: null
name: t3_1lljn6h
permalink: /r/LocalLLaMA/comments/1lljn6h/poll_do_you_like_the_subreddit_twitter/
spoiler: false
stickied: false
thumbnail: self
ups: 8
preview: null

title: What's this star all over the feed for LocalLLaMA?
score: 15
selftext: How's this Reddit associated with Twitter? If we must have it, isn't hugging face more appropriate? I vote for [https://huggingface.co/models](https://huggingface.co/models) page. Twitter has nothing to do with local LLMs (or LLMs at all). For now, I created this block rule for uBlock origin to hide it: ||emoji.r...
created: 2025-06-27T03:05:34
url: https://www.reddit.com/r/LocalLLaMA/comments/1lljdk8/whats_this_star_all_over_the_feed_for_localllama/
author: crodjer
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lljdk8
locked: false
media: null
name: t3_1lljdk8
permalink: /r/LocalLLaMA/comments/1lljdk8/whats_this_star_all_over_the_feed_for_localllama/
spoiler: false
stickied: false
thumbnail: self
ups: 15
preview: null

title: How to train custom arch or custom flow for LLMs
score: 3
selftext: I'm fairly new to the LLM world and have been exploring several repos around fine-tuning and training. However, I'm at a point where I want to do more than just tweak existing models, like 1. Train my own custom architecture (not just finetune a pre-existing one), 2. Use custom loss functions that require additional ...
created: 2025-06-27T03:00:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1lljabk/how_to_train_custom_arch_or_custom_flow_for_llms/
author: commander-trex
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lljabk
locked: false
media: null
name: t3_1lljabk
permalink: /r/LocalLLaMA/comments/1lljabk/how_to_train_custom_arch_or_custom_flow_for_llms/
spoiler: false
stickied: false
thumbnail: self
ups: 3
preview: null

title: Model to analyze images
score: 2
selftext: Is there a model that can analyze images like chat gpt and give a commentary on what the image is?
created: 2025-06-27T01:48:51
url: https://www.reddit.com/r/LocalLLaMA/comments/1llhvvv/model_to_analyze_images/
author: Technical_Whole_947
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llhvvv
locked: false
media: null
name: t3_1llhvvv
permalink: /r/LocalLLaMA/comments/1llhvvv/model_to_analyze_images/
spoiler: false
stickied: false
thumbnail: nsfw
ups: 2
preview: null

title: I'm using a local Llama model for my game's dialogue system!
score: 685
selftext: I'm blown away by how fast and intelligent Llama 3.2 is!
created: 2025-06-27T01:23:40
url: https://v.redd.it/cgoobkv5gd9f1
author: LandoRingel
domain: /r/LocalLLaMA/comments/1llhdoq/im_using_a_local_llama_model_for_my_games/
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llhdoq
locked: false
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cgoobkv5gd9f1/DASHPlaylist.mpd?a=1753709027%2CZmM1OGFlMzM2NDMwYTVlZmQ1NDRlY2EyYmUzNDU2ZDlmNGM0NGM1NjYzMzlhYmY1NTA2ZDNkNzVlZjE0MGRmYw%3D%3D&v=1&f=sd', 'duration': 93, 'fallback_url': 'https://v.redd.it/cgoobkv5gd9f1/DASH_1080.mp4?source=fallback', 'h...
name: t3_1llhdoq
permalink: /r/LocalLLaMA/comments/1llhdoq/im_using_a_local_llama_model_for_my_games/
spoiler: false
stickied: false
thumbnail: https://external-preview…70db2e88d05776da
ups: 685
preview: {'enabled': False, 'images': [{'id': 'c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c2JvZG9ndjVnZDlmMe7CY4SqtJeZEukasJn79Adjh2cJgmt44HDkzVTcUucN.png?width=108&crop=smart&format=pjpg&auto=webp&s=f634844c23c2333237b70ed0a10d6b6b518a2...

title: Should LocalLLaMA move to fediverse?
score: 0
selftext: I'm not a fan of centralized platforms, and now with the latest developments and the apparent move towards enshittification of this subreddit and the new, suspicious moderator, I honestly see now as more than the right time to save the essence of our community. I don't want anything to do with x/twitter or discord or b...
created: 2025-06-27T01:02:51
url: https://www.reddit.com/r/LocalLLaMA/comments/1llgz1e/should_localllama_move_to_fediverse/
author: Evening_Ad6637
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llgz1e
locked: false
media: null
name: t3_1llgz1e
permalink: /r/LocalLLaMA/comments/1llgz1e/should_localllama_move_to_fediverse/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Anyone put multiple RTX Pro 6000's in one case?
score: 0
selftext: Specifically the 600W cards, since the Max-Q are sold out everywhere. If you're running multiple of them I'd love to hear about the thermals/any issues you've faced!
created: 2025-06-27T00:54:30
url: https://www.reddit.com/r/LocalLLaMA/comments/1llgswq/anyone_put_multiple_rtx_pro_6000s_in_one_case/
author: Prestigious_Thing797
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llgswq
locked: false
media: null
name: t3_1llgswq
permalink: /r/LocalLLaMA/comments/1llgswq/anyone_put_multiple_rtx_pro_6000s_in_one_case/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Looking for Open Source Tools That Support DuckDB Querying (Like PandasAI etc.)
score: 10
selftext: Hey everyone, I'm exploring tools that support **DuckDB** querying for CSVs or tabular data — preferably ones that integrate with LLMs or allow natural language querying. I already know about **PandasAI**, **LangChain’s CSV agent**, and **LlamaIndex’s PandasQueryEngine**, but I’m specifically looking for open-source p...
created: 2025-06-27T00:50:25
url: https://www.reddit.com/r/LocalLLaMA/comments/1llgpxj/looking_for_open_source_tools_that_support_duckdb/
author: callmedevilthebad
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llgpxj
locked: false
media: null
name: t3_1llgpxj
permalink: /r/LocalLLaMA/comments/1llgpxj/looking_for_open_source_tools_that_support_duckdb/
spoiler: false
stickied: false
thumbnail: self
ups: 10
preview: null

title: Dear Mod, we don't want our posts on X/Twitter.
score: 881
selftext: Especially with no credit in the title, but rather just put in a comment just deep in there. This is user generated content, and not the property of the mods to just regurgitate whereever they wants. No harm meant, and also it seems like the majority of the community agrees with this consensus, based on downvotes of co...
created: 2025-06-27T00:07:54
url: https://i.redd.it/ber4b39v2d9f1.png
author: Pro-editor-1105
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llfufv
locked: false
media: null
name: t3_1llfufv
permalink: /r/LocalLLaMA/comments/1llfufv/dear_mod_we_dont_want_our_posts_on_xtwitter/
spoiler: false
stickied: false
thumbnail: https://external-preview…22bd0a4af25c6ca1
ups: 881
preview: {'enabled': True, 'images': [{'id': 'duhbzZgd4dDVhblE5xtdnCoj09lqg6eCAnJlSlLj4Go', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/ber4b39v2d9f1.png?width=108&crop=smart&auto=webp&s=5acce585c4231e286e9ba0afffd5fb810f4e46f9', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/ber4b39v2d9f1.pn...

title: Anyone used the Qualcomm AI SDK/QC AI 100 GPUs
score: 3
selftext: Curious....AWS has an instance running this as well. Any thoughts vs Nvidia stack?
created: 2025-06-26T23:57:32
url: https://www.reddit.com/r/LocalLLaMA/comments/1llfm7d/anyone_used_the_qualcomm_ai_sdkqc_ai_100_gpus/
author: onemoreburrito
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llfm7d
locked: false
media: null
name: t3_1llfm7d
permalink: /r/LocalLLaMA/comments/1llfm7d/anyone_used_the_qualcomm_ai_sdkqc_ai_100_gpus/
spoiler: false
stickied: false
thumbnail: self
ups: 3
preview: null

title: The mod of this server is a mod of r/grok
score: 4
selftext: Just saying. No hate meant.
created: 2025-06-26T23:53:14
url: https://i.redd.it/rk0yad6f0d9f1.png
author: Pro-editor-1105
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llfiwc
locked: false
media: null
name: t3_1llfiwc
permalink: /r/LocalLLaMA/comments/1llfiwc/the_mod_of_this_server_is_a_mod_of_rgrok/
spoiler: false
stickied: false
thumbnail: default
ups: 4
preview: {'enabled': True, 'images': [{'id': 'rk0yad6f0d9f1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/rk0yad6f0d9f1.png?width=108&crop=smart&auto=webp&s=3357cd9933efc20c927d340a9034deb63b027e63', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/rk0yad6f0d9f1.png?width=216&crop=smart&auto=webp...

title: How valuable is the lmarena data and 7a any model being trained on it?
score: 1
selftext: Would love to know! Anyone knows?
created: 2025-06-26T23:52:50
url: https://www.reddit.com/r/LocalLLaMA/comments/1llfilp/how_valuable_is_the_lmarena_data_and_7a_any_model/
author: Extra-Whereas-9408
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llfilp
locked: false
media: null
name: t3_1llfilp
permalink: /r/LocalLLaMA/comments/1llfilp/how_valuable_is_the_lmarena_data_and_7a_any_model/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Chatterbox tts - tips or advice?
score: 2
selftext: I've been working with Chatterbox tts ( https://github.com/resemble-ai/chatterbox ) and found that male older/elder voices tend to get a more pronounced accent or non-native English speaker quality as the voice is older, more elderly. Anyone seeing similar behavior? Anyone have any accent suppression, or accent consist...
created: 2025-06-26T23:38:49
url: https://www.reddit.com/r/LocalLLaMA/comments/1llf7pj/chatterbox_tts_tips_or_advice/
author: bsenftner
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llf7pj
locked: false
media: null
name: t3_1llf7pj
permalink: /r/LocalLLaMA/comments/1llf7pj/chatterbox_tts_tips_or_advice/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: {'enabled': False, 'images': [{'id': 'CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CNJa_GCOexsvX8vhtQJM_DY2zD8GtgHPBw4Cfg9Subs.png?width=108&crop=smart&auto=webp&s=84cfed297e0434f54b028dddd8225154189fc57a', 'width': 108}, {'height': 108, 'url': 'h...

title: Trained Cloud hosted sector specific LLM
score: 0
selftext: Basically Roofing company + vertex ai/Google Cloud + roofing job data (roof photos of damage, permit pdf with no sensitive customer data) and I just heard of RAG. With those components plus a web interface for employees and google olauth per employee would this be a useful feasible tool at work. Thoughts for people mor...
created: 2025-06-26T23:30:44
url: https://www.reddit.com/r/LocalLLaMA/comments/1llf1d6/trained_cloud_hosted_sector_specific_llm/
author: Ill_Worth_3248
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llf1d6
locked: false
media: null
name: t3_1llf1d6
permalink: /r/LocalLLaMA/comments/1llf1d6/trained_cloud_hosted_sector_specific_llm/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Can Llamcpp run gemma 3n?
score: 14
selftext: I followed the instructions here, but when I try to run I get unknown architecture gemma3n error. Is it not supported and I fell for a generate doc?
created: 2025-06-26T23:25:10
url: https://docs.unsloth.ai/basics/gemma-3n-how-to-run-and-fine-tune
author: thebadslime
domain: docs.unsloth.ai
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llewyp
locked: false
media: null
name: t3_1llewyp
permalink: /r/LocalLLaMA/comments/1llewyp/can_llamcpp_run_gemma_3n/
spoiler: false
stickied: false
thumbnail: default
ups: 14
preview: {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '...

title: 3060 TI $70 start bid
score: 0
selftext: https://ebay.us/m/h7hvqd
created: 2025-06-26T23:22:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1lleuvl/3060_ti_70_start_bid/
author: ReceptorDeceptor
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lleuvl
locked: false
media: null
name: t3_1lleuvl
permalink: /r/LocalLLaMA/comments/1lleuvl/3060_ti_70_start_bid/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Gemini CLI - someone already made a pull request for Local LLM providers (and more)
score: 31
selftext: It's there, but the contributor still has to complete a CLA and nobody has openly talked about reviewing it. Would giving the PR a thumbs up help it?
created: 2025-06-26T23:09:59
url: https://github.com/google-gemini/gemini-cli/pull/1939
author: merrycachemiss
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lleks2
locked: false
media: null
name: t3_1lleks2
permalink: /r/LocalLLaMA/comments/1lleks2/gemini_cli_someone_already_made_a_pull_request/
spoiler: false
stickied: false
thumbnail: default
ups: 31
preview: {'enabled': False, 'images': [{'id': '07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/07Svddxhws9NRhwiaZE7X8N_M-orx7gvT8GOb4RjL2I.png?width=108&crop=smart&auto=webp&s=8ca166df85804930dda9721ee25b257d821844a0', 'width': 108}, {'height': 108, 'url': 'h...

title: World's Fastest Virtual Try On Model Gets a 50% Resolution Boost [FASHN v1.6]
score: 0
created: 2025-06-26T22:36:37
url: https://v.redd.it/wdyfp0gemc9f1
author: parkh7
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lldti5
locked: false
media: {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/wdyfp0gemc9f1/DASHPlaylist.mpd?a=1753569411%2CNTljNDQyZDc2NzI0NjgwYmExNzU4YWI5NmU4YWE5YTM4NDQ2MzU4MjNkZGIwN2JkY2YxZTA5ZGM4YjBhYjE1ZA%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/wdyfp0gemc9f1/DASH_720.mp4?source=fallback', 'ha...
name: t3_1lldti5
permalink: /r/LocalLLaMA/comments/1lldti5/worlds_fastest_virtual_try_on_model_gets_a_50/
spoiler: false
stickied: false
thumbnail: https://external-preview…a88858e99c57df11
ups: 0
preview: {'enabled': False, 'images': [{'id': 'YWw0Y2gxZ2VtYzlmMf0yiAB72EusbfRJqfWG6E2CAnZXN1-g4XzAzfeKBauk', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/YWw0Y2gxZ2VtYzlmMf0yiAB72EusbfRJqfWG6E2CAnZXN1-g4XzAzfeKBauk.png?width=108&crop=smart&format=pjpg&auto=webp&s=2b99435a0fe48d2bdf0eeeb79b609882abea...

title: AutoInference: Multiple inference options in a single library
score: 15
selftext: Auto-Inference is a Python library that provides a unified interface for model inference using several popular backends, including Hugging Face's Transformers, Unsloth, and vLLM.
created: 2025-06-26T22:25:30
url: https://i.redd.it/0isu7rxjkc9f1.jpeg
author: According-Local-9704
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lldkdg
locked: false
media: null
name: t3_1lldkdg
permalink: /r/LocalLLaMA/comments/1lldkdg/autoinference_multiple_inference_options_in_a/
spoiler: false
stickied: false
thumbnail: https://external-preview…da233c8b6444ff38
ups: 15
preview: {'enabled': True, 'images': [{'id': 'BpKJLPqKVyOSXtg4r3SwO_zCptqk0G7Ypvcjqtb0sLM', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/0isu7rxjkc9f1.jpeg?width=108&crop=smart&auto=webp&s=1b348159b50876b28b3fd83126e99343e4b020ab', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/0isu7rxjkc9f1.j...

title: AutoInference: Multiple inference options in a single library
score: 1
created: 2025-06-26T22:22:40
url: https://github.com/VolkanSimsir/Auto-Inference
author: According-Local-9704
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lldi2y
locked: false
media: null
name: t3_1lldi2y
permalink: /r/LocalLLaMA/comments/1lldi2y/autoinference_multiple_inference_options_in_a/
spoiler: false
stickied: false
thumbnail: https://external-preview…d8efd1431cb7f838
ups: 1
preview: {'enabled': False, 'images': [{'id': 'hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hw72UuisPLbZYITFt1vL9boadDewqC5BjrpvU_FTW3s.png?width=108&crop=smart&auto=webp&s=1f0f5ba8c6e831da483f3e35b1a1abc772da1be5', 'width': 108}, {'height': 108, 'url': 'h...

title: [Question] Recommended open model for large context window?
score: 4
selftext: I'm running models on a vllm cluster, curious which ones ya'll like for large context windows + tool calling? Thanks!
created: 2025-06-26T22:22:19
url: https://www.reddit.com/r/LocalLLaMA/comments/1lldhth/question_recommended_open_model_for_large_context/
author: soorg_nalyd
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lldhth
locked: false
media: null
name: t3_1lldhth
permalink: /r/LocalLLaMA/comments/1lldhth/question_recommended_open_model_for_large_context/
spoiler: false
stickied: false
thumbnail: self
ups: 4
preview: null

title: Automatically Evaluating AI Coding Assistants with Each Git Commit (Open Source)
score: 3
created: 2025-06-26T22:15:10
url: https://www.tensorzero.com/blog/automatically-evaluating-ai-coding-assistants-with-each-git-commit/
author: bianconi
domain: tensorzero.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lldbts
locked: false
media: null
name: t3_1lldbts
permalink: /r/LocalLLaMA/comments/1lldbts/automatically_evaluating_ai_coding_assistants/
spoiler: false
stickied: false
thumbnail: default
ups: 3
preview: {'enabled': False, 'images': [{'id': 'K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=108&crop=smart&auto=webp&s=7f316b890b2a31a8f62865e9dee0569e96f0223c', 'width': 108}, {'height': 113, 'url': 'h...

title: New top of the table - MMLU-Pro
score: 1
selftext: https://preview.redd.it/…iet for donkeys.
created: 2025-06-26T22:09:51
url: https://www.reddit.com/r/LocalLLaMA/comments/1lld7do/new_top_of_the_table_mmlupro/
author: Secure_Reflection409
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lld7do
locked: false
media: null
name: t3_1lld7do
permalink: /r/LocalLLaMA/comments/1lld7do/new_top_of_the_table_mmlupro/
spoiler: false
stickied: false
thumbnail: https://external-preview…ee717c96c1e7d5df
ups: 1
preview: {'enabled': False, 'images': [{'id': 'yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yuVyL4HKPanu58NTCiOZ_BWuJnrwddJmam-DxHRiK2k.png?width=108&crop=smart&auto=webp&s=a64044a7db355d0608c1dc00dfece93273e43bc8', 'width': 108}, {'height': 116, 'url': 'h...

title: Let's talk about Google's Gemma license
score: 12
selftext: I was just reviewing Google's Gemma license, because it is discouraging me from using Gemma3 to generate synthetic training data, when something else occurred to me: By my layperson's understanding of the license, some Gemma derivative models (maybe Amoral and Fallen, but ***definitely*** Tiger-Gemma, Big-Tiger-Gemma,...
created: 2025-06-26T21:59:40
url: https://www.reddit.com/r/LocalLLaMA/comments/1llcyvu/lets_talk_about_googles_gemma_license/
author: ttkciar
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llcyvu
locked: false
media: null
name: t3_1llcyvu
permalink: /r/LocalLLaMA/comments/1llcyvu/lets_talk_about_googles_gemma_license/
spoiler: false
stickied: false
thumbnail: self
ups: 12
preview: {'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'h...

title: My first project. Looking for some feedback!
score: 2
selftext: I have uploaded my first GitHub repo (ever) and it is about my first project in this community. My background is actually in materials science and aerospace engineering and i am working as a post grad in my local research institute FORTH, and i will be starting my PhD this winter with this project as a foundation. I w...
created: 2025-06-26T21:31:47
url: https://github.com/MariosAdamidis/FORTHought
author: Exotic-Investment110
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llcbay
locked: false
media: null
name: t3_1llcbay
permalink: /r/LocalLLaMA/comments/1llcbay/my_first_project_looking_for_some_feedback/
spoiler: false
stickied: false
thumbnail: https://external-preview…97df393b0bf9f102
ups: 2
preview: {'enabled': False, 'images': [{'id': 'ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ero0p_ShouIwSUkuCnRXm9TL-2pXiutSbGY83-h7PQE.png?width=108&crop=smart&auto=webp&s=040f0cffded33a86b5cee1e5736d81539579ccd6', 'width': 108}, {'height': 108, 'url': 'h...

title: Tilde pits DeepSeek’s “NSA” vs Kimi’s “MoBA” sparse attention - the key to long-context LLM
score: 12
selftext: Just finished Tilde Research’s new blog on sparse attention. They benchmark the two schemes in Chinese long-context models—DeepSeek’s Native Sparse Attention (NSA) and Moonshot/Kimi’s Mixture of Block Attention (MoBA)—against full attention. Sparse attention exploits inherent sparsity in model attention patterns t...
created: 2025-06-26T21:31:45
url: https://www.reddit.com/r/LocalLLaMA/comments/1llcb9x/tilde_pits_deepseeks_nsa_vs_kimis_moba_sparse/
author: nekofneko
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llcb9x
locked: false
media: null
name: t3_1llcb9x
permalink: /r/LocalLLaMA/comments/1llcb9x/tilde_pits_deepseeks_nsa_vs_kimis_moba_sparse/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…o72yV2AxNjnw.jpg
ups: 12
preview: {'enabled': False, 'images': [{'id': 'b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/b9PAY9Uys9eVz0QmKOo2RFmkWMPBPY0JczoG9wyn_wQ.png?width=108&crop=smart&auto=webp&s=5f7c3693a1864e9ca63abfa502b90db66881100f', 'width': 108}, {'height': 113, 'url': 'h...

title: DeepSeek tool calling with llama.cpp
score: 1
selftext: What's been everyone's experience using deepseek models and tool calling? I've been struggling to get llama.cpp to properly call tools. Not sure if its my client, or where the issue may lie. My code works fine against the OpenAI API I crash llama.cpp. Built from source, compiled/fetched today Template support...
created: 2025-06-26T21:29:14
url: https://www.reddit.com/r/LocalLLaMA/comments/1llc94w/deepseek_tool_calling_with_llamacpp/
author: Commercial-Screen973
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llc94w
locked: false
media: null
name: t3_1llc94w
permalink: /r/LocalLLaMA/comments/1llc94w/deepseek_tool_calling_with_llamacpp/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: 4 x 3090 or 2 7900xtx?
score: 2
selftext: I can buy 4x3090 or 2 7900xtx and I have already one 7900xtx so it makes 3 7900xtx. Which build makes more sense?
created: 2025-06-26T21:21:03
url: https://www.reddit.com/r/LocalLLaMA/comments/1llc20x/4_x_3090_or_2_7900xtx/
author: tutami
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llc20x
locked: false
media: null
name: t3_1llc20x
permalink: /r/LocalLLaMA/comments/1llc20x/4_x_3090_or_2_7900xtx/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null

title: Open Source Local LLM Web Extension!
score: 2
selftext: Hi all! Just wanted to put a little project I've been working on here so people can check it out if they want to! I've always wanted to use local LLMs on the web, so I decided it would be fun to make my own interface for AI-assisted web browsing! Currently, CLAIRE is designed to be used with LMStudio models but Ollama ...
created: 2025-06-26T21:17:56
url: https://www.reddit.com/r/LocalLLaMA/comments/1llbz9j/open_source_local_llm_web_extension/
author: jahyeet42
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llbz9j
locked: false
media: null
name: t3_1llbz9j
permalink: /r/LocalLLaMA/comments/1llbz9j/open_source_local_llm_web_extension/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: {'enabled': False, 'images': [{'id': 'oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oXgXiBZxa3ZgVl1BVmdTEuLoddjNm3AP-3AjDuvBOBg.png?width=108&crop=smart&auto=webp&s=51458e19590c192bb4ecf59fcf925ba9eeacea48', 'width': 108}, {'height': 108, 'url': 'h...

title: I want to talk to a 1000 page long pdf book, but how? Basically i dont really have the time to read it fully, but still really do want to gain at least the most important bits of knowledge from it! Beside just dumping it straight into gemini, what are my options? got a maxed out macbook m2 if needed
score: 5
created: 2025-06-26T21:06:47
url: https://i.redd.it/ouiakv6l6c9f1.png
author: visionsmemories
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1llbp9u
locked: false
media: null
name: t3_1llbp9u
permalink: /r/LocalLLaMA/comments/1llbp9u/i_want_to_talk_to_a_1000_page_long_pdf_book_but/
spoiler: false
stickied: false
thumbnail: default
ups: 5
preview: {'enabled': True, 'images': [{'id': 'ouiakv6l6c9f1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/ouiakv6l6c9f1.png?width=108&crop=smart&auto=webp&s=124dcee2887994ae23f3c71fed547b937e6f8fe4', 'width': 108}, {'height': 77, 'url': 'https://preview.redd.it/ouiakv6l6c9f1.png?width=216&crop=smart&auto=webp...
Crazy how this subreddit started out focused on Meta's LLaMA and ended up becoming a full-blown AI channel.
267
2025-06-26T20:44:28
https://i.redd.it/x6kkfnuo2c9f1.png
SilverRegion9394
i.redd.it
1970-01-01T00:00:00
0
{}
1llb5e9
false
null
t3_1llb5e9
/r/LocalLLaMA/comments/1llb5e9/crazy_how_this_subreddit_started_out_focused_on/
false
false
default
267
{'enabled': True, 'images': [{'id': 'x6kkfnuo2c9f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/x6kkfnuo2c9f1.png?width=108&crop=smart&auto=webp&s=7f3c463fb555af419a8748f8ed4f61e046cbb40c', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/x6kkfnuo2c9f1.png?width=216&crop=smart&auto=we...
I’ve been fine tuning a small llm 500m parameter on my MacBook !!!
26
It’s for an STT & TTS engine that I’m trying to build, but I can’t figure out how to get it running in multiple threads 😮‍💨
2025-06-26T20:38:58
https://i.redd.it/tfvnaqas1c9f1.jpeg
Ok-Math-5601
i.redd.it
1970-01-01T00:00:00
0
{}
1llb0et
false
null
t3_1llb0et
/r/LocalLLaMA/comments/1llb0et/ive_been_fine_tuning_a_small_llm_500m_parameter/
false
false
default
26
{'enabled': True, 'images': [{'id': 'tfvnaqas1c9f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/tfvnaqas1c9f1.jpeg?width=108&crop=smart&auto=webp&s=1b5478b063a8957a06bd1f1db52605af969efa97', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/tfvnaqas1c9f1.jpeg?width=216&crop=smart&auto=...
What is this checkmark next to our subreddit name?
122
2025-06-26T20:35:31
https://i.redd.it/u8j9adw41c9f1.png
Pro-editor-1105
i.redd.it
1970-01-01T00:00:00
0
{}
1llaxaz
false
null
t3_1llaxaz
/r/LocalLLaMA/comments/1llaxaz/what_is_this_checkmark_next_to_our_subreddit_name/
false
false
https://external-preview…d8c30ee171c6d135
122
{'enabled': True, 'images': [{'id': 'XgnF64BuiL4d_73cqgNOWeUkufsWWaQj81blQwD-hxw', 'resolutions': [{'height': 30, 'url': 'https://preview.redd.it/u8j9adw41c9f1.png?width=108&crop=smart&auto=webp&s=e6d0e3eecefd8df3e4dd55d6435e77e22e967e1b', 'width': 108}, {'height': 61, 'url': 'https://preview.redd.it/u8j9adw41c9f1.png?...
Arch-Agent Family of LLMs - Designed for fast, multi-step agent orchestration.
14
https://preview.redd.it/…katanemo/archgw)
2025-06-26T20:34:27
https://www.reddit.com/r/LocalLLaMA/comments/1llawcf/archagent_family_of_llms_designed_for_fast/
AdditionalWeb107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1llawcf
false
null
t3_1llawcf
/r/LocalLLaMA/comments/1llawcf/archagent_family_of_llms_designed_for_fast/
false
false
https://b.thumbs.redditm…fcxwkscqULNs.jpg
14
{'enabled': False, 'images': [{'id': '3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3XQOsvT905GnlfeEvoOsJJEZYArlF_pQKyhe7nzO-Iw.png?width=108&crop=smart&auto=webp&s=f306a4c12f18439ec988c0089df6e8d527172f8b', 'width': 108}, {'height': 116, 'url': 'h...
I made a "fake reasoning" model. Surprising Results.
0
[https://github.com/hassanhamza930/thinkfast](https://github.com/hassanhamza930/thinkfast) I just chained 4 instances of Gemini Flash 2.5 Lite to act essentially as a fake reasoning system, adding artificial reasoning tokens to any OpenRouter LLM call. Gemini Flash 2.5 Lite is super cool because it's ultra low latency, i ...
2025-06-26T20:19:34
https://i.redd.it/cc6d470txb9f1.png
freakH3O
i.redd.it
1970-01-01T00:00:00
0
{}
1llaiuy
false
null
t3_1llaiuy
/r/LocalLLaMA/comments/1llaiuy/i_made_a_fake_reasoning_model_surprising_results/
false
false
default
0
{'enabled': True, 'images': [{'id': 'cc6d470txb9f1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/cc6d470txb9f1.png?width=108&crop=smart&auto=webp&s=8614315d7b7c6078bfb46244c78523b12b56dc22', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/cc6d470txb9f1.png?width=216&crop=smart&auto=web...
Google DeepMind Releases AlphaGenome
113
2025-06-26T20:01:14
https://deepmind.google/discover/blog/alphagenome-ai-for-better-understanding-the-genome/
aithrowaway22
deepmind.google
1970-01-01T00:00:00
0
{}
1lla27f
false
null
t3_1lla27f
/r/LocalLLaMA/comments/1lla27f/google_deepmind_releases_alphagenome/
false
false
default
113
{'enabled': False, 'images': [{'id': '43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/43SAwvb1n5vlp2Qq_6_pefepMSOiGDZDO8afisrPhzg.png?width=108&crop=smart&auto=webp&s=4d20ac23e4c7510279a9f5cbed27f7beed93d5ef', 'width': 108}, {'height': 113, 'url': 'h...
Best model for writing style transfer/marketing script generation
5
I am playing around with a bot for marketing ad script generation for a particular product. As a reference I have some relatively brief documentation about the product/its previous marketing angles as well as a database of about 150 previous ad scripts for this product with their corresponding success metrics (CTR/CPA,...
2025-06-26T19:57:53
https://www.reddit.com/r/LocalLLaMA/comments/1ll9z2j/best_model_for_writing_style_transfermarketing/
Malkus3000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll9z2j
false
null
t3_1ll9z2j
/r/LocalLLaMA/comments/1ll9z2j/best_model_for_writing_style_transfermarketing/
false
false
self
5
null
I built a minimal Web UI for interacting with locally running Ollama models – lightweight, fast, and clean ✨
0
Hey everyone! I was recently looking for a **simple and clean web UI** to interact with **locally running Ollama models**, but I couldn’t find anything that truly fit my needs. Everything I came across was either: * Too bloated with features I didn’t need * Not very good-looking * Or just plain slow So I decided to ...
2025-06-26T19:38:05
https://www.reddit.com/r/LocalLLaMA/comments/1ll9hid/i_built_a_minimal_web_ui_for_interacting_with/
princesaini97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll9hid
false
null
t3_1ll9hid
/r/LocalLLaMA/comments/1ll9hid/i_built_a_minimal_web_ui_for_interacting_with/
false
false
self
0
{'enabled': False, 'images': [{'id': '7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7QBW3VNxTz5SkQXWOSbAPse8lUTqdwlANZdwvqY36N8.png?width=108&crop=smart&auto=webp&s=8b70c076f9fc3d7c28125f996e557be954d5d31b', 'width': 108}, {'height': 108, 'url': 'h...
Any local LLMs for voice-to-text? I am tired of scam callers and want to waste their time
13
Thinking of using an ESP32 and a button to tell my Windows system to automatically switch over to a Bluetooth headset/LLM and waste their time. Anyone have something simple with a GitHub repo that I can use? Doing research, so starting here first.
2025-06-26T19:27:01
https://www.reddit.com/r/LocalLLaMA/comments/1ll979q/any_local_llms_for_voice_to_text_i_am_tired_of/
wwwzombocom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll979q
false
null
t3_1ll979q
/r/LocalLLaMA/comments/1ll979q/any_local_llms_for_voice_to_text_i_am_tired_of/
false
false
self
13
null
Ollama 0.9.3 released today and supports gemma-3n e4b/e2b
1
https://preview.redd.it/…ma-3n e4b/e2b...
2025-06-26T19:20:26
https://www.reddit.com/r/LocalLLaMA/comments/1ll91b3/ollama_093_released_today_and_support_gemma3n/
StormrageBG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll91b3
false
null
t3_1ll91b3
/r/LocalLLaMA/comments/1ll91b3/ollama_093_released_today_and_support_gemma3n/
false
false
https://b.thumbs.redditm…nnth8io1RocE.jpg
1
null
Installing Gemma3n via Ollama
0
Anyone else getting this issue: ollama run gemma3n:e4b pulling manifest  Error: pull model manifest: 412:  The model you are attempting to pull requires a newer version of Ollama. Please download the latest version at: [https://ollama.com/download](https://ollama.com/download)
2025-06-26T18:57:20
https://www.reddit.com/r/LocalLLaMA/comments/1ll8gb5/installing_gemma3n_via_ollama/
LibraryAdditional347
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll8gb5
false
null
t3_1ll8gb5
/r/LocalLLaMA/comments/1ll8gb5/installing_gemma3n_via_ollama/
false
false
self
0
null
How to sync context across AI Assistants (ChatGPT, Claude, Perplexity, Grok, Gemini...) in your browser
0
I usually use multiple AI assistants (chatgpt, perplexity, claude) but most of the time I just end up repeating myself or forgetting past chats, it is really frustrating since there is no shared context. I found OpenMemory chrome extension (open source) that was launched recently which fixes this by adding a shared “m...
2025-06-26T18:52:12
https://levelup.gitconnected.com/how-to-sync-context-across-ai-assistants-chatgpt-claude-perplexity-etc-in-your-browser-c4de54fe9b33?source=friends_link&sk=7ed1c3eebe1210a27e424ef9e4eaaffb
anmolbaranwal
levelup.gitconnected.com
1970-01-01T00:00:00
0
{}
1ll8bmw
false
null
t3_1ll8bmw
/r/LocalLLaMA/comments/1ll8bmw/how_to_sync_context_across_ai_assistants_chatgpt/
false
false
default
0
{'enabled': False, 'images': [{'id': 'UPNBanEM5YOZ_1hQiNBz1MMudJ9WQ7rphhyfLBzrWTc', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/UPNBanEM5YOZ_1hQiNBz1MMudJ9WQ7rphhyfLBzrWTc.png?width=108&crop=smart&auto=webp&s=ac047e4a215e084b82eba7afd37c099bb1eaf5a7', 'width': 108}, {'height': 90, 'url': 'ht...
Gemma 3n vs Gemma 3 (4B/12B) Benchmarks
104
I compiled all of the available official first-party benchmark results from google's model cards available here [https://ai.google.dev/gemma/docs/core/model\_card\_3#benchmark\_results](https://ai.google.dev/gemma/docs/core/model_card_3#benchmark_results) into a table to compare how the new 3N models do compared to the...
2025-06-26T18:49:09
https://www.reddit.com/r/LocalLLaMA/comments/1ll88pe/gemma_3n_vs_gemma_3_4b12b_benchmarks/
lemon07r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll88pe
false
null
t3_1ll88pe
/r/LocalLLaMA/comments/1ll88pe/gemma_3n_vs_gemma_3_4b12b_benchmarks/
false
false
self
104
{'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'h...
Will an H270 board + RTX 3090 handle vLLM (Mistral-7B/12B) well?
3
Hey all, I’m putting together a budget‐friendly workstation to tinker with vLLM and run Mistral-7B/12B locally on a single RTX 3090. Parts I already have: * Intel i7-7700K + Corsair 240 mm AIO * EVGA RTX 3090 (24 GB) * 32 GB DDR4-3000 * Corsair Carbide 270R case What I still need to buy: * ASUS Prime H270M-PLUS (mA...
2025-06-26T18:46:46
https://www.reddit.com/r/LocalLLaMA/comments/1ll86jw/will_an_h270_board_rtx_3090_handle_vllm/
RedMapSec
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll86jw
false
null
t3_1ll86jw
/r/LocalLLaMA/comments/1ll86jw/will_an_h270_board_rtx_3090_handle_vllm/
false
false
self
3
null
AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
0
https://preview.redd.it/f6l8m4i2hb9f1.png?width=582&format=png&auto=webp&s=c44963182fd8c562d5e56b61c6180367ca633cc2

Just finished reading AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor. When I first started reading the book, I thought it would be just another one of those AI books full of big promises and hype. But I was totally wrong. This one is different: it’s clear, honest, and based on real facts. It explains what AI is really good at, and just as importantly, what it can’t do. Here are some of the key things I learned:

Let’s start with a basic question, especially for those who, like me, hadn’t heard this term before: what is AI snake oil? In the simplest terms, it’s like a fake miracle cure. Back in the day, people used to sell bottles of magic medicine that promised to fix everything but didn’t really work. The authors use the term to describe AI tools or products that are sold with big promises but don’t actually deliver what they claim. So AI snake oil is when people use fancy terms and hype to sell AI tools that sound amazing but don’t really do much, or aren’t trustworthy. This book helps you figure out what’s real and what’s just marketing fluff.

1️⃣ Specialized Skills ≠ General Intelligence. Most AI tools are built to do one job really well, like translating a sentence or finding objects in a photo. But just because they do that one thing well doesn’t mean they understand language or think like we do. The authors explain that many people make the mistake of thinking these small wins mean AI is becoming like a human brain. But that’s not true. These systems are specialists, not all-rounders. It’s important not to confuse doing one task well with having real intelligence.

I somewhat disagree with that, because while it’s true for traditional machine learning, general-purpose AI models like ChatGPT perform reasonably well across a wide range of tasks. But after reading further, I realized that what the authors mean is that even these advanced models aren’t truly thinking like humans. They’re really good at mimicking patterns from the data they were trained on, but they don’t actually understand meaning the way people do. So while tools like ChatGPT are impressive and useful, we still need to be careful not to overestimate what they’re capable of.

2️⃣ The Problem with Predictive AI. This is a problem we’re all aware of: a lot of AI tools used today, especially in hiring, lending, or even policing, make decisions based on past data. But here’s the issue: if that data includes human bias, the AI ends up repeating those same biases. For example, if a company’s past hiring favored certain groups, an AI trained on that data might keep favoring them and unfairly reject good candidates from other backgrounds. The same thing can happen with loan approvals or predicting someone’s risk in law enforcement. The authors explain that this isn’t just a tech problem, it’s a real-world problem. In sensitive areas like jobs, healthcare, or justice, these biased predictions can hurt people in serious ways. So the takeaway is: if we don’t fix the bias in the data, the AI will keep making the same unfair choices.

3️⃣ Can AI Really Moderate Content? We’ve all heard claims that AI will fix problems like hate speech, fake news, or harmful content online. But the book explains why that’s not so simple. AI can spot some things pretty well, like violent images, nudity, or banned symbols. But when it comes to things like sarcasm, jokes, or cultural references, it often gets confused. For example, it might wrongly flag a joke as hate speech, or miss something that’s actually harmful because it doesn’t understand the context. The authors say that while AI can help, it’s not ready to replace human moderators. Real people are still better at understanding the full picture and making fair decisions.

✅ Smarter Rules, Not Total Bans. The authors aren’t saying we should stop using AI. They’re actually pro-AI, but they believe we need to use it wisely. Instead of banning AI completely, they suggest putting smarter rules in place. For example, AI shouldn’t be allowed to make important decisions like hiring someone without a human being involved. They also say it’s super important for more people to understand how AI works. Whether you’re a student or a CEO, learning the basics of AI can help you make better choices and avoid being fooled by hype.

🌟 A Realistic but Hopeful Message. Even though the book points out a lot of problems, it’s not negative. The authors believe AI has the potential to do a lot of good, like helping students learn better, supporting people with disabilities, or speeding up research.

Their final message is inspiring: don’t just believe the hype. Stay curious, ask tough questions, and be part of shaping how AI is used. That way, we get more real progress and less snake oil.

Book link: [https://www.amazon.com/dp/0691249148/](https://www.amazon.com/dp/0691249148/)
2025-06-26T18:43:22
https://www.reddit.com/r/LocalLLaMA/comments/1ll83ip/ai_snake_oil_what_artificial_intelligence_can_do/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll83ip
false
null
t3_1ll83ip
/r/LocalLLaMA/comments/1ll83ip/ai_snake_oil_what_artificial_intelligence_can_do/
false
false
https://b.thumbs.redditm…hoHTusj86bXU.jpg
0
null
Gemma 3n Benchmarks VS Gemma 3 (4B/12B)
1
I compiled the official benchmark results from google's model cards available here [https://ai.google.dev/gemma/docs/core/model\_card\_3#benchmark\_results](https://ai.google.dev/gemma/docs/core/model_card_3#benchmark_results) into a table to compare how the new 3N models do compared to their older non-n Gemma 3 siblin...
2025-06-26T18:42:00
https://www.reddit.com/r/LocalLLaMA/comments/1ll82cf/gemma_3n_benchmarks_vs_gemma_3_4b12b/
lemon07r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll82cf
false
null
t3_1ll82cf
/r/LocalLLaMA/comments/1ll82cf/gemma_3n_benchmarks_vs_gemma_3_4b12b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'h...
Benchmarked Google’s new Gemma 3 models on our inference runtime — sub-second cold starts
2
We ran cold start benchmarks for both text and image variants of the 4B Gemma-3-it models: •Text2Text • Start Latency: 427 ms • Time to First Token: 274 ms •Image2Text • Start Latency: 432 ms • Time to First Token: 854 ms These numbers are from a true cold start: no preloading, no tricks. We’re working on making m...
2025-06-26T18:38:09
https://www.reddit.com/gallery/1ll7yv2
pmv143
reddit.com
1970-01-01T00:00:00
0
{}
1ll7yv2
false
null
t3_1ll7yv2
/r/LocalLLaMA/comments/1ll7yv2/benchmarked_googles_new_gemma_3_models_on_our/
false
false
https://external-preview…514b25cef652ef4c
2
{'enabled': True, 'images': [{'id': 'DdpKH3P-lxWt4Kr8J65DbsvX-Ew6ME4j8VU5UReRmfo', 'resolutions': [{'height': 181, 'url': 'https://external-preview.redd.it/DdpKH3P-lxWt4Kr8J65DbsvX-Ew6ME4j8VU5UReRmfo.jpeg?width=108&crop=smart&auto=webp&s=75cdcf2c2ec130e6f4529673896ce4e49bd559f2', 'width': 108}, {'height': 363, 'url': '...
Notebook to supervised fine tune Google Gemma 3n for GUI
3
This notebook demonstrates how to fine-tune the Gemma-3n vision-language model on the ScreenSpot dataset using TRL (Transformers Reinforcement Learning) with PEFT (Parameter Efficient Fine-Tuning) techniques. **Model**: `google/gemma-3n-E2B-it` * **Dataset**: `rootsautomation/ScreenSpot` * **Task**: Training the m...
2025-06-26T18:21:27
https://colab.research.google.com/drive/1ML9XAjGKKUmFObAsZbEw__G1di24lenX?usp=sharing
Zealousideal-Cut590
colab.research.google.com
1970-01-01T00:00:00
0
{}
1ll7jo1
false
null
t3_1ll7jo1
/r/LocalLLaMA/comments/1ll7jo1/notebook_to_supervised_fine_tune_google_gemma_3n/
false
false
default
3
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': '...
Phone is the best medium for web applications. When it comes to AI, what is the best medium?
0
Over the past five years, AI has rapidly evolved—from something confined to big enterprise (recommendation👍, surveillance📹) to something you can casually interact with on your phone (ChatGPT💬). As AI is becoming a daily utility, as essential as water, electricity, or the internet, it prompts me to think: If the sma...
2025-06-26T17:49:34
https://www.reddit.com/r/LocalLLaMA/comments/1ll6ppk/phone_is_the_best_media_for_web_applications_when/
Pleasant-Type2044
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll6ppk
false
null
t3_1ll6ppk
/r/LocalLLaMA/comments/1ll6ppk/phone_is_the_best_media_for_web_applications_when/
false
false
self
0
null
DeepSeek R2 delayed
764
> Over the past several months, DeepSeek's engineers have been working to refine R2 until Liang gives the green light for release, according to The Information. However, a fast adoption of R2 could be difficult due to a shortage of Nvidia server chips in China as a result of U.S. export regulations, the report said, ci...
2025-06-26T17:43:13
https://i.redd.it/718m48of6b9f1.jpeg
FeathersOfTheArrow
i.redd.it
1970-01-01T00:00:00
0
{}
1ll6jo5
false
null
t3_1ll6jo5
/r/LocalLLaMA/comments/1ll6jo5/deepseek_r2_delayed/
false
false
default
764
{'enabled': True, 'images': [{'id': '718m48of6b9f1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/718m48of6b9f1.jpeg?width=108&crop=smart&auto=webp&s=fa90b74c3d17f64d0a6dd4cd7b1df872fd8f2bd6', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/718m48of6b9f1.jpeg?width=216&crop=smart&auto=w...
DeepSeek R2 delayed
2
> Over the past several months, DeepSeek's engineers have been working to refine R2 until Liang gives the green light for release, according to The Information. However, a fast adoption of R2 could be difficult due to a shortage of Nvidia server chips in China as a result of U.S. export regulations, the ...
2025-06-26T17:39:12
https://i.redd.it/qjeqkwup5b9f1.jpeg
FeathersOfTheArrow
i.redd.it
1970-01-01T00:00:00
0
{}
1ll6fta
false
null
t3_1ll6fta
/r/LocalLLaMA/comments/1ll6fta/deepseek_r2_delayed/
false
false
https://external-preview…fddf48b4ec84d5e8
2
{'enabled': True, 'images': [{'id': 'jeZCmZUOceaUP8pLRnzTWrLDlA_vPenkUSFzxR9N-v0', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/qjeqkwup5b9f1.jpeg?width=108&crop=smart&auto=webp&s=339b974575af78315deef7204b023a43c1b91e36', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/qjeqkwup5b9f1.jp...
DeepSeek R2 delayed
1
[deleted]
2025-06-26T17:38:29
[deleted]
1970-01-01T00:00:00
0
{}
1ll6f3s
false
null
t3_1ll6f3s
/r/LocalLLaMA/comments/1ll6f3s/deepseek_r2_delayes/
false
false
default
1
null
Gemma 3n Full Launch - Developers Edition
269
Hi! Today we have the full launch of Gemma 3n, meaning we have support for your favorite tools as well as full support for its capabilities [https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/](https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/) Recap * Audio, video, i...
2025-06-26T17:31:27
https://www.reddit.com/r/LocalLLaMA/comments/1ll68iz/gemma_3n_full_launch_developers_edition/
hackerllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll68iz
false
null
t3_1ll68iz
/r/LocalLLaMA/comments/1ll68iz/gemma_3n_full_launch_developers_edition/
false
false
self
269
{'enabled': False, 'images': [{'id': 'eHfI39XOwyE8P4IjKYb0B5m67lOoScxaDCjqtf9pEIE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eHfI39XOwyE8P4IjKYb0B5m67lOoScxaDCjqtf9pEIE.png?width=108&crop=smart&auto=webp&s=7a286ebaa447682e25572dc4784595f50927da16', 'width': 108}, {'height': 108, 'url': 'h...
The cost effective way to run Deepseek R1 models on cheaper hardware
4
It's possible to run DeepSeek R1 in full size if you have a lot of GPUs in one machine with NVLink; the problem is that it's very expensive. What are the options for running it on a budget (say up to $15k) while quantizing without substantial loss of performance? My understanding is that R1 is a MoE model, and thus coul...
2025-06-26T17:24:49
https://www.reddit.com/r/LocalLLaMA/comments/1ll62i4/the_cost_effective_way_to_run_deepseek_r1_models/
ArtisticHamster
self.LocalLLaMA
2025-06-26T17:30:18
0
{}
1ll62i4
false
null
t3_1ll62i4
/r/LocalLLaMA/comments/1ll62i4/the_cost_effective_way_to_run_deepseek_r1_models/
false
false
self
4
null
Gemma 3n is now stable on HuggingFace
35
2025-06-26T17:18:14
https://huggingface.co/collections/google/gemma-3n-685065323f5984ef315c93f4
best_codes
huggingface.co
1970-01-01T00:00:00
0
{}
1ll5w6m
false
null
t3_1ll5w6m
/r/LocalLLaMA/comments/1ll5w6m/gemma_3n_is_now_stable_on_huggingface/
false
false
https://external-preview…a38d72cd29ece81f
35
{'enabled': False, 'images': [{'id': 'ELsux0mwxWZPvalnHOQRSKe_mzDmS7uNjsKunK8e1U8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ELsux0mwxWZPvalnHOQRSKe_mzDmS7uNjsKunK8e1U8.png?width=108&crop=smart&auto=webp&s=a470f003a2d346b549c38415c0f02ae6b0caad25', 'width': 108}, {'height': 116, 'url': 'h...
Privacy / Data
2
Hello. I'm currently creating an automation in n8n (I'm going to switch to cloud hosting on my own server) and was wondering: are there any APIs that are private, as in no data tracking? It's not an absolute must, but it would be nice. Internet access is a necessity, though (real-time search). Thank you!
2025-06-26T17:13:57
https://www.reddit.com/r/LocalLLaMA/comments/1ll5rxq/privacy_data/
Short_Move6167
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll5rxq
false
null
t3_1ll5rxq
/r/LocalLLaMA/comments/1ll5rxq/privacy_data/
false
false
self
2
null
Which is the best small local LLM models for tasks like doing research and generating insights
2
I have been working with a lot of local LLMs and building complex workflows. I have recently tested qwen3:8b and gemma3:12b; both are really good for a few tasks, but I also want to know if there are even better models than these.
2025-06-26T17:13:38
https://www.reddit.com/r/LocalLLaMA/comments/1ll5rmh/which_is_the_best_small_local_llm_models_for/
Solid_Woodpecker3635
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll5rmh
false
null
t3_1ll5rmh
/r/LocalLLaMA/comments/1ll5rmh/which_is_the_best_small_local_llm_models_for/
false
false
self
2
null
Gemini = cooked
0
I asked Gemini to compare the services offered vs Claude. Turns out Gemini only knows of Claude 3. A bit more poking and I got the below out of it: "My internal knowledge, the vast dataset I was trained on, has a cutoff date in early 2023"
2025-06-26T16:53:23
https://www.reddit.com/r/LocalLLaMA/comments/1ll58oa/gemini_cooked/
Nangatang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll58oa
false
null
t3_1ll58oa
/r/LocalLLaMA/comments/1ll58oa/gemini_cooked/
false
false
self
0
null
Gemma 3n is out on Hugging Face!
129
Google just dropped the perfect local model! https://huggingface.co/collections/google/gemma-3n-685065323f5984ef315c93f4 https://huggingface.co/blog/gemma3n
2025-06-26T16:52:30
https://www.reddit.com/r/LocalLLaMA/comments/1ll57uz/gemma_3n_is_on_out_on_hugging_face/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll57uz
false
null
t3_1ll57uz
/r/LocalLLaMA/comments/1ll57uz/gemma_3n_is_on_out_on_hugging_face/
false
false
self
129
{'enabled': False, 'images': [{'id': 'ELsux0mwxWZPvalnHOQRSKe_mzDmS7uNjsKunK8e1U8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ELsux0mwxWZPvalnHOQRSKe_mzDmS7uNjsKunK8e1U8.png?width=108&crop=smart&auto=webp&s=a470f003a2d346b549c38415c0f02ae6b0caad25', 'width': 108}, {'height': 116, 'url': 'h...
Roast My SaaS Application
0
Guys - I have built an app which creates a roadmap of chapters that you need to read to learn a given topic. It is personalized, so chapters are created at runtime based on the user's learning curve. The user has to pass each quiz to unlock the next chapter. Below is the video; check it out and tell me what you think and...
2025-06-26T16:39:07
https://v.redd.it/revua3bwua9f1
Significant_Abroad36
/r/LocalLLaMA/comments/1ll4vet/roast_my_saas_application/
1970-01-01T00:00:00
0
{}
1ll4vet
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/revua3bwua9f1/DASHPlaylist.mpd?a=1753677551%2CNGI2MjE3YzE4MjY1ZWU2YjRmOGVkMWMxNGUyOWJmMmEyNGMxYjViOGRmOWY2YzY0OTg0Mjc0ZTliODgyZDcyOQ%3D%3D&v=1&f=sd', 'duration': 170, 'fallback_url': 'https://v.redd.it/revua3bwua9f1/DASH_720.mp4?source=fallback', 'h...
t3_1ll4vet
/r/LocalLLaMA/comments/1ll4vet/roast_my_saas_application/
false
false
https://external-preview…e6d63350bf752255
0
{'enabled': False, 'images': [{'id': 'czFxYjh4ZHd1YTlmMWde8gwunT_bYnWBdtlOsaKnzitVEHx3CN-s6EvsSOjA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/czFxYjh4ZHd1YTlmMWde8gwunT_bYnWBdtlOsaKnzitVEHx3CN-s6EvsSOjA.png?width=108&crop=smart&format=pjpg&auto=webp&s=2f2ab8e7268926f8ede91236a3714e66c14a7...
Rtx 5000 support in oobabooga?
1
Hey. Is RTX 5000 already supported normally, or do I need to black-magic it through PyTorch Nightly with all the EXL2/3 compilations forced in manually?
2025-06-26T16:26:41
https://www.reddit.com/r/LocalLLaMA/comments/1ll4k4s/rtx_5000_support_in_oobabooga/
Nicholas_Matt_Quail
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll4k4s
false
null
t3_1ll4k4s
/r/LocalLLaMA/comments/1ll4k4s/rtx_5000_support_in_oobabooga/
false
false
self
1
null
What are the best lightweight llm models (individuals can run on the cloud) to fine tune at the moment?
0
Thank you in advance for sharing your wisdom
2025-06-26T16:25:10
https://www.reddit.com/r/LocalLLaMA/comments/1ll4iqz/what_are_the_best_lightweight_llm_models/
kunyoungpark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll4iqz
false
null
t3_1ll4iqz
/r/LocalLLaMA/comments/1ll4iqz/what_are_the_best_lightweight_llm_models/
false
false
self
0
null
My Python AI Dev Tool: Avakin - Local LLMs, Project-Specific + Global RAG, & More
27
Hey r/LocalLLaMA, I've been working on a project called Avakin, a desktop AI development environment for Python, and wanted to share it with this community. My goal was to create a tool that deeply integrates with the development workflow, leverages local LLMs for privacy and control, and actually understands the cont...
2025-06-26T16:15:09
https://i.redd.it/qiuq20a1pa9f1.gif
One_Negotiation_2078
i.redd.it
1970-01-01T00:00:00
0
{}
1ll49jc
false
null
t3_1ll49jc
/r/LocalLLaMA/comments/1ll49jc/my_python_ai_dev_tool_avakin_local_llms/
false
false
default
27
{'enabled': True, 'images': [{'id': 'qiuq20a1pa9f1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/qiuq20a1pa9f1.gif?width=108&crop=smart&format=png8&s=333f01598941b0247a6581f7e8c19ea86d4a9937', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/qiuq20a1pa9f1.gif?width=216&crop=smart&format...
NotebookLM explaining Sparsity in LLMs using Deja Vu & LLM in a Flash
9
We ran an experiment with NotebookLM where we fed it: * Context from our GitHub repo * Two key papers: Deja Vu and LLM in a Flash * Comments and community insights from LocaLLaMA reddit discussion It is surprisingly clear and digestible podcast on sparsity, memory access patterns, and efficient inference in LLMs. W...
2025-06-26T16:09:21
https://open.spotify.com/episode/0540o6A17BhyHkJwFOFd89?si=vjlIj_eZRYqjHDytPux9sQ
Economy-Mud-6626
open.spotify.com
1970-01-01T00:00:00
0
{}
1ll442i
false
null
t3_1ll442i
/r/LocalLLaMA/comments/1ll442i/notebooklm_explaining_sparsity_in_llms_using_deja/
false
false
https://external-preview…c94ae0f25a7dbb04
9
{'enabled': False, 'images': [{'id': 'qv-trUgr_F5dUKSisR1EF7whOER7-4P323ECjDOJaU0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/qv-trUgr_F5dUKSisR1EF7whOER7-4P323ECjDOJaU0.jpeg?width=108&crop=smart&auto=webp&s=7651dc1827b40bae7f734146ee5a907018580342', 'width': 108}, {'height': 216, 'url': ...
gemma 3n has been released on huggingface
431
[https://huggingface.co/google/gemma-3n-E2B](https://huggingface.co/google/gemma-3n-E2B) [https://huggingface.co/google/gemma-3n-E2B-it](https://huggingface.co/google/gemma-3n-E2B-it) [https://huggingface.co/google/gemma-3n-E4B](https://huggingface.co/google/gemma-3n-E4B) [https://huggingface.co/google/gemma-3n-E4B-...
2025-06-26T16:07:25
https://www.reddit.com/r/LocalLLaMA/comments/1ll429p/gemma_3n_has_been_released_on_huggingface/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll429p
false
null
t3_1ll429p
/r/LocalLLaMA/comments/1ll429p/gemma_3n_has_been_released_on_huggingface/
false
false
self
431
{'enabled': False, 'images': [{'id': '2lP-GN54cGmOIsbXNzAw711WUlBuh5xp-z-S27FbNXY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2lP-GN54cGmOIsbXNzAw711WUlBuh5xp-z-S27FbNXY.png?width=108&crop=smart&auto=webp&s=2f4e42b4fa204710ccedd3cdaef109fde9142520', 'width': 108}, {'height': 116, 'url': 'h...
I built an MCP that finally makes your local AI models shine with SQL
20
Hey r/LocalLLaMA 👋 I'm a huge fan of using local AI models for queries & analytics, but my workflow has been quite painful. I feel like SQL tools never work as intended, and I spend half my day just copy-pasting schemas and table info into the context. I got so fed up with this that I decided to build [ToolFront](http...
2025-06-26T15:54:38
https://i.redd.it/2h8s7lagma9f1.png
Durovilla
i.redd.it
1970-01-01T00:00:00
0
{}
1ll3qej
false
null
t3_1ll3qej
/r/LocalLLaMA/comments/1ll3qej/i_built_an_mcp_that_finally_makes_your_local_ai/
false
false
https://external-preview…6ff81803caf3d198
20
{'enabled': True, 'images': [{'id': 'BiWnwJFD2HoHn5WG2OhNGxxWvlKK0waOiN1oUJoWE88', 'resolutions': [{'height': 23, 'url': 'https://preview.redd.it/2h8s7lagma9f1.png?width=108&crop=smart&auto=webp&s=838e9057d8f818809b45fa4def593c42fded6754', 'width': 108}, {'height': 47, 'url': 'https://preview.redd.it/2h8s7lagma9f1.png?...
Anubis 70B v1.1 - Just another RP tune... unlike any other L3.3! (allegedly) A breath of fresh prose and lack of positivity (YMMV ofc) + bonus Fallen 70B for mergefuel! (because tuners aren't limited to RP)
27
Did you like Fallen R1? Here's the non-R1 version: [https://huggingface.co/TheDrummer/Fallen-Llama-3.3-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-70B-v1) Enjoy the mergefuel!
2025-06-26T15:46:44
https://huggingface.co/TheDrummer/Anubis-70B-v1.1
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1ll3j07
false
null
t3_1ll3j07
/r/LocalLLaMA/comments/1ll3j07/anubis_70b_v11_just_another_rp_tune_unlike_any/
false
false
default
27
{'enabled': False, 'images': [{'id': '_eIF0Xo1buph34Tuk-bXjGb0GyE839b8Ocdfqz4UUok', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_eIF0Xo1buph34Tuk-bXjGb0GyE839b8Ocdfqz4UUok.png?width=108&crop=smart&auto=webp&s=7cd9d5b4ab5402820ec24f9ccc1f4518d5bdac5f', 'width': 108}, {'height': 116, 'url': 'h...
FLUX.1 Kontext [dev] - an open weights model for proprietary-level image editing performance.
393
weights: [https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev) release news: [https://x.com/bfl\_ml/status/1938257909726519640](https://x.com/bfl_ml/status/1938257909726519640)
2025-06-26T15:35:49
https://www.reddit.com/r/LocalLLaMA/comments/1ll38zu/flux1_kontext_dev_an_open_weights_model_for/
ApprehensiveAd3629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll38zu
false
null
t3_1ll38zu
/r/LocalLLaMA/comments/1ll38zu/flux1_kontext_dev_an_open_weights_model_for/
false
false
self
393
{'enabled': False, 'images': [{'id': 'pEPHFg7yJbWyIk0w0OkPks6RLik8idr1RbqSAmxGVq8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pEPHFg7yJbWyIk0w0OkPks6RLik8idr1RbqSAmxGVq8.png?width=108&crop=smart&auto=webp&s=355c2573c96fc9c23faaa614495bef0686fde982', 'width': 108}, {'height': 116, 'url': 'h...
I rebuilt Google's Gemini CLI system prompt with better engineering practices
14
## TL;DR Google's Gemini CLI system prompt is publicly available but it's a monolithic mess. I refactored it into a maintainable, modular architecture that preserves all functionality while making it actually usable for the rest of us. ## The Problem Google's official Gemini CLI system prompt ([prompts.ts](https://g...
2025-06-26T15:30:20
https://www.reddit.com/r/LocalLLaMA/comments/1ll340q/i_rebuilt_googles_gemini_cli_system_prompt_with/
PsiACE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll340q
false
null
t3_1ll340q
/r/LocalLLaMA/comments/1ll340q/i_rebuilt_googles_gemini_cli_system_prompt_with/
false
false
self
14
{'enabled': False, 'images': [{'id': 'TCffS_Kskx_Of4tKzDPQtJFViIUD-EMDvG3g7XqIOVA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TCffS_Kskx_Of4tKzDPQtJFViIUD-EMDvG3g7XqIOVA.png?width=108&crop=smart&auto=webp&s=80d4978dc35409de38a43dce9d882a2739e3ed76', 'width': 108}, {'height': 108, 'url': 'h...
From "LangGraph is trash" to "pip install langgraph": A Stockholm Syndrome Story
84
Listen, I get it. We all hate LangGraph. The documentation reads like it was written by someone explaining quantum mechanics to their dog. The examples are either "Hello World" or "Here's how to build AGI, figure out the middle part yourself." But I was different. I was going to be the hero LocalLlama needed. "LangGr...
2025-06-26T15:28:08
https://www.reddit.com/r/LocalLLaMA/comments/1ll321h/from_langgraph_is_trash_to_pip_install_langgraph/
FailingUpAllDay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll321h
false
null
t3_1ll321h
/r/LocalLLaMA/comments/1ll321h/from_langgraph_is_trash_to_pip_install_langgraph/
false
false
self
84
null
Deepseek V3 0324 vs R1 0528 for coding tasks.
14
I tested both locally with Java and JS coding tasks, each with the largest version I can accommodate on my system, unsloth Q3-XL-UD (almost 300GB), following the recommended settings for coding: temp 0 for V3 and 0.6 for R1. To my surprise, I find V3 makes fewer mistakes and generates better code for me. I hav...
2025-06-26T15:03:50
https://www.reddit.com/r/LocalLLaMA/comments/1ll2fyh/deepseek_v3_0324_vs_r1_0528_for_coding_tasks/
ciprianveg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll2fyh
false
null
t3_1ll2fyh
/r/LocalLLaMA/comments/1ll2fyh/deepseek_v3_0324_vs_r1_0528_for_coding_tasks/
false
false
self
14
null
1 9070XT vs 2 9060XT
2
Basically, I was thinking that at the price of one 9070XT, I can get two 9060XTs where I stay. I have a few questions about this. Please help me with those. - Is it feasible? (For LLM use and image gen) - What will its drawbacks be? - Will the 32GB of VRAM be used properly? - Any additional things I should know about this ...
2025-06-26T15:02:31
https://www.reddit.com/r/LocalLLaMA/comments/1ll2epp/1_9070xt_vs_2_9060xt/
Friendly-Gur-3289
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll2epp
false
null
t3_1ll2epp
/r/LocalLLaMA/comments/1ll2epp/1_9070xt_vs_2_9060xt/
false
false
self
2
null
In RAG systems, who's really responsible for hallucination... the model, the retriever, or the data?
1
I've been thinking a lot about how we define and evaluate hallucinations in Retrieval-Augmented Generation (RAG) setups. Let’s say a model "hallucinates", but it turns out the retrieved context, although semantically similar, was factually wrong or irrelevant. Is that really the model’s fault? Or is the failure in: 1....
2025-06-26T14:45:22
https://www.reddit.com/r/LocalLLaMA/comments/1ll1z0j/in_rag_systems_whos_really_responsible_for/
Fredthedeve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll1z0j
false
null
t3_1ll1z0j
/r/LocalLLaMA/comments/1ll1z0j/in_rag_systems_whos_really_responsible_for/
false
false
self
1
null
LLM Tuning Method 12,000x more efficient than full fine-tuning and 30% faster than LoRA 🚀
115
Paper Link: https://huggingface.co/papers/2506.16406 Project Link: https://jerryliang24.github.io/DnD/
2025-06-26T14:44:50
https://www.reddit.com/gallery/1ll1yjh
Additional_Top1210
reddit.com
1970-01-01T00:00:00
0
{}
1ll1yjh
false
null
t3_1ll1yjh
/r/LocalLLaMA/comments/1ll1yjh/llm_tuning_method_12000x_more_efficient_than_full/
false
false
https://external-preview…1ad9334352952ab6
115
{'enabled': True, 'images': [{'id': 'GPs8oonK03Al4q6HtUFhFxh4J-39nPu_HZOBEQOCcn8', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/GPs8oonK03Al4q6HtUFhFxh4J-39nPu_HZOBEQOCcn8.jpeg?width=108&crop=smart&auto=webp&s=673af10f907c6b6a74038ea676ba232a29d26127', 'width': 108}, {'height': 132, 'url': 'h...
2 GPU's: Cuda + Vulkan - llama.cpp build setup
5
What's the best approach to building llama.cpp to support 2 GPUs simultaneously? Should I use Vulkan for both?
2025-06-26T14:43:32
https://www.reddit.com/r/LocalLLaMA/comments/1ll1xdj/2_gpus_cuda_vulkan_llamacpp_build_setup/
Ok-Panda-78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll1xdj
false
null
t3_1ll1xdj
/r/LocalLLaMA/comments/1ll1xdj/2_gpus_cuda_vulkan_llamacpp_build_setup/
false
false
self
5
null
9070XT Rocm ollama
2
Hi guys, do you know if the 9070XT supports Ollama now? I’ve been waiting for some time, and if it works, I’ll get it set up today.
2025-06-26T14:22:05
https://www.reddit.com/r/LocalLLaMA/comments/1ll1eeh/9070xt_rocm_ollama/
Ok-Internal9317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll1eeh
false
null
t3_1ll1eeh
/r/LocalLLaMA/comments/1ll1eeh/9070xt_rocm_ollama/
false
false
self
2
null
Feeding it text messages
4
Has anyone fed Khoj (or another local LLM) a huge amount of personal chat history, like say, years of iMessages? I’m wondering if there’s some recommended pre-processing or any other tips people may have from personal experience? I’m building an app to help me ~~argue~~ text better with my partner. It’s working well, ...
2025-06-26T14:17:05
https://www.reddit.com/r/LocalLLaMA/comments/1ll1a1o/feeding_it_text_messages/
eRetArDeD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll1a1o
false
null
t3_1ll1a1o
/r/LocalLLaMA/comments/1ll1a1o/feeding_it_text_messages/
false
false
self
4
null
We will build a comprehensive collection of data quality projects
2
We will build a comprehensive collection of data quality projects: [https://github.com/MigoXLab/awesome-data-quality](https://github.com/MigoXLab/awesome-data-quality) — you are welcome to contribute with us.
2025-06-26T14:13:59
https://www.reddit.com/r/LocalLLaMA/comments/1ll17e0/we_will_build_a_comprehensive_collection_of_data/
chupei0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll17e0
false
null
t3_1ll17e0
/r/LocalLLaMA/comments/1ll17e0/we_will_build_a_comprehensive_collection_of_data/
false
false
self
2
{'enabled': False, 'images': [{'id': 'oXlf24g_7BXxtTJqGi5qCPV2cNUFDjKLqHGaMGpSSKM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oXlf24g_7BXxtTJqGi5qCPV2cNUFDjKLqHGaMGpSSKM.png?width=108&crop=smart&auto=webp&s=dd1721c615330653abcca99fd7e2ddfd525a39d1', 'width': 108}, {'height': 108, 'url': 'h...
Day 4 of 50 Days of Building a Small Language Model from Scratch — Understanding Byte Pair Encoding (BPE) Tokenizer
19
*So far, we’ve explored what a tokenizer is and even built our own from scratch. However, one of the key limitations of building a custom tokenizer is handling unknown or rare words. This is where advanced tokenizers like OpenAI’s tiktoken, which uses Byte Pair Encoding (BPE), real...
2025-06-26T13:38:53
https://www.reddit.com/r/LocalLLaMA/comments/1ll0e5d/day_4_of_50_days_of_building_a_small_language/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll0e5d
false
null
t3_1ll0e5d
/r/LocalLLaMA/comments/1ll0e5d/day_4_of_50_days_of_building_a_small_language/
false
false
self
19
{'enabled': False, 'images': [{'id': 'eMGOFT-dCyqrcGU8o4sNWdjVcmCnEWFc2iYXpXWsCCc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eMGOFT-dCyqrcGU8o4sNWdjVcmCnEWFc2iYXpXWsCCc.png?width=108&crop=smart&auto=webp&s=3b71c21d9722a42e30bdbd2120d95e8ac1f5d808', 'width': 108}, {'height': 108, 'url': 'h...
I am making an AI batteries included Web Framework (like Django but for AI)
0
I started [Robyn](https://github.com/sparckles/Robyn) four years ago because I wanted something like Flask, but really fast and async-native - without giving up the simplicity.  But over the last two years, it became obvious: I was duct taping a lot of AI frameworks with existing web frameworks. We’ve been forcing ag...
2025-06-26T13:38:34
https://www.reddit.com/r/LocalLLaMA/comments/1ll0dw1/i_am_making_an_ai_batteries_included_web/
stealthanthrax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ll0dw1
false
null
t3_1ll0dw1
/r/LocalLLaMA/comments/1ll0dw1/i_am_making_an_ai_batteries_included_web/
false
false
self
0
{'enabled': False, 'images': [{'id': 'OaR0XSrLePLh2DghiieSWQl7vupZONqOx5W6BQlGdn4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OaR0XSrLePLh2DghiieSWQl7vupZONqOx5W6BQlGdn4.png?width=108&crop=smart&auto=webp&s=5d5d90a5087d22d99701736f0381b1b78d96c221', 'width': 108}, {'height': 108, 'url': 'h...
The Real Performance Penalty of GPU Passthrough into a VM (It's... boring)
191
Running GPUs in virtual machines for AI workloads is quickly becoming the golden standard - especially for isolation, orchestration, and multi-tenant setups. So I decided to measure the actual performance penalty of this approach. I benchmarked some LLMs (via ollama-benchmark) on an AMD RX 9060 XT 16GB - first on bare...
2025-06-26T13:19:50
https://www.reddit.com/gallery/1lkzynl
aospan
reddit.com
1970-01-01T00:00:00
0
{}
1lkzynl
false
null
t3_1lkzynl
/r/LocalLLaMA/comments/1lkzynl/the_real_performance_penalty_of_gpu_passthrough/
false
false
https://external-preview…4eaab647a3fa7dc9
191
{'enabled': True, 'images': [{'id': '1wJhDztWCANroswcLW3p5i3oMCiTskJ82JKTdTfiCRM', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/1wJhDztWCANroswcLW3p5i3oMCiTskJ82JKTdTfiCRM.jpeg?width=108&crop=smart&auto=webp&s=43f0bc2ac3e3685b3f57c530c364f5c7b3241703', 'width': 108}, {'height': 130, 'url': 'h...
I built an AI Home Assistant with EPC32 and I2S. It works with local models and has my personal context / tools. It’s also helping me become a better Redditor
36
I have an iPhone, and holding the side button always activates Siri... which I'm not crazy about. I tried using back-tap to open ChatGPT, but it takes too long, and it's inconsistent. Wired up a quick circuit to immediately interact with language models of my choice (along with my data / integrations)
2025-06-26T13:08:22
https://v.redd.it/kkt198rdt99f1
zuluana
/r/LocalLLaMA/comments/1lkzpdc/i_built_an_ai_home_assistant_with_epc32_and_i2s/
1970-01-01T00:00:00
0
{}
1lkzpdc
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kkt198rdt99f1/DASHPlaylist.mpd?a=1753664910%2CNTc4MGZlMDc1MTJkMGFmMzczNTFmNjE4NTU0MDNlZTVlNDJhYzdmOTk5YzU0YTEyZWUzYTE2N2Y5YTc2ODZkMQ%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/kkt198rdt99f1/DASH_1080.mp4?source=fallback', 'h...
t3_1lkzpdc
/r/LocalLLaMA/comments/1lkzpdc/i_built_an_ai_home_assistant_with_epc32_and_i2s/
false
false
https://external-preview…bf3a8337b5e10e87
36
{'enabled': False, 'images': [{'id': 'dHdxbjd0Z2R0OTlmMYJTg58zegrAzYwLDecY21tQ6Q7YMhgJ9y6C6hMRxDnx', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/dHdxbjd0Z2R0OTlmMYJTg58zegrAzYwLDecY21tQ6Q7YMhgJ9y6C6hMRxDnx.png?width=108&crop=smart&format=pjpg&auto=webp&s=d9d3741b89e7f551e58015a49ad6219b27e1...
Meta wins AI copyright lawsuit as US judge rules against authors | Meta
324
2025-06-26T12:35:26
https://www.theguardian.com/technology/2025/jun/26/meta-wins-ai-copyright-lawsuit-as-us-judge-rules-against-authors
swagonflyyyy
theguardian.com
1970-01-01T00:00:00
0
{}
1lkz0hg
false
null
t3_1lkz0hg
/r/LocalLLaMA/comments/1lkz0hg/meta_wins_ai_copyright_lawsuit_as_us_judge_rules/
false
false
default
324
{'enabled': False, 'images': [{'id': 'P24oFDRu9fwfx1j87kht5i8PPJV3CyEIC0aLVuyN_0U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/P24oFDRu9fwfx1j87kht5i8PPJV3CyEIC0aLVuyN_0U.jpeg?width=108&crop=smart&auto=webp&s=9d94dcb8c151b4912e761aa907f10f409b0549ba', 'width': 108}, {'height': 113, 'url': '...
[Discussion] Tavkhid-Method: Prompt-based memory injection bypassing 128K token limit in DeepSeek R1
1
Hi everyone, I’m Tavkhid Nataev, an independent researcher. I’ve discovered a method to simulate persistent memory in DeepSeek-R1 by injecting JSON-encoded instructions and controlling context behavior through prompt engineering. This method, named **Tavkhid-Method**, uses dialog ID-based JSON containers, base64-payl...
2025-06-26T12:29:39
https://www.reddit.com/r/LocalLLaMA/comments/1lkyw69/discussion_tavkhidmethod_promptbased_memory/
Ecstatic-Dance-1498
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lkyw69
false
null
t3_1lkyw69
/r/LocalLLaMA/comments/1lkyw69/discussion_tavkhidmethod_promptbased_memory/
false
false
self
1
null
Just Picked up a 16" M3 Pro 36GB MacBook Pro for $1,250. What should I run?
3
Just picked up a 16" M3 Pro MacBook Pro with 36GB RAM for $1990 AUD (around $1250 USD). Was planning on getting a higher-spec 16" (64 or 96GB model) but couldn't pass on this deal. Pulled up LM Studio and got Qwen3 32B running at around 7-8 tok/s and Gemma3 12B @ 17-18 tok/s. What are the best models people are running at ...
2025-06-26T12:25:33
https://www.reddit.com/r/LocalLLaMA/comments/1lkytbg/just_picked_up_a_16_m3_pro_36gb_macbook_pro_for/
mentalasf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lkytbg
false
null
t3_1lkytbg
/r/LocalLLaMA/comments/1lkytbg/just_picked_up_a_16_m3_pro_36gb_macbook_pro_for/
false
false
self
3
null