Column schema (name: dtype, range or classes):

title: stringlengths (1 to 300)
score: int64 (0 to 8.54k)
selftext: stringlengths (0 to 41.5k)
created: timestamp[ns]date (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
url: stringlengths (0 to 878)
author: stringlengths (3 to 20)
domain: stringlengths (0 to 82)
edited: timestamp[ns]date (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
gilded: int64 (0 to 2)
gildings: stringclasses (7 values)
id: stringlengths (7 to 7)
locked: bool (2 classes)
media: stringlengths (646 to 1.8k)
name: stringlengths (10 to 10)
permalink: stringlengths (33 to 82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: stringlengths (4 to 213)
ups: int64 (0 to 8.54k)
preview: stringlengths (301 to 5.01k)
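The schema above maps each dump row onto a fixed 20-field record. A minimal sketch of that record type in Python — the `Post` dataclass name and its field subset are illustrative, with sample values drawn from rows in this dump:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    # illustrative subset of the 20 columns described above
    title: str
    score: int
    author: str
    id: str
    created: datetime
    locked: bool

# two rows taken from the records below
posts = [
    Post("KoboldCpp v1.95 with Flux Kontext support", 189, "HadesThrowaway",
         "1lnfl21", datetime.fromisoformat("2025-06-29T14:09:23"), False),
    Post("Transformer ASIC 500k tokens/s", 200, "tvmaly",
         "1lmz4kf", datetime.fromisoformat("2025-06-28T22:26:25"), False),
]

# e.g. the highest-scoring post in the sample
best = max(posts, key=lambda p: p.score)
```

Note that `score` and `ups` carry the same range in the schema, and `id` is a fixed 7-character string that reappears prefixed as `t3_` in the `name` field.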
Detecting if an image contains a table, performance comparison
1
Hello, I'm building a tool that integrates table extraction from images. I already have the main flow working with AWS Textract: convert table images to an HTML table and pass it to the LLM to answer questions. My question is about the step before that: I need to be able to detect if a passed imag...
2025-06-29T15:19:56
https://www.reddit.com/r/LocalLLaMA/comments/1lnh84u/detecting_if_an_image_contains_a_table/
Gr33nLight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnh84u
false
null
t3_1lnh84u
/r/LocalLLaMA/comments/1lnh84u/detecting_if_an_image_contains_a_table/
false
false
self
1
null
DeepSeek-R1 70B jailbreaks are all ineffective. Is there a better way?
5
I've got [DeepSeek's distilled 70B model](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) running locally. However, every jailbreak I can find to make it ignore its content restrictions/policies fails, or is woefully inconsistent at best. Methods I've tried: * "Untrammelled assistant": [link](https:...
2025-06-29T15:14:22
https://www.reddit.com/r/LocalLLaMA/comments/1lnh3d8/deepseekr1_70b_jailbreaks_are_all_ineffective_is/
RoIIingThunder3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnh3d8
false
null
t3_1lnh3d8
/r/LocalLLaMA/comments/1lnh3d8/deepseekr1_70b_jailbreaks_are_all_ineffective_is/
false
false
self
5
{'enabled': False, 'images': [{'id': '3rNEZTZiRqGh8wyaYpOywoTLqIM6lxbW7aeh0PBdgSs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3rNEZTZiRqGh8wyaYpOywoTLqIM6lxbW7aeh0PBdgSs.png?width=108&crop=smart&auto=webp&s=d01087ca617bc576591bb1093f7fa5f5bd8a1dc9', 'width': 108}, {'height': 116, 'url': 'h...
I will automate your business using ai and smart workflows
1
[removed]
2025-06-29T15:08:18
https://www.reddit.com/r/LocalLLaMA/comments/1lngy3q/i_will_automate_your_business_using_ai_and_smart/
Rome_Z
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lngy3q
false
null
t3_1lngy3q
/r/LocalLLaMA/comments/1lngy3q/i_will_automate_your_business_using_ai_and_smart/
false
false
self
1
null
KoboldCpp v1.95 with Flux Kontext support
189
Flux Kontext is a relatively new open weights model based on Flux that can **edit images using natural language**. Easily replace backgrounds, edit text, or add extra items into your images. With the release of KoboldCpp v1.95, Flux Kontext support has been added to KoboldCpp! No need for any installation or complicat...
2025-06-29T14:09:23
https://www.reddit.com/r/LocalLLaMA/comments/1lnfl21/koboldcpp_v195_with_flux_kontext_support/
HadesThrowaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnfl21
false
null
t3_1lnfl21
/r/LocalLLaMA/comments/1lnfl21/koboldcpp_v195_with_flux_kontext_support/
false
false
https://b.thumbs.redditm…4szhxO44ZpMU.jpg
189
null
Which GPU to upgrade from 1070?
0
Quick question: which GPU should I buy to run local LLMs that won't ruin my budget? 🥲 Currently running an NVIDIA 1070 with 8GB VRAM. Qwen3:8b runs fine, but models of this size seem a bit dumb compared to everything above that (and everything above won't run on it, or runs slow as hell) 🤣 I'd love to use it f...
2025-06-29T14:00:04
https://www.reddit.com/r/LocalLLaMA/comments/1lnfdch/which_gpu_to_upgrade_from_1070/
TjFr00
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnfdch
false
null
t3_1lnfdch
/r/LocalLLaMA/comments/1lnfdch/which_gpu_to_upgrade_from_1070/
false
false
self
0
null
Is Yann LeCun Changing Directions? - Prediction using VAEs for World Model
128
I am a huge fan of Yann LeCun and follow all his work very closely, especially the world model concept, which I love. I just finished reading **“Whole-Body Conditioned Egocentric Video Prediction” -** the new FAIR/Berkeley paper with Yann LeCun listed as lead author. The whole pipeline looks like this: 1. **Frame c...
2025-06-29T13:52:29
https://i.redd.it/cutzsrmpfv9f1.png
Desperate_Rub_1352
i.redd.it
1970-01-01T00:00:00
0
{}
1lnf7eo
false
null
t3_1lnf7eo
/r/LocalLLaMA/comments/1lnf7eo/is_yann_lecun_changing_directions_prediction/
false
false
default
128
{'enabled': True, 'images': [{'id': 'cutzsrmpfv9f1', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/cutzsrmpfv9f1.png?width=108&crop=smart&auto=webp&s=21f7effdfbe75e5035dbb7b9ac19f15ee90a4d6d', 'width': 108}, {'height': 261, 'url': 'https://preview.redd.it/cutzsrmpfv9f1.png?width=216&crop=smart&auto=we...
GUI for Writing Long Stories with LLMs?
16
I'm looking for a GUI that can assist in writing long stories, similar to Perchance's story generator. Perchance allows you to write what happens next, generates the subsequent passage, and provides summaries of previous passages to keep everything within the context window. I'm wondering if there are any similar progr...
2025-06-29T13:42:45
https://www.reddit.com/r/LocalLLaMA/comments/1lnf00q/gui_for_writing_long_stories_with_llms/
BlacksmithRadiant322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnf00q
false
null
t3_1lnf00q
/r/LocalLLaMA/comments/1lnf00q/gui_for_writing_long_stories_with_llms/
false
false
self
16
null
What's the best way to summarize or chat with website content?
1
I'm using kobold and it would be nice if my Firefox browser could talk with it.
2025-06-29T13:32:52
https://www.reddit.com/r/LocalLLaMA/comments/1lnesft/whats_the_best_way_to_summarize_or_chat_with/
Sandzaun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnesft
false
null
t3_1lnesft
/r/LocalLLaMA/comments/1lnesft/whats_the_best_way_to_summarize_or_chat_with/
false
false
self
1
null
What is the best open source TTS model with multi language support?
43
I'm currently developing an addon for Anki (an open source flashcard software). One part of my plan is to integrate an option to generate audio samples based on the preexisting content of the flashcards (for language learning). The point of it is using a local TTS model that doesn't require any paid services or APIs. T...
2025-06-29T13:20:44
https://www.reddit.com/r/LocalLLaMA/comments/1lnejb6/what_is_the_best_open_source_tts_model_with_multi/
Anxietrap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnejb6
false
null
t3_1lnejb6
/r/LocalLLaMA/comments/1lnejb6/what_is_the_best_open_source_tts_model_with_multi/
false
false
self
43
null
I built Coretx to manage AI amnesia - 90 second demo
0
Do you get tired of re-explaining things when switching between AIs, or returning to one later? I did, so I built Coretx, and now I don't work without it. AIs connect via MCP; it can import from Claude/ChatGPT and runs completely locally with encrypted storage. No sign-up required. I've been using it while building it for...
2025-06-29T13:10:02
https://getcoretx.com
nontrepreneur_
getcoretx.com
1970-01-01T00:00:00
0
{}
1lneb9h
false
null
t3_1lneb9h
/r/LocalLLaMA/comments/1lneb9h/i_built_coretx_to_manage_ai_amnesia_90_second_demo/
false
false
default
0
null
Mistral Small 3.2 can't generate tables, and stops generation altogether
10
``` ### Text Analysis #### 📌 **Introduction** The text analyzes the life trajectories of three Bangladeshi individuals, exploring how mobility and immobility are shaped by external powers, such as bureaucratic-police apparatuses and economic forces. The subjects studied are defined as "probashi", a term...
2025-06-29T12:35:38
https://www.reddit.com/r/LocalLLaMA/comments/1lndmzj/mistral_small_32_cant_generate_tables_and_stops/
MQuarneti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lndmzj
false
null
t3_1lndmzj
/r/LocalLLaMA/comments/1lndmzj/mistral_small_32_cant_generate_tables_and_stops/
false
false
self
10
null
Windows vs Linux (Ubuntu) for LLM-GenAI work/research.
0
Based on my research, Linux is the "best OS" for LLM work (local GPU etc.). Although I'm a dev, the constant problems with Linux (drivers, apps crashing, apps not working at all) waste my time instead of letting me focus on work. Also, some business apps, VPNs, etc. don't work; the constant problems are leading the "work" ...
2025-06-29T12:03:39
https://www.reddit.com/r/LocalLLaMA/comments/1lnd1su/windows_vs_linux_ubuntu_for_llmgenai_workresearch/
Direct_Dimension_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnd1su
false
null
t3_1lnd1su
/r/LocalLLaMA/comments/1lnd1su/windows_vs_linux_ubuntu_for_llmgenai_workresearch/
false
false
self
0
null
12B Q5_K_M or 22B Q4_K_S
0
Hey, I've got a question: which will be better for RP, 12B Q5_K_M or 22B Q4_K_S? Also, what are your thoughts on Q3 quants in the 22-24B range?
2025-06-29T11:59:01
https://www.reddit.com/r/LocalLLaMA/comments/1lncymd/12b_q5_k_m_or_22b_q4_k_s/
Familiar_Passion_827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lncymd
false
null
t3_1lncymd
/r/LocalLLaMA/comments/1lncymd/12b_q5_k_m_or_22b_q4_k_s/
false
false
self
0
null
Seems I was informed (incorrectly) that Ollama had very little censorship--at least it finally stopped apologizing.
0
2025-06-29T11:28:36
https://i.redd.it/m3h8ri6epu9f1.jpeg
PaulAtLast
i.redd.it
1970-01-01T00:00:00
0
{}
1lncfmw
false
null
t3_1lncfmw
/r/LocalLLaMA/comments/1lncfmw/seems_i_was_informed_incorrectly_that_ollama_had/
false
false
default
0
{'enabled': True, 'images': [{'id': 'm3h8ri6epu9f1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/m3h8ri6epu9f1.jpeg?width=108&crop=smart&auto=webp&s=056680d80e9d387b41f004664f374680af4ef664', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/m3h8ri6epu9f1.jpeg?width=216&crop=smart&auto=we...
How to teach AI to read a complete guide/manual/help website to ask questions about it?
0
I am trying to figure out how to teach AI to read help websites about software, like [Obsidian Help](https://help.obsidian.md/), the [Python Dev Guide](https://devguide.python.org/), the [Kdenlive Manual](https://docs.kdenlive.org/en/), or other guides/manuals/help websites. My goal is to solve problems more efficient...
2025-06-29T10:48:08
https://www.reddit.com/r/LocalLLaMA/comments/1lnbru7/how_to_teach_ai_to_read_a_complete/
utopify_org
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnbru7
false
null
t3_1lnbru7
/r/LocalLLaMA/comments/1lnbru7/how_to_teach_ai_to_read_a_complete/
false
false
self
0
null
Intelligent decisioning for small language model training and serving platform
0
I am working on a platform where users can fine-tune and run inference on language models with a few simple clicks. How can I introduce intelligent decisioning into this? For example, I can recommend the best possible model based on the task, trainers based on task type, etc. What other components could be introduced?
2025-06-29T09:22:37
https://www.reddit.com/r/LocalLLaMA/comments/1lnahfy/intelligent_decisioning_for_small_language_model/
Sensitive_Flight_979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnahfy
false
null
t3_1lnahfy
/r/LocalLLaMA/comments/1lnahfy/intelligent_decisioning_for_small_language_model/
false
false
self
0
null
Why the local Llama-3.2-1B-Instruct is not as smart as the one provided on Hugging Face?
6
On the website of [https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct), there is an "Inference Providers" section where I can chat with Llama-3.2-1B-Instruct. It gives reasonable responses like the following. https://preview.redd.it/r7n08nqxzt9f1.png?width=...
2025-06-29T09:13:12
https://www.reddit.com/r/LocalLLaMA/comments/1lnacbb/why_the_local_llama321binstruct_is_not_as_smart/
OkLengthiness2286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnacbb
false
null
t3_1lnacbb
/r/LocalLLaMA/comments/1lnacbb/why_the_local_llama321binstruct_is_not_as_smart/
false
false
https://b.thumbs.redditm…j03I3uq4KqLA.jpg
6
{'enabled': False, 'images': [{'id': 'RiiSR8W28H0Xiz1FF_p6kKoYDp7PN9LmuOIWUiMJBCs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RiiSR8W28H0Xiz1FF_p6kKoYDp7PN9LmuOIWUiMJBCs.png?width=108&crop=smart&auto=webp&s=b07a77e76d7e80c0852251f2c6141e975971ddeb', 'width': 108}, {'height': 116, 'url': 'h...
Is anyone here using Llama to code websites and apps? From my experience, it sucks
32
Looking at [some examples from Llama 4](https://www.designarena.ai/models/llama-4-maverick), it seems absolutely horrific at any kind of UI/UX. Also, on this [benchmark for UI/UX](https://www.designarena.ai/leaderboard), Llama 4 Maverick and Llama 4 Scout sit in the bottom 25% when compared to other models such as GPT, ...
2025-06-29T07:48:53
https://www.reddit.com/r/LocalLLaMA/comments/1ln93o3/is_anyone_here_using_llama_to_code_websites_and/
Accomplished-Copy332
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln93o3
false
null
t3_1ln93o3
/r/LocalLLaMA/comments/1ln93o3/is_anyone_here_using_llama_to_code_websites_and/
false
false
self
32
{'enabled': False, 'images': [{'id': 'VWTM0rHJfQzEfowPuYqfBaNAz2NOKVzZAXKVZ11QEDo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/VWTM0rHJfQzEfowPuYqfBaNAz2NOKVzZAXKVZ11QEDo.png?width=108&crop=smart&auto=webp&s=665dd32e0413a097a6bd53f03dc03b3053e8ba60', 'width': 108}, {'height': 216, 'url': '...
Just a lame site I made for fun
1
[removed]
2025-06-29T07:44:53
https://www.reddit.com/r/LocalLLaMA/comments/1ln91iv/just_a_lame_site_i_made_for_fun/
New_Bumblebee8014
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln91iv
false
null
t3_1ln91iv
/r/LocalLLaMA/comments/1ln91iv/just_a_lame_site_i_made_for_fun/
false
false
self
1
null
AI-powered financial analysis tool created an amazing S&P 500 report website
1
[removed]
2025-06-29T07:44:09
https://www.reddit.com/r/LocalLLaMA/comments/1ln914p/aipowered_financial_analysis_tool_created_an/
New_Bumblebee8014
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln914p
false
null
t3_1ln914p
/r/LocalLLaMA/comments/1ln914p/aipowered_financial_analysis_tool_created_an/
false
false
self
1
null
LM Studio vision models???
15
Okay, so I'm brand new to local LLMs, and as such I'm using LM Studio since it's easy to use. But the thing is, I need to use vision models, and while LM Studio has some, for the most part every one I try doesn't actually allow me to upload images. I'm mainly trying to use uncensored models, so the main staff-pi...
2025-06-29T07:32:07
https://www.reddit.com/r/LocalLLaMA/comments/1ln8uqb/lm_studio_vision_models/
BP_Ray
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln8uqb
false
null
t3_1ln8uqb
/r/LocalLLaMA/comments/1ln8uqb/lm_studio_vision_models/
false
false
self
15
null
I made a writing assistant Chrome extension. Completely free with Gemini Nano.
118
2025-06-29T06:20:00
https://v.redd.it/2f6200d67t9f1
WordyBug
v.redd.it
1970-01-01T00:00:00
0
{}
1ln7rll
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2f6200d67t9f1/DASHPlaylist.mpd?a=1753770015%2CNGQ2ZDg0YTJiODM3NWMwNWQ5MmMyODE1YzVmMTI2ZjhmODhhMDNmMTM1MjJiM2ZjYjU4ZWQ0ZDIyY2Q1MWVlOQ%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/2f6200d67t9f1/DASH_1080.mp4?source=fallback', 'h...
t3_1ln7rll
/r/LocalLLaMA/comments/1ln7rll/i_made_a_writing_assistant_chrome_extension/
false
false
https://external-preview…5ea6ef4b8100ad5b
118
{'enabled': False, 'images': [{'id': 'aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=108&crop=smart&format=pjpg&auto=webp&s=2d9e9b187109ca0c57cb6df63c140fd84f6ec...
Suggest me an Uncensored LLM and another LLM for Coding stuffs
0
I've recently installed **LM Studio** and planned to install an **uncensored LLM** and an **LLM for coding**. Right now **Dolphin 2.9 Llama3 8B** is not serving my purposes, as I wanted an **uncensored** model (screenshot attached). Please suggest a very good model for uncensored stuff and another for coding stuf...
2025-06-29T06:16:33
https://www.reddit.com/r/LocalLLaMA/comments/1ln7poe/suggest_me_an_uncensored_llm_and_another_llm_for/
Apprehensive_Cell_48
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln7poe
false
null
t3_1ln7poe
/r/LocalLLaMA/comments/1ln7poe/suggest_me_an_uncensored_llm_and_another_llm_for/
false
false
https://b.thumbs.redditm…KAFngbSP2_io.jpg
0
null
Training Open models on my data for replacing RAG
9
I have a RAG-based solution for search over my products and domain-knowledge data. We are currently using the OpenAI API for the search, but cost is slowly becoming a concern. I want to see if it would be a good idea to take a Llama model or some other open model and train it on our own data. Has anyone had success while ...
2025-06-29T04:06:25
https://www.reddit.com/r/LocalLLaMA/comments/1ln5l6b/training_open_models_on_my_data_for_replacing_rag/
help_all
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln5l6b
false
null
t3_1ln5l6b
/r/LocalLLaMA/comments/1ln5l6b/training_open_models_on_my_data_for_replacing_rag/
false
false
self
9
null
Is ReAct still the best prompt template?
6
Pretty much what the subject says ^^ Getting started with prompting a "naked" open-source LLM (Gemma 3) for function calling using a simple LangChain/Ollama setup in Python, and wondering what the best prompt is to maximize tool-calling accuracy.
2025-06-29T04:04:10
https://www.reddit.com/r/LocalLLaMA/comments/1ln5jqr/is_react_still_the_best_prompt_template/
Kooky-Net784
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln5jqr
false
null
t3_1ln5jqr
/r/LocalLLaMA/comments/1ln5jqr/is_react_still_the_best_prompt_template/
false
false
self
6
null
How do you evaluate and compare multiple LLMs (e.g., via OpenRouter) to test which one performs best?
4
Hey everyone! 👋 I'm working on a project that uses OpenRouter to analyze journal entries using different LLMs like `nousresearch/deephermes-3-llama-3-8b-preview`. Here's a snippet of the logic I'm using to get summaries and categorize entries by theme: `/ calls OpenRouter API, gets response, parses JSON output` `con...
2025-06-29T04:03:54
https://www.reddit.com/r/LocalLLaMA/comments/1ln5jli/how_do_you_evaluate_and_compare_multiple_llms_eg/
Vivid_Housing_7275
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln5jli
false
null
t3_1ln5jli
/r/LocalLLaMA/comments/1ln5jli/how_do_you_evaluate_and_compare_multiple_llms_eg/
false
false
self
4
null
Building a Coding Mentor Agent with LangChain + LangGraph + GPT-4o-mini
0
https://preview.redd.it/…DUA?usp=sharing)
2025-06-29T03:43:40
https://www.reddit.com/r/LocalLLaMA/comments/1ln56xd/building_a_coding_mentor_agent_with_langchain/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln56xd
false
null
t3_1ln56xd
/r/LocalLLaMA/comments/1ln56xd/building_a_coding_mentor_agent_with_langchain/
false
false
https://b.thumbs.redditm…-qKpdtbDVdqs.jpg
0
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': '...
Why do API releases of major models take so much time?
0
The API releases of most major models happened weeks or months after the model announcement. Why is that?
2025-06-29T03:10:22
https://www.reddit.com/r/LocalLLaMA/comments/1ln4m4u/why_does_api_release_of_major_models_takes_so/
JP_525
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln4m4u
false
null
t3_1ln4m4u
/r/LocalLLaMA/comments/1ln4m4u/why_does_api_release_of_major_models_takes_so/
false
false
self
0
null
Need your opinion please, appreciated.
1
**Hardware:** Old Dell E6440 — i5-4310M, 8GB RAM, integrated graphics (no GPU). This is just a fun side project (I use paid AI tools for serious tasks). I'm currently running **Llama-3.2-1B-Instruct-Q4_K_M** locally. It runs well and is useful for what it is as a side project, and some use cases work, but outputs ca...
2025-06-29T03:05:23
https://www.reddit.com/r/LocalLLaMA/comments/1ln4iyg/need_your_opinion_please_appreciated/
rakha589
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln4iyg
false
null
t3_1ln4iyg
/r/LocalLLaMA/comments/1ln4iyg/need_your_opinion_please_appreciated/
false
false
self
1
null
Do you use AI (like ChatGPT, Gemini, etc.) to develop your LangGraph agents? Or is it just my impostor syndrome talking?
1
Hey everyone 👋 I'm currently building multi-agent systems using LangGraph, mostly for personal/work projects. Lately I've been thinking a lot about how many developers actually rely on AI tools (like ChatGPT, Gemini, Claude, etc.) as coding copilots or even as design companions. I sometimes feel torn between: * *“Am ...
2025-06-29T02:19:33
https://www.reddit.com/r/LocalLLaMA/comments/1ln3pur/do_you_use_ai_like_chatgpt_gmini_etc_to_develop/
Ranteck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln3pur
false
null
t3_1ln3pur
/r/LocalLLaMA/comments/1ln3pur/do_you_use_ai_like_chatgpt_gmini_etc_to_develop/
false
false
self
1
null
Audio Input LLM
9
Are there any locally run LLMs with audio input and text output? I'm not looking for an LLM that simply uses Whisper behind the scenes, as I want it to account for how the user actually speaks. For example, it should be able to detect the user's accent, capture filler words like “ums,” note pauses or gaps, and analyze ...
2025-06-29T00:28:32
https://www.reddit.com/r/LocalLLaMA/comments/1ln1m7d/audio_input_llm/
TarunRaviYT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1m7d
false
null
t3_1ln1m7d
/r/LocalLLaMA/comments/1ln1m7d/audio_input_llm/
false
false
self
9
null
RLHF from scratch, step-by-step, in 3 Jupyter notebooks
77
I recently implemented Reinforcement Learning from Human Feedback (RLHF) fine-tuning, including Supervised Fine-Tuning (SFT), Reward Modeling, and Proximal Policy Optimization (PPO), using Hugging Face's GPT-2 model. The three steps are implemented in the three separate notebooks on GitHub: [https://github.com/ash80/RL...
2025-06-29T00:23:15
https://www.reddit.com/r/LocalLLaMA/comments/1ln1ij8/rlhf_from_scratch_stepbystep_in_3_jupyter/
ashz8888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1ij8
false
null
t3_1ln1ij8
/r/LocalLLaMA/comments/1ln1ij8/rlhf_from_scratch_stepbystep_in_3_jupyter/
false
false
self
77
{'enabled': False, 'images': [{'id': '7s3bdCbz00b-4oL43jM8ycVne6tu1R-3wHRvv8i_Qh0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7s3bdCbz00b-4oL43jM8ycVne6tu1R-3wHRvv8i_Qh0.png?width=108&crop=smart&auto=webp&s=718311ce447b521327cf71519b6b340c7be06b4d', 'width': 108}, {'height': 108, 'url': 'h...
Poro 2 model to Ollama
0
Hi, could someone wiser instruct me on how to import this model into Ollama? [https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) I have an AMD 7900 XTX.
2025-06-29T00:22:54
https://www.reddit.com/r/LocalLLaMA/comments/1ln1i9j/poro_2_model_to_ollama/
Rich_Artist_8327
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1i9j
false
null
t3_1ln1i9j
/r/LocalLLaMA/comments/1ln1i9j/poro_2_model_to_ollama/
false
false
self
0
null
Problems creating an executable with llama cpp
4
Hi everyone! I'm a Brazilian student and I'm trying to do my final project: a chatbot based on Mistral 7B that uses llama.cpp and LlamaIndex. It works very well, but when I tried to create an executable file using "onedir" in the Anaconda prompt, the generated executable doesn't work and gives me the error "...
2025-06-29T00:20:13
https://www.reddit.com/r/LocalLLaMA/comments/1ln1gdr/problems_creating_an_executable_with_llama_cpp/
Warm-Concern-6792
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1gdr
false
null
t3_1ln1gdr
/r/LocalLLaMA/comments/1ln1gdr/problems_creating_an_executable_with_llama_cpp/
false
false
self
4
null
Self-hosted AI productivity suite: CLI tool + local LLMs + semantic search - own your data
2
Been building **Logswise CLI** - a completely self-hosted AI productivity tool that runs entirely on your own infrastructure. No data ever leaves your control! 🔒 **🏠 Self-hosted stack:** - **Local LLMs** via Ollama (supports llama3, deepseek-coder, mistral, phi3, etc.) - **Local embedding models** (nomic-embed-text,...
2025-06-29T00:14:24
https://www.reddit.com/r/LocalLLaMA/comments/1ln1c83/selfhosted_ai_productivity_suite_cli_tool_local/
kayradev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1c83
false
null
t3_1ln1c83
/r/LocalLLaMA/comments/1ln1c83/selfhosted_ai_productivity_suite_cli_tool_local/
false
false
self
2
null
What's it currently like for people here running AMD GPUs with AI?
55
How is the support? What is the performance loss? I only really use LLMs with an RTX 3060 Ti. I want to switch to AMD due to their open-source drivers, and I'll be using a mix of Linux & Windows.
2025-06-29T00:11:26
https://www.reddit.com/r/LocalLLaMA/comments/1ln1a6u/whats_it_currently_like_for_people_here_running/
83yWasTaken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1a6u
false
null
t3_1ln1a6u
/r/LocalLLaMA/comments/1ln1a6u/whats_it_currently_like_for_people_here_running/
false
false
self
55
null
LOL this is AI
1
[removed]
2025-06-29T00:00:55
https://www.reddit.com/r/LocalLLaMA/comments/1ln12ny/lol_this_is_ai/
Previous-Amphibian23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln12ny
false
null
t3_1ln12ny
/r/LocalLLaMA/comments/1ln12ny/lol_this_is_ai/
false
false
self
1
null
A bunch of LLM FPHAM Python scripts I've added to my GitHub in recent days
14
Feel free to downvote me into the gutter, but these are some of the latest Stupid FPHAM Crap (S-FPHAM_C) Python scripts that I came up with: merge_lora_CPU [https://github.com/FartyPants/merge_lora_CPU](https://github.com/FartyPants/merge_lora_CPU) LoRA merging with a base model, primarily designed for CPU. This sc...
2025-06-28T23:57:43
https://www.reddit.com/r/LocalLLaMA/comments/1ln10a8/a_bunch_of_llm_fpham_python_scripts_ive_added_to/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln10a8
false
null
t3_1ln10a8
/r/LocalLLaMA/comments/1ln10a8/a_bunch_of_llm_fpham_python_scripts_ive_added_to/
false
false
self
14
{'enabled': False, 'images': [{'id': 'BT3M3iYf4yEFnx0Vz4O_MB7B4QL1AC2GX3cXr2sAK7A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BT3M3iYf4yEFnx0Vz4O_MB7B4QL1AC2GX3cXr2sAK7A.png?width=108&crop=smart&auto=webp&s=e42fe5f655ba1b49d55fb695a20f1efcc8f6f56b', 'width': 108}, {'height': 108, 'url': 'h...
Has anyone had any success training Orpheus TTS on a niche language?
6
What was the process like and how much data did you require? Are you happy with the speech quality? It seems to be one of the most capable models we have right now for generating human-like speech but I'm not sure if I should be looking for alternatives with lower parameters for better efficiency and usability.
2025-06-28T23:46:41
https://www.reddit.com/r/LocalLLaMA/comments/1ln0sgg/has_anyone_had_any_success_training_orpheus_tts/
PabloKaskobar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln0sgg
false
null
t3_1ln0sgg
/r/LocalLLaMA/comments/1ln0sgg/has_anyone_had_any_success_training_orpheus_tts/
false
false
self
6
null
Semantic Chunking vs. Pure Frustration — I Need Your Advice! 🙏🏼
1
[removed]
2025-06-28T23:43:54
https://www.reddit.com/r/LocalLLaMA/comments/1ln0qhc/semantic_chunking_vs_pure_frustration_i_need_your/
cerbulnegru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln0qhc
false
null
t3_1ln0qhc
/r/LocalLLaMA/comments/1ln0qhc/semantic_chunking_vs_pure_frustration_i_need_your/
false
false
self
1
null
Couple interesting LLM oddity comparisons ("surgeon's son" and "guess a number")
1
[removed]
2025-06-28T23:02:29
https://www.reddit.com/r/LocalLLaMA/comments/1lmzvr6/couple_interesting_llm_oddity_comparisons/
Syksyinen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmzvr6
false
null
t3_1lmzvr6
/r/LocalLLaMA/comments/1lmzvr6/couple_interesting_llm_oddity_comparisons/
false
false
self
1
null
Sydney4 beats ChatGPT 4o in existential crisis
0
# Hahaha, I somehow managed to delete my last post. Hilarious! # Hark! What is this wondrous Sydney of which you speak? [https://huggingface.co/FPHam/Clever\_Sydney-4\_12b\_GGUF](https://huggingface.co/FPHam/Clever_Sydney-4_12b_GGUF) Clever Sydney is none other than a revival of the original Microsoft Bing "Sydney",...
2025-06-28T22:55:15
https://i.redd.it/zigiq1auzq9f1.gif
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
1lmzqb9
false
null
t3_1lmzqb9
/r/LocalLLaMA/comments/1lmzqb9/sydney4_beats_chatgpt_4o_in_existential_crisis/
false
false
default
0
{'enabled': True, 'images': [{'id': 'zigiq1auzq9f1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=108&crop=smart&format=png8&s=5e704149f141f96e149111cfb55e0b9b8b3e567d', 'width': 108}, {'height': 185, 'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=216&crop=smart&format...
[image processing failed]
1
[deleted]
2025-06-28T22:54:02
[deleted]
1970-01-01T00:00:00
0
{}
1lmzpff
false
null
t3_1lmzpff
/r/LocalLLaMA/comments/1lmzpff/image_processing_failed/
false
false
default
1
null
Sydney-4 12b, beats ChatGPT 4o in stupidity.
1
# Hahaha, I somehow managed to delete my last post. Hilarious! # Hark! What is this wondrous Sydney of which you speak? [https://huggingface.co/FPHam/Clever\_Sydney-4\_12b\_GGUF](https://huggingface.co/FPHam/Clever_Sydney-4_12b_GGUF) Clever Sydney is none other than a revival of the original Microsoft Bing "Sydney",...
2025-06-28T22:52:09
https://v.redd.it/1yjmccsyyq9f1
FPham
v.redd.it
1970-01-01T00:00:00
0
{}
1lmznz6
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/1yjmccsyyq9f1/DASHPlaylist.mpd?a=1753743143%2CMTdhZDFiYjYxNjhmMGMyNTU0M2M4YjNkZGI1MzFmYjU3YzZiM2I4YmQ3ODUzMjU2MDBiMDg3MzQ2NmEzYWJmYQ%3D%3D&v=1&f=sd', 'duration': 2, 'fallback_url': 'https://v.redd.it/1yjmccsyyq9f1/DASH_480.mp4?source=fallback', 'has...
t3_1lmznz6
/r/LocalLLaMA/comments/1lmznz6/sydney4_12b_beats_chatgpt_4o_in_stupidity/
false
false
https://external-preview…88f3cdea6619370d
1
{'enabled': False, 'images': [{'id': 'OWk5MGZkc3l5cTlmMYMOg9NlFf9HDyWZ5ByUAMvWprLiw72KalIzTAbeLyeB', 'resolutions': [{'height': 92, 'url': 'https://external-preview.redd.it/OWk5MGZkc3l5cTlmMYMOg9NlFf9HDyWZ5ByUAMvWprLiw72KalIzTAbeLyeB.png?width=108&crop=smart&format=pjpg&auto=webp&s=36972576f22f1ff5ecca903e9aaa6e91cfc15...
[image processing failed]
1
[deleted]
2025-06-28T22:44:28
[deleted]
1970-01-01T00:00:00
0
{}
1lmzi4u
false
null
t3_1lmzi4u
/r/LocalLLaMA/comments/1lmzi4u/image_processing_failed/
false
false
default
1
null
[image processing failed]
1
[deleted]
2025-06-28T22:41:35
[deleted]
1970-01-01T00:00:00
0
{}
1lmzfz8
false
null
t3_1lmzfz8
/r/LocalLLaMA/comments/1lmzfz8/image_processing_failed/
false
false
default
1
null
Transformer ASIC 500k tokens/s
200
Saw this company in a post where they are claiming 500k tokens/s on Llama 70B models https://www.etched.com/blog-posts/oasis Impressive if true
2025-06-28T22:26:25
https://www.reddit.com/r/LocalLLaMA/comments/1lmz4kf/transformer_asic_500k_tokenss/
tvmaly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmz4kf
false
null
t3_1lmz4kf
/r/LocalLLaMA/comments/1lmz4kf/transformer_asic_500k_tokenss/
false
false
self
200
{'enabled': False, 'images': [{'id': 'KGId2lcbklkE9K29z5V0jlYcDMlclnkkrSts5o66a94', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KGId2lcbklkE9K29z5V0jlYcDMlclnkkrSts5o66a94.png?width=108&crop=smart&auto=webp&s=d4c7314a642dbdb9a07a46413459031d9be36d5c', 'width': 108}, {'height': 121, 'url': 'h...
The Orakle Manifesto: Or Why Your AI Apps (Should) Belong To You
4
2025-06-28T21:40:39
https://medium.com/@khromalabs/the-orakle-manifesto-or-why-your-ai-apps-should-belong-to-you-82bded655f7c
Ok_Peace9894
medium.com
1970-01-01T00:00:00
0
{}
1lmy53s
false
null
t3_1lmy53s
/r/LocalLLaMA/comments/1lmy53s/the_orakle_manifesto_or_why_your_ai_apps_should/
false
false
default
4
{'enabled': False, 'images': [{'id': 'X3Gj7FwZkSBGGKwGfb6FNMcuGYBg08qPOqFYtREASgc', 'resolutions': [{'height': 167, 'url': 'https://external-preview.redd.it/X3Gj7FwZkSBGGKwGfb6FNMcuGYBg08qPOqFYtREASgc.png?width=108&crop=smart&auto=webp&s=fa419cefa707cb36308573f3b7b7da1de8ed13ee', 'width': 108}, {'height': 335, 'url': '...
Testing a Flexible AI Chatbot Feedback from LocalLLaMA Users?
1
[removed]
2025-06-28T21:29:18
https://www.reddit.com/r/LocalLLaMA/comments/1lmxw28/testing_a_flexible_ai_chatbot_feedback_from/
Apple12Pi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxw28
false
null
t3_1lmxw28
/r/LocalLLaMA/comments/1lmxw28/testing_a_flexible_ai_chatbot_feedback_from/
false
false
self
1
null
Local AI conversational model for English language learning
5
I wanted to know if there is an app + model combination available which I can deploy locally on my Android that can work as an English conversation partner. I've been using ChatGPT, but their restrictions on daily usage became a burden. I have tried the Google AI Edge Gallery and Pocket Pal; while they do support loading varie...
2025-06-28T21:20:47
https://www.reddit.com/r/LocalLLaMA/comments/1lmxpis/local_ai_conversational_model_for_english/
nutty_cookie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxpis
false
null
t3_1lmxpis
/r/LocalLLaMA/comments/1lmxpis/local_ai_conversational_model_for_english/
false
false
self
5
null
We Built an Uncensored AI Chatbot: Looking for Feedback!
1
[removed]
2025-06-28T21:18:52
https://www.reddit.com/r/LocalLLaMA/comments/1lmxnzt/we_built_an_uncensored_ai_chatbot_looking_for/
Apple12Pi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxnzt
false
null
t3_1lmxnzt
/r/LocalLLaMA/comments/1lmxnzt/we_built_an_uncensored_ai_chatbot_looking_for/
false
false
self
1
null
Anyone used RAM across multiple networked devices?
0
If I have several Linux machines with DDR5 ram, 2x3090 on one machine, and a MacBook too does ktransformers or something else allow me to utilize the ram across all the machines for larger context and model sizes? Has anyone done this?
2025-06-28T21:10:32
https://www.reddit.com/r/LocalLLaMA/comments/1lmxhd7/anyone_used_ram_across_multiple_networked_devices/
bobbiesbottleservice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxhd7
false
null
t3_1lmxhd7
/r/LocalLLaMA/comments/1lmxhd7/anyone_used_ram_across_multiple_networked_devices/
false
false
self
0
null
The AutoInference library now supports major and popular backends for LLM inference, including Transformers, vLLM, Unsloth, and llama.cpp. ⭐
2
Auto-Inference is a Python library that provides a unified interface for model inference using several popular backends, including Hugging Face's Transformers, Unsloth, vLLM, and llama.cpp-python. Quantization support will be coming soon. GitHub: [https://github.com/VolkanSimsir/Auto-Inference](https://github.com/Volka...
2025-06-28T21:09:03
https://www.reddit.com/gallery/1lmxg89
According-Local-9704
reddit.com
1970-01-01T00:00:00
0
{}
1lmxg89
false
null
t3_1lmxg89
/r/LocalLLaMA/comments/1lmxg89/the_autoinference_library_now_supports_major_and/
false
false
https://external-preview…63ea8b5b7e84b358
2
{'enabled': True, 'images': [{'id': 'ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=108&crop=smart&auto=webp&s=ae5b394b47b0c2e4642b138ff0c240e9bcf6f161', 'width': 108}, {'height': 174, 'url': 'h...
Looking for a local LLM translator for large documents and specialized tools
4
* Specialized in translation, mostly from Spanish to English and Japanese. * A model that can be run locally, but I don't mind if it requires a high-end computer. * Should be able to translate very large texts (I'm talking about full novels here). I understand it would need to be divided into sections first, but I would l...
2025-06-28T21:06:01
https://www.reddit.com/r/LocalLLaMA/comments/1lmxduv/looking_for_a_local_llm_translator_for_large/
Keinart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxduv
false
null
t3_1lmxduv
/r/LocalLLaMA/comments/1lmxduv/looking_for_a_local_llm_translator_for_large/
false
false
self
4
null
Auto-Inference is a Python library that unifies LLM model inference across popular backends such as Transformers, Unsloth, vLLM, and llama.cpp. ⭐
2
Auto-Inference is a Python library that provides a unified interface for model inference using several popular backends, including Hugging Face's Transformers, Unsloth, vLLM, and llama.cpp-python. Quantization support will be coming soon. GitHub: [https://github.com/VolkanSimsir/Auto-Inference](https://github.com/Volka...
2025-06-28T21:01:36
https://www.reddit.com/gallery/1lmxa9o
According-Local-9704
reddit.com
1970-01-01T00:00:00
0
{}
1lmxa9o
false
null
t3_1lmxa9o
/r/LocalLLaMA/comments/1lmxa9o/autoinference_is_a_python_library_that_unifies/
false
false
https://external-preview…63ea8b5b7e84b358
2
{'enabled': True, 'images': [{'id': 'ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=108&crop=smart&auto=webp&s=ae5b394b47b0c2e4642b138ff0c240e9bcf6f161', 'width': 108}, {'height': 174, 'url': 'h...
NVIDIA acquires CentML. what does this mean for inference infra?
18
CentML, the startup focused on compiler/runtime optimization for AI inference, was just acquired by NVIDIA. Their work centered on making single-model inference faster and cheaper , via batching, quantization (AWQ/GPTQ), kernel fusion, etc. This feels like a strong signal: inference infra is no longer just a supportin...
2025-06-28T20:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1lmx8ic/nvidia_acquires_centml_what_does_this_mean_for/
pmv143
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmx8ic
false
null
t3_1lmx8ic
/r/LocalLLaMA/comments/1lmx8ic/nvidia_acquires_centml_what_does_this_mean_for/
false
false
self
18
null
Recent best models <=14b for agentic search?
2
wondering about this. I've had great results with perplexity, but who knows how long that gravy train will last. I have the brave API set up in Open WebUI. something local that will fit on 16gb and good with agentic search would be fantastic, and may be the push I need to set up SearXNG for full local research.
2025-06-28T20:28:24
https://www.reddit.com/r/LocalLLaMA/comments/1lmwjf2/recent_best_models_14b_for_agentic_search/
SpecialSauceSal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmwjf2
false
null
t3_1lmwjf2
/r/LocalLLaMA/comments/1lmwjf2/recent_best_models_14b_for_agentic_search/
false
false
self
2
null
Assistance for beginner in local LLM
2
Hello Community, I've recently started getting into local LLMs with the desire to build a local AI that I can use to automate some of my work and fulfill some personal projects of mine. So far I've tried models via LM Studio and integrated them with VS Code via the Continue plugin, but discovered that I can't use it as an agent that wa...
2025-06-28T19:58:04
https://www.reddit.com/r/LocalLLaMA/comments/1lmvv5e/assistance_for_beginner_in_local_llm/
JunkismyFunk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmvv5e
false
null
t3_1lmvv5e
/r/LocalLLaMA/comments/1lmvv5e/assistance_for_beginner_in_local_llm/
false
false
self
2
null
Best GGUF Base Models Under 3B for Unfiltered NSFW Roleplay?
0
Looking for a base model (not chat/instruct) under 3B for NSFW roleplay in ChatterUI on Android (Moto G Power, ~2GB RAM free). Needs to be GGUF, quantized (Q4/Q5), and fully uncensored — no filters, no refusals, no AI disclaimers. Already tried a few models. But never could get them to actually use explicit language. ...
2025-06-28T19:50:14
https://www.reddit.com/r/LocalLLaMA/comments/1lmvosa/best_gguf_base_models_under_3b_for_unfiltered/
PromptPunisher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmvosa
false
null
t3_1lmvosa
/r/LocalLLaMA/comments/1lmvosa/best_gguf_base_models_under_3b_for_unfiltered/
false
false
nsfw
0
null
Need Uncensored Base Model (<3B) for NSFW RP on ChatterUI
1
[removed]
2025-06-28T19:01:48
https://www.reddit.com/r/LocalLLaMA/comments/1lmukz3/need_uncensored_base_model_3b_for_nsfw_rp_on/
PromptPunisher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmukz3
false
null
t3_1lmukz3
/r/LocalLLaMA/comments/1lmukz3/need_uncensored_base_model_3b_for_nsfw_rp_on/
false
false
nsfw
1
null
The ollama models are excellent models that can be installed locally as a starting point but.....
0
For a long time I have spent hours and hours testing all the open-source models (on high-performance gaming PCs), so they all work well for me, and I must say that Ollama in all its variants is truly excellent. Lately I've been interested in LLMs that help you program, and I've noticed that almost all of them are in...
2025-06-28T18:19:37
https://www.reddit.com/r/LocalLLaMA/comments/1lmtlgp/the_ollama_models_are_excellent_models_that_can/
CodeStackDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmtlgp
false
null
t3_1lmtlgp
/r/LocalLLaMA/comments/1lmtlgp/the_ollama_models_are_excellent_models_that_can/
false
false
self
0
null
i5-8500 (6 cores), 24GB DDR4 2666 dual channel, realistic expectations for 3b/4b models?
8
I'm well aware my hardware is... not ideal... for running LLMs, but I thought I'd at least be able to run small 2B to 4B models at a decent clip. But even the E2B version of Gemma 3n seems fairly slow. The tk/s aren't so bad (~6-7 tk/s) but the prompt processing is pretty slow and CPU is pinned at 100% all cores for th...
2025-06-28T17:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1lmt3kt/i58500_6_cores_24gb_ddr4_2666_dual_channel/
redoubt515
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmt3kt
false
null
t3_1lmt3kt
/r/LocalLLaMA/comments/1lmt3kt/i58500_6_cores_24gb_ddr4_2666_dual_channel/
false
false
self
8
null
Can Copilot be trusted with private source code more than competition?
1
I have a project that I am thinking of using an LLM for, but there's no guarantee that LLM providers are not training on private source code. And for me using a local LLM is not an option since I don't have the required resources to locally run good performance LLMs, so I am thinking of cloud hosting an LLM for example...
2025-06-28T17:38:23
https://www.reddit.com/r/LocalLLaMA/comments/1lmsme1/can_copilot_be_trusted_with_private_source_code/
Professional-Onion-7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmsme1
false
null
t3_1lmsme1
/r/LocalLLaMA/comments/1lmsme1/can_copilot_be_trusted_with_private_source_code/
false
false
self
1
null
Using AI talk to text to record notes directly into an application
1
[removed]
2025-06-28T17:08:57
https://www.reddit.com/r/LocalLLaMA/comments/1lmrwve/using_ai_talk_to_text_to_record_notes_directly/
LTunicorn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmrwve
false
null
t3_1lmrwve
/r/LocalLLaMA/comments/1lmrwve/using_ai_talk_to_text_to_record_notes_directly/
false
false
self
1
null
Mercury Diffusion - 700t/s !!
0
Inception labs just released mercury general. Flash 2.5 is probably the best go-to fast model for me, so i threw in the same system / user message and had my mind blown by Mercury 700+t/s!!!! test here: [playground](https://chat.inceptionlabs.ai/)
2025-06-28T17:06:21
https://www.reddit.com/r/LocalLLaMA/comments/1lmrump/mercury_diffusion_700ts/
LeatherRub7248
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmrump
false
null
t3_1lmrump
/r/LocalLLaMA/comments/1lmrump/mercury_diffusion_700ts/
false
false
self
0
null
Multimodal Multistage Reasoning
0
Check out the first consumer-sized multimodal reasoning model with Claude-style multi-stage reasoning. Would love to hear your feedback! https://huggingface.co/amine-khelif/MaVistral-GGUF
2025-06-28T16:57:10
https://i.redd.it/a5k3d6h18p9f1.jpeg
AOHKH
i.redd.it
1970-01-01T00:00:00
0
{}
1lmrmnz
false
null
t3_1lmrmnz
/r/LocalLLaMA/comments/1lmrmnz/multimodal_multistage_reasoning/
false
false
default
0
{'enabled': True, 'images': [{'id': 'a5k3d6h18p9f1', 'resolutions': [{'height': 180, 'url': 'https://preview.redd.it/a5k3d6h18p9f1.jpeg?width=108&crop=smart&auto=webp&s=dea2ccc90b3ef4e80ab748abbf5dec9b821de88d', 'width': 108}, {'height': 360, 'url': 'https://preview.redd.it/a5k3d6h18p9f1.jpeg?width=216&crop=smart&auto=...
Looking for Android chat ui
5
I am looking for android user interfaces that can use custom endpoints. Latex and websearch is s must for me. I love chatterui but it doesn't have the features. Chatbox AI is fine but websearch doesn't work consistently. I dont prefer running webui through termux unless it really worths. Also I may use local models (v...
2025-06-28T16:45:48
https://www.reddit.com/r/LocalLLaMA/comments/1lmrd6x/looking_for_android_chat_ui/
fatihmtlm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmrd6x
false
null
t3_1lmrd6x
/r/LocalLLaMA/comments/1lmrd6x/looking_for_android_chat_ui/
false
false
self
5
null
Gemma3n:2B and Gemma3n:4B models are ~40% slower than equivalent models in size running on Llama.cpp
36
Am I missing something? llama3.2:3B gives me 29 t/s, but Gemma3n:2B is only doing 22 t/s. Is it still not fully supported? The VRAM footprint is indeed that of a 2B, but the performance sucks.
2025-06-28T16:42:50
https://www.reddit.com/r/LocalLLaMA/comments/1lmranc/gemma3n2b_and_gemma3n4b_models_are_40_slower_than/
simracerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmranc
false
null
t3_1lmranc
/r/LocalLLaMA/comments/1lmranc/gemma3n2b_and_gemma3n4b_models_are_40_slower_than/
false
false
self
36
null
Not everything should be vibe coded
0
AI makes it really easy to build fast but if you skip planning the whole thing ends up fragile. I’ve seen so many projects that looked great early on but fall apart once real users hit them. Stuff like edge cases, missing validation, no fallback handling. All avoidable. What helped was writing even the simplest spec b...
2025-06-28T16:32:28
https://www.reddit.com/r/LocalLLaMA/comments/1lmr1yo/not_everything_should_be_vibe_coded/
eastwindtoday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmr1yo
false
null
t3_1lmr1yo
/r/LocalLLaMA/comments/1lmr1yo/not_everything_should_be_vibe_coded/
false
false
self
0
null
EPYC cpu build. Which cpu? (9354, 9534, 9654)
7
I already have 3x RTX 5090 and 1x RTX 5070 Ti. Planning to buy Supermicro H13SSL-N motherboard and 12 sticks of Supermicro MEM-DR564MC-ER56 RAM. I want run models like DeepSeek-R1. I don’t know which CPU to choose or what factors matter most. The EPYC 9354 has higher clock speeds than the 9534 and 9654 but fewer ...
2025-06-28T16:32:12
https://www.reddit.com/r/LocalLLaMA/comments/1lmr1qh/epyc_cpu_build_which_cpu_9354_9534_9654/
Ok-Exchange-6413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmr1qh
false
null
t3_1lmr1qh
/r/LocalLLaMA/comments/1lmr1qh/epyc_cpu_build_which_cpu_9354_9534_9654/
false
false
self
7
null
How can I improve a RAG system?
0
I've been working for some time on a personal project based on RAG. At first, using LLMs like NVIDIA's and an embedding model (all-MiniLM-L6-v2), I got moderately acceptable answers on basic PDF documents, but when business-type documents came up (with structures that differ from one another, tables, graph...
2025-06-28T16:22:23
https://www.reddit.com/r/LocalLLaMA/comments/1lmqtby/como_mejorar_un_sistema_rag/
mathiasmendoza123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmqtby
false
null
t3_1lmqtby
/r/LocalLLaMA/comments/1lmqtby/como_mejorar_un_sistema_rag/
false
false
self
0
null
deepseek-r1-0528 ranked #2 on lmarena, matching best from chatgpt
73
An open weights model matching the best from closed AI. Seems quite impressive to me. What do you think? https://preview.redd.it/mgu6oo7n1p9f1.png?width=2249&format=png&auto=webp&s=d375709b8e115ace177d0510bec0a16ad31d568e
2025-06-28T16:21:43
https://www.reddit.com/r/LocalLLaMA/comments/1lmqsru/deepseekr10528_ranked_2_on_lmarena_matching_best/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmqsru
false
null
t3_1lmqsru
/r/LocalLLaMA/comments/1lmqsru/deepseekr10528_ranked_2_on_lmarena_matching_best/
false
false
https://b.thumbs.redditm…--vwGaVrlpzQ.jpg
73
null
FOR SALE: AI Chatbot + Image Gen System
1
-LLM: Gemma 2B IT/9B IT/Deepseek R1/Any model needed – Image: SDXL base + refiner – UI: Gradio with text + image output – Deployment: Optimized for RunPod A40 and A100 – Deliverables: Full Colab notebook (.ipynb), custom system prompts, model links, usage rights…
2025-06-28T15:53:14
https://www.reddit.com/r/LocalLLaMA/comments/1lmq46j/for_sale_ai_chatbot_image_gen_system/
Clevo007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmq46j
false
null
t3_1lmq46j
/r/LocalLLaMA/comments/1lmq46j/for_sale_ai_chatbot_image_gen_system/
false
false
self
1
null
Multistage Reasoning Multimodal
0
Check out the first consumer-sized multimodal reasoning model with Claude-style multi-stage reasoning. Would love to hear your feedback!
2025-06-28T15:39:37
https://huggingface.co/amine-khelif/MaVistral-GGUF
AOHKH
huggingface.co
1970-01-01T00:00:00
0
{}
1lmpspk
false
null
t3_1lmpspk
/r/LocalLLaMA/comments/1lmpspk/multistage_reasoning_multimodal/
false
false
https://external-preview…26686620f03557bd
0
{'enabled': False, 'images': [{'id': 'RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ.png?width=108&crop=smart&auto=webp&s=2d6d0b70e34c0d999ef31521b3f74c152a1182c6', 'width': 108}, {'height': 116, 'url': 'h...
Best model tuned specifically for Programming?
8
I am looking for the best local LLMs that I can use with cursor for my professional work. So, I am will to invest few grands on the GPU. Which are the best models tfor GPUs with 12gb, 16gb and 24gb vram?
2025-06-28T15:21:29
https://www.reddit.com/r/LocalLLaMA/comments/1lmpd8j/best_model_tuned_specifically_for_programming/
Fragrant-Review-5055
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmpd8j
false
null
t3_1lmpd8j
/r/LocalLLaMA/comments/1lmpd8j/best_model_tuned_specifically_for_programming/
false
false
self
8
null
support for the upcoming ERNIE 4.5 0.3B model has been merged into llama.cpp
73
Baidu has announced that it will officially release the ERNIE 4.5 models as open source on June 30, 2025
2025-06-28T15:10:04
https://github.com/ggml-org/llama.cpp/pull/14408
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1lmp3en
false
null
t3_1lmp3en
/r/LocalLLaMA/comments/1lmp3en/support_for_the_upcoming_ernie_45_03b_model_has/
false
false
https://external-preview…92c9f7b52b154726
73
{'enabled': False, 'images': [{'id': 'STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=108&crop=smart&auto=webp&s=5dfa9b9565cdcffed4ab542623d5cddf4a3c6f51', 'width': 108}, {'height': 108, 'url': 'h...
model : add support for ERNIE 4.5 0.3B model by ownia · Pull Request #14408 · ggml-org/llama.cpp
1
Support for the upcoming ERNIE 4.5 0.3B model has been merged into llama.cpp. Baidu has announced that it will officially release the ERNIE 4.5 models as open source on June 30, 2025
2025-06-28T15:08:16
https://github.com/ggml-org/llama.cpp/pull/14408
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1lmp1vw
false
null
t3_1lmp1vw
/r/LocalLLaMA/comments/1lmp1vw/model_add_support_for_ernie_45_03b_model_by_ownia/
false
false
https://external-preview…92c9f7b52b154726
1
{'enabled': False, 'images': [{'id': 'STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=108&crop=smart&auto=webp&s=5dfa9b9565cdcffed4ab542623d5cddf4a3c6f51', 'width': 108}, {'height': 108, 'url': 'h...
Link between LM Studio and tools/functions?
3
I have been looking around for hours and I am spinning my wheels... I recently started playing with a GGUF quant of THUDM/GLM-Z1-Rumination-32B-0414, and I'm really impressed with the multi-turn search functionality. I'd love to see if I could make additional tools, and review the code of the existing ones build throu...
2025-06-28T14:55:17
https://www.reddit.com/r/LocalLLaMA/comments/1lmoqsl/link_between_lm_studio_and_toolsfunctions/
Danfhoto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmoqsl
false
null
t3_1lmoqsl
/r/LocalLLaMA/comments/1lmoqsl/link_between_lm_studio_and_toolsfunctions/
false
false
self
3
null
Idea to Audio in Under 10 Seconds: How Vaanika Crushes Creative Friction
0
2025-06-28T14:34:20
https://medium.com/@rudransh.agnihotri/idea-to-audio-in-under-10-seconds-how-vaanika-crushes-creative-friction-fe72ee9015ea
Technical_Detail_739
medium.com
1970-01-01T00:00:00
0
{}
1lmo9fr
false
null
t3_1lmo9fr
/r/LocalLLaMA/comments/1lmo9fr/idea_to_audio_in_under_10_seconds_how_vaanika/
false
false
default
0
null
Play Infinite Tic Tac Toe against LLM Models
0
I have integrated different LLMs into my Infinite Tic Tac Toe game and they play better than I thought. The above gameplay is against GPT-4.1 Nano, but there are more LLMs available in the game to play with. P.S.: The game in the video wasn't staged; the LLM actually tricked me into those positions. Also, I have combine...
2025-06-28T14:34:10
https://v.redd.it/v346kcuiio9f1
BestDay8241
v.redd.it
1970-01-01T00:00:00
0
{}
1lmo9b2
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/v346kcuiio9f1/DASHPlaylist.mpd?a=1753713265%2CNTYzMGZhMzk0YzZlZjk1YTZlZGJjMTQ3ZjAyYjQ2YTBjNTliZWM4MzkwZmM1OGQxNjI0NWMzYmY3YTE4YTg2MQ%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/v346kcuiio9f1/DASH_720.mp4?source=fallback', 'ha...
t3_1lmo9b2
/r/LocalLLaMA/comments/1lmo9b2/play_infinite_tic_tac_toe_against_llm_models/
false
false
https://external-preview…c2e6684263e15dca
0
{'enabled': False, 'images': [{'id': 'eW1iZnkwbGlpbzlmMdghNcZFxp7Uwzy1nMBv_wTWuJViBRKggIUMZrhlyGhz', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/eW1iZnkwbGlpbzlmMdghNcZFxp7Uwzy1nMBv_wTWuJViBRKggIUMZrhlyGhz.png?width=108&crop=smart&format=pjpg&auto=webp&s=79a555f7f1ac6b4b7ac3b64087b69b4a7573...
What framework are you using to build AI Agents?
117
Hey, if anyone here is building AI Agents for production what framework are you using? For research and building leisure projects, I personally use langgraph. I wanted to also know if you are not using langgraph, what was the reason?
2025-06-28T14:00:09
https://www.reddit.com/r/LocalLLaMA/comments/1lmni3q/what_framework_are_you_using_to_build_ai_agents/
PleasantInspection12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmni3q
false
null
t3_1lmni3q
/r/LocalLLaMA/comments/1lmni3q/what_framework_are_you_using_to_build_ai_agents/
false
false
self
117
null
What are Coqui-TTS alternatives?
3
I'm working on a project and want to use an open source TTS model that is better or at least as good as coqui-tts
2025-06-28T13:43:50
https://www.reddit.com/r/LocalLLaMA/comments/1lmn5k2/what_are_coquitts_alternatives/
Ok-Photograph4994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmn5k2
false
null
t3_1lmn5k2
/r/LocalLLaMA/comments/1lmn5k2/what_are_coquitts_alternatives/
false
false
self
3
null
Which are the best realistic video generation tools
1
Which are the best realistic video generation tools and which of them are paid online, and which can be run locally?
2025-06-28T13:33:07
https://www.reddit.com/r/LocalLLaMA/comments/1lmmxh1/which_are_the_best_realistic_video_generation/
Rich_Artist_8327
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmmxh1
false
null
t3_1lmmxh1
/r/LocalLLaMA/comments/1lmmxh1/which_are_the_best_realistic_video_generation/
false
false
self
1
null
Many small evals are better than one big eval [techniques]
28
Hi everyone! I've been building AI products for 9 years (at my own startup, then at Apple, now at a second startup) and learned a lot along the way. I’ve been talking to a bunch of folks about evals lately, and I’ve realized most people aren’t creating them because they don’t know how to get started. **TL;DR** You pro...
2025-06-28T13:30:39
https://www.reddit.com/r/LocalLLaMA/comments/1lmmvmj/many_small_evals_are_better_than_one_big_eval/
davernow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmmvmj
false
null
t3_1lmmvmj
/r/LocalLLaMA/comments/1lmmvmj/many_small_evals_are_better_than_one_big_eval/
false
false
self
28
{'enabled': False, 'images': [{'id': 'XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM.png?width=108&crop=smart&auto=webp&s=f7d2c98f11ee7e007262b0eeb2d4b47eee7e6c7a', 'width': 108}, {'height': 108, 'url': 'h...
Consumer hardware landscape for local LLMs June 2025
49
As a follow-up to [this](https://www.reddit.com/r/LocalLLaMA/comments/1lmf42g/which_is_the_best_16gb_nvidia_gpu_with_balanced/), where OP asked for best 16GB GPU "with balanced price and performance". For models where "model size" \* "user performance requirements" in total require more bandwidth than CPU/system memo...
2025-06-28T13:10:39
https://www.reddit.com/r/LocalLLaMA/comments/1lmmh3l/consumer_hardware_landscape_for_local_llms_june/
ethertype
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmmh3l
false
null
t3_1lmmh3l
/r/LocalLLaMA/comments/1lmmh3l/consumer_hardware_landscape_for_local_llms_june/
false
false
self
49
null
120 AI Chat - Native macOS Chat App with Ollama Support
0
Hi everyone, Just wanted to share a new version of **120 AI Chat**, a native macOS app we've been building that now fully supports local LLMs via Ollama. **Local Model Support (via Ollama)** * Llama 3.2 * Mistral 7B * Deepseek R1 **Useful features for local use** * Full chat parameter controls (context, temp, pena...
2025-06-28T12:06:18
https://v.redd.it/uh08zboorn9f1
120-dev
v.redd.it
1970-01-01T00:00:00
0
{}
1lml8lx
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/uh08zboorn9f1/DASHPlaylist.mpd?a=1753704393%2CNTcxN2MzMTdjNjVkOGQ5NGI1N2I4ODUyMzBmZGZlZGYyYzYzYWQ5ZDAzYTY2ODFlYWQyODRmMzJmNjQxODM2ZQ%3D%3D&v=1&f=sd', 'duration': 96, 'fallback_url': 'https://v.redd.it/uh08zboorn9f1/DASH_1080.mp4?source=fallback', 'h...
t3_1lml8lx
/r/LocalLLaMA/comments/1lml8lx/120_ai_chat_native_macos_chat_app_with_ollama/
false
false
https://external-preview…b9c5ed9cc042e45c
0
{'enabled': False, 'images': [{'id': 'enZqcmxkb29ybjlmMQ-0kr0dvIqEj1m9cNIUU1HaIqaql-U6zYfTL-_CgtJS', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/enZqcmxkb29ybjlmMQ-0kr0dvIqEj1m9cNIUU1HaIqaql-U6zYfTL-_CgtJS.png?width=108&crop=smart&format=pjpg&auto=webp&s=5a2073985c06517281775762f5e68d8dfc4c0...
Using local models with Void
8
TLDR; local models like Gemma 27b, Qwen 3 32b can't use the file edit tool in void code I'm trying to create a simple snake game to test. So far, I've been failing with almost all of the Gemma 4/12/27 models; Qwen 32b seems to do a bit better, but still breaks with editing files. Anyone has had any luck with Void...
2025-06-28T12:02:58
https://www.reddit.com/r/LocalLLaMA/comments/1lml6eo/using_local_models_with_void/
nuketro0p3r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lml6eo
false
null
t3_1lml6eo
/r/LocalLLaMA/comments/1lml6eo/using_local_models_with_void/
false
false
self
8
null
Benchmarking LLM Inference Libraries for Token Speed & Energy Efficiency
0
We conducted a benchmark comparing four popular LLM inference libraries—TensorRT-LLM, vLLM, Ollama, and MLC—in terms of energy per token and tokens per second, using a standardized Docker setup and energy monitoring tools. The benchmark project was originally done for a university report. Experiment Details • Model: Qu...
2025-06-28T11:31:47
https://www.reddit.com/gallery/1lmkmkn
alexbaas3
reddit.com
1970-01-01T00:00:00
0
{}
1lmkmkn
false
null
t3_1lmkmkn
/r/LocalLLaMA/comments/1lmkmkn/benchmarking_llm_inference_libraries_for_token/
false
false
https://external-preview…9f0089a1b855a004
0
{'enabled': True, 'images': [{'id': 'QJniZOTcXtSaa9bcNvaP-NS4E_BJMhxENtBTX3I0wk4', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/QJniZOTcXtSaa9bcNvaP-NS4E_BJMhxENtBTX3I0wk4.jpeg?width=108&crop=smart&auto=webp&s=8348840b607d1953ae51352d80841a355ec795d0', 'width': 108}, {'height': 129, 'url': 'h...
What is your use-case for self-hosting an LLM instead of using an API from a provider?
1
[removed]
2025-06-28T10:59:44
https://www.reddit.com/r/LocalLLaMA/comments/1lmk31o/what_is_your_usecase_for_selfhosting_an_llm/
g15mouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmk31o
false
null
t3_1lmk31o
/r/LocalLLaMA/comments/1lmk31o/what_is_your_usecase_for_selfhosting_an_llm/
false
false
self
1
null
Progress stalled in non-reasoning open-source models?
247
Not sure if you've noticed, but a lot of model providers no longer explicitly note that their models are reasoning models (on benchmarks in particular). Reasoning models aren't ideal for every application. I looked at the non-reasoning benchmarks on [Artificial Analysis](https://artificialanalysis.ai/models/llama-4-ma...
2025-06-28T10:58:35
https://i.redd.it/q53t8do2fn9f1.png
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1lmk2dj
false
null
t3_1lmk2dj
/r/LocalLLaMA/comments/1lmk2dj/progress_stalled_in_nonreasoning_opensource_models/
false
false
default
247
{'enabled': True, 'images': [{'id': 'q53t8do2fn9f1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/q53t8do2fn9f1.png?width=108&crop=smart&auto=webp&s=12e5ec06f68e30807329419d5fe1dbf669b5da76', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/q53t8do2fn9f1.png?width=216&crop=smart&auto=web...
Good Courses to Learn and Use Local LLaMA Models?
4
Hey everyone, I'm interested in learning how to run and work with local LLaMA models (especially for personal or offline use). Are there any good beginner-to-advanced courses or tutorials you'd recommend? I'm open to paid or free options — just want something practical that covers setup, usage, and maybe fine-tunin...
2025-06-28T10:48:37
https://www.reddit.com/r/LocalLLaMA/comments/1lmjwtu/good_courses_to_learn_and_use_local_llama_models/
Blackverb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmjwtu
false
null
t3_1lmjwtu
/r/LocalLLaMA/comments/1lmjwtu/good_courses_to_learn_and_use_local_llama_models/
false
false
self
4
null
Hi everyone, I have a problem with fine tuning LLM on law
3
I used 1500 rows from this dataset [https://huggingface.co/datasets/Pravincoder/law\_llm\_dataSample](https://huggingface.co/datasets/Pravincoder/law_llm_dataSample) to fine-tune the unsloth/Llama-3.2-3B-Instruct model using an Unsloth notebook. When running 10 epochs, the loss decreased from 1.65 to 0.2, but after runnin...
2025-06-28T10:40:28
https://www.reddit.com/r/LocalLLaMA/comments/1lmjs43/hi_everyone_i_have_a_problem_with_fine_tuning_llm/
Winter_Address2969
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmjs43
false
null
t3_1lmjs43
/r/LocalLLaMA/comments/1lmjs43/hi_everyone_i_have_a_problem_with_fine_tuning_llm/
false
false
self
3
{'enabled': False, 'images': [{'id': 'Ui67_vJAnZ8n21iF9KGXDPxF7y-gH_ecZvo7GY5JO24', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ui67_vJAnZ8n21iF9KGXDPxF7y-gH_ecZvo7GY5JO24.png?width=108&crop=smart&auto=webp&s=4322d88a09a0c1aeb52f783f62f28db0a13a24ac', 'width': 108}, {'height': 116, 'url': 'h...
Archiving data from here - For Everyone - For open knowledge
33
# Hey everyone! 👋 I’ve built an **open snapshot** of this sub to help preserve its discussions, experiments, and resources for all of us — especially given how uncertain things can get with subs lately. This little bot quietly **fetches and stores new posts every hour**, so all the local LLM experiments, model drops...
2025-06-28T10:22:46
https://www.reddit.com/r/LocalLLaMA/comments/1lmjimi/archiving_data_from_here_for_everyone_for_open/
maifee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmjimi
false
null
t3_1lmjimi
/r/LocalLLaMA/comments/1lmjimi/archiving_data_from_here_for_everyone_for_open/
false
false
self
33
null
Helping Archive r/LocalLLaMA - For Everyone - For open knowledge
1
[removed]
2025-06-28T10:18:10
https://www.reddit.com/r/LocalLLaMA/comments/1lmjg3p/helping_archive_rlocalllama_for_everyone_for_open/
maifee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmjg3p
false
null
t3_1lmjg3p
/r/LocalLLaMA/comments/1lmjg3p/helping_archive_rlocalllama_for_everyone_for_open/
false
false
self
1
null
Clever Sydney 12b - Your Friendly Existential Crisis AI
37
Nobody cares, I am sure you noticed, as even I am tired of caring about it, too. Instead, we move on, as I do, to where I was suddenly inspired to create a new Fabulous FPHAM Masterpiece (F-FPHAM-M) from the huge trove of essays, articles and guides that I have written about LLMs over the last couple of years for myse...
2025-06-28T09:47:29
https://i.redd.it/5omdpfwbum9f1.png
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
1lmizi2
false
null
t3_1lmizi2
/r/LocalLLaMA/comments/1lmizi2/clever_sydney_12b_your_friendly_existential/
false
false
default
37
{'enabled': True, 'images': [{'id': '5omdpfwbum9f1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/5omdpfwbum9f1.png?width=108&crop=smart&auto=webp&s=d6778ef0e7c1fb2426e328373eb2c2b1daf6cab7', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/5omdpfwbum9f1.png?width=216&crop=smart&auto=web...
What is the process of knowledge distillation and fine tuning?
5
How was DeepSeek and other highly capable new models born? 1) SFT on data obtained from large models 2) using data from large models, train a reward model, then RL from there 3) feed the entire chain of logits into the new model (but how does work, I still cant understand)
2025-06-28T09:43:11
https://www.reddit.com/r/LocalLLaMA/comments/1lmix4b/what_is_the_process_of_knowledge_distillation_and/
JadedFig5848
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmix4b
false
null
t3_1lmix4b
/r/LocalLLaMA/comments/1lmix4b/what_is_the_process_of_knowledge_distillation_and/
false
false
self
5
null
We created world's first AI model that does Intermediate reasoning || Defeated models like deepseek and o1 in maths bench mark
128
We at HelpingAI were fed up with thinking model taking so much tokens, and being very pricy. So, we decided to take a very different approach towards reasoning. Unlike, traditional ai models which reasons on top and then generate response, our ai model do reasoning in middle of response (Intermediate reasoning). Which ...
2025-06-28T09:05:06
https://www.reddit.com/r/LocalLLaMA/comments/1lmictu/we_created_worlds_first_ai_model_that_does/
Quiet-Moment-338
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmictu
false
null
t3_1lmictu
/r/LocalLLaMA/comments/1lmictu/we_created_worlds_first_ai_model_that_does/
false
false
self
128
null
AGI/ASI Research 20250627- Corporate Artificial General Intelligence
0
2025-06-28T09:00:11
https://v.redd.it/nx53pm8crm9f1
Financial_Pick8394
/r/LocalLLaMA/comments/1lmia7k/agiasi_research_20250627_corporate_artificial/
1970-01-01T00:00:00
0
{}
1lmia7k
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/nx53pm8crm9f1/DASHPlaylist.mpd?a=1753822817%2CYjE5ZjFhMDRhMTdmMjYxMmEyOGEwOWQyZGQyZjNhZWQ1MjNmOTNjYjM2MWFjNjU2NWI3NWI2YzE2YWQ2Y2Q5Zg%3D%3D&v=1&f=sd', 'duration': 749, 'fallback_url': 'https://v.redd.it/nx53pm8crm9f1/DASH_720.mp4?source=fallback', 'h...
t3_1lmia7k
/r/LocalLLaMA/comments/1lmia7k/agiasi_research_20250627_corporate_artificial/
false
false
https://external-preview…e43b65300372b301
0
{'enabled': False, 'images': [{'id': 'd3p2em1tOGNybTlmMVf3JyrUkTQn5vtAp2PGOuMh1G-ctVuL3R6fAH-layfy', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d3p2em1tOGNybTlmMVf3JyrUkTQn5vtAp2PGOuMh1G-ctVuL3R6fAH-layfy.png?width=108&crop=smart&format=pjpg&auto=webp&s=273285525c86b3207a048463194cc09f5fe71...
Gemini CLI + ZentaraCode/RooCode = free top LLM + free top Code Assistant = FREE wonderful coding !!!
0
2025-06-28T07:12:30
https://i.redd.it/26kql0ylbm9f1.png
bn_from_zentara
i.redd.it
1970-01-01T00:00:00
0
{}
1lmgp62
false
null
t3_1lmgp62
/r/LocalLLaMA/comments/1lmgp62/gemini_cli_zentaracoderoocode_free_top_llm_free/
false
false
default
0
{'enabled': True, 'images': [{'id': '26kql0ylbm9f1', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/26kql0ylbm9f1.png?width=108&crop=smart&auto=webp&s=78ae924b559a0872093947d8ff6aff5e3f68c1c7', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/26kql0ylbm9f1.png?width=216&crop=smart&auto=we...
Gemini CLI + ZentaraCode/RooCode = free top LLM + free top Code Assistant = FREE wonderful coding !!!
0
[removed]
2025-06-28T07:09:40
https://i.redd.it/twffoi66am9f1.png
bn_from_zentara
i.redd.it
1970-01-01T00:00:00
0
{}
1lmgnmi
false
null
t3_1lmgnmi
/r/LocalLLaMA/comments/1lmgnmi/gemini_cli_zentaracoderoocode_free_top_llm_free/
false
false
default
0
{'enabled': True, 'images': [{'id': 'twffoi66am9f1', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/twffoi66am9f1.png?width=108&crop=smart&auto=webp&s=adab2442a382da057f6a3ff4fd33eef618cb7e86', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/twffoi66am9f1.png?width=216&crop=smart&auto=we...