title: string (length 1-300)
score: int64 (0-8.54k)
selftext: string (length 0-41.5k)
created: timestamp[ns] (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
url: string (length 0-878)
author: string (length 3-20)
domain: string (length 0-82)
edited: timestamp[ns] (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
gilded: int64 (0-2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646-1.8k)
name: string (length 10)
permalink: string (length 33-82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4-213)
ups: int64 (0-8.54k)
preview: string (length 301-5.01k)
Interesting evidence that Openrouter's Polaris Alpha was Gemini 3.0
0
If true, that would mean Google uses OpenRouter, so you'll be able to watch for future Gemini models there. Perhaps someone else can confirm whether Google uses OpenRouter; I can't. Here's my evidence: I have a complex prompt that only a few models can currently do well. Polaris Alpha did it in a particular way that was impressive and unique -- no model I have tested has done anything even remotely similar. It also succeeded every time I tested it, where most models take three or more 0-shot attempts before producing a working version. Gemini 3.0 gives results similar to Polaris Alpha's, but judge for yourself. Here is the prompt: `Create a noninteractive html file which implements ping pong buffers in webgl. The ping pong buffer should render the previous frame at partial opacity with additive blending to a black screen with a fragment shader applied. The fragment shader should distort the previous frame in interesting ways as it is rendered to the new frame. This rerendering makes a bleed and blur effect. The initial color that is bled should be seeded with another fragment shader that simulates fluid dynamics. 
Many aspects of the shaders should change such as color and characteristics of the distortion.` Polaris Alpha 0-shot: [https://codepen.io/gsaslwez-the-flexboxer/pen/qEbzbKW](https://codepen.io/gsaslwez-the-flexboxer/pen/qEbzbKW) Gemini 3.0 0-shots (3 of 3 attempts): [https://codepen.io/gsaslwez-the-flexboxer/pen/JoXJLZg](https://codepen.io/gsaslwez-the-flexboxer/pen/JoXJLZg) [https://codepen.io/gsaslwez-the-flexboxer/pen/yyOXKqq](https://codepen.io/gsaslwez-the-flexboxer/pen/yyOXKqq) [https://codepen.io/gsaslwez-the-flexboxer/pen/dPMRmqo](https://codepen.io/gsaslwez-the-flexboxer/pen/dPMRmqo) GPT 5.1 Thinking 0-shot (1 of 2 attempts, other failed): [https://codepen.io/gsaslwez-the-flexboxer/pen/QwNgmzm](https://codepen.io/gsaslwez-the-flexboxer/pen/QwNgmzm) You can check out results from other models for that prompt and they generally don't work or are very basic/buggy. To me this is a clear indication that Polaris Alpha was Gemini 3.0.
2025-11-18T23:11:23
https://www.reddit.com/r/LocalLLaMA/comments/1p0r5je/interesting_evidence_that_openrouters_polaris/
1ncehost
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0r5je
false
null
t3_1p0r5je
/r/LocalLLaMA/comments/1p0r5je/interesting_evidence_that_openrouters_polaris/
false
false
self
0
null
Larger model Q_5 or smaller model Q_8?
1
[removed]
2025-11-18T23:04:53
https://www.reddit.com/r/LocalLLaMA/comments/1p0qzrl/larger_model_q_5_or_smaller_model_q_8/
urrgkh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0qzrl
false
null
t3_1p0qzrl
/r/LocalLLaMA/comments/1p0qzrl/larger_model_q_5_or_smaller_model_q_8/
false
false
self
1
null
How to make autocomplete not generate comments?
0
I am using a qwen2.5-coder:14b model I created in Ollama from ipex-llm\[cpp\] (Intel GPU stuff). I created it using a Modelfile, and all I did was increase the context to 16k. I am using Tabby on IntelliJ to provide the autocompletion. This is my autocomplete config from Tabby: ``` [model.completion.http] kind = "ollama/completion" model_name = "qwen2.5-coder:14b-16k" api_endpoint = "http://0.0.0.0:11434" prompt_template = "<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>" ``` It works great, but it generates comments all the time and I don't want that. I want it to generate comments only if there is a comment on the line immediately before or after the current line. Any ideas on how I could specify this in the prompt or somewhere else? I tried adding "Do not generate comments" before the FIM tokens, but that didn't seem to work.
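For reference, the FIM template above can be assembled (and worked around) in a few lines. This is a sketch, not Tabby's internals: `build_fim_prompt` mirrors the `prompt_template` from the config, and `strip_comment_lines` is a hypothetical post-processing alternative -- text placed before the FIM control tokens is treated as code context, which is likely why a plain-English instruction is ignored.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt in the Qwen2.5-Coder format
    used by the Tabby config above."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

def strip_comment_lines(completion: str, marker: str = "//") -> str:
    """Hypothetical post-processing alternative: drop comment-only lines
    from the completion instead of asking the model not to write them."""
    kept = [ln for ln in completion.splitlines()
            if not ln.lstrip().startswith(marker)]
    return "\n".join(kept)
```

Post-processing like this is deterministic, whereas prompt-level instructions compete with the model's FIM training.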
2025-11-18T22:49:03
https://www.reddit.com/r/LocalLLaMA/comments/1p0qlky/how_to_make_autocomplete_not_generate_comments/
WizardlyBump17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0qlky
false
null
t3_1p0qlky
/r/LocalLLaMA/comments/1p0qlky/how_to_make_autocomplete_not_generate_comments/
false
false
self
0
null
Dealing with multiple versions of llama.cpp
0
I used `brew` to install `llama.cpp`, but since it only uses my CPU, and I have a dGPU available in my laptop, I want to now try building `llama.cpp` from the GitHub repo using the CUDA build method to get it to use my dGPU. How do I set up the new `llama.cpp` instance so that I can call it specifically, without accidentally calling the brew version?
2025-11-18T22:40:08
https://www.reddit.com/r/LocalLLaMA/comments/1p0qdkw/dealing_with_multiple_versions_of_llamacpp/
VegetableJudgment971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0qdkw
false
null
t3_1p0qdkw
/r/LocalLLaMA/comments/1p0qdkw/dealing_with_multiple_versions_of_llamacpp/
false
false
self
0
null
Offline Epstein File Ranker Using GPT-OSS-120B (Built on tensonaut’s dataset)
189
I’ve been playing with the new 25k-page Epstein Files drop that [tensonaut posted](https://www.reddit.com/r/LocalLLaMA/comments/1ozu5v4/20000_epstein_files_in_a_single_text_file). Instead of reading 100MB of chaotic OCR myself like a medieval scribe, I threw an open-source model at it and built a local tool that **ranks every document by “investigative usefulness.”** Everything runs on a single M3 Max MacBook Pro with **open-source** models only. No cloud, no API calls, no data leaving the machine. **What it does** • Streams the entire House Oversight release through **openai/gpt-oss-120b** running locally via LM Studio. • Scores each passage based on actionable leads, controversy, novelty, and power-linkage. • Outputs a fully structured JSONL dataset with headline, score, key insights, implicated actors, financial-flow notes, etc. • Ships with an interactive local viewer so you can filter by score, read full source text, explore lead types, and inspect charts. • Designed for investigative triage, RAG, IR experiments, or academic analysis. **Why it matters** This corpus is massive, messy, and full of OCR noise. Doing a systematic pass manually is impossible. Doing it with cloud models would be expensive and slow. Doing it locally means it’s cheap, private, and reproducible. A full run costs about **$1.50 in electricity**. 
**Tech details** • Model: openai/gpt-oss-120b served at `localhost:5002/v1` • Hardware: M3 Max, 128 GB RAM • Viewer: simple JS dashboard with AG Grid, charts, and chunked JSONL loading • Input dataset: [tensonaut’s EPSTEIN\_FILES\_20K on Hugging Face](https://huggingface.co/datasets/tensonaut/EPSTEIN_FILES_20K) • Output: ranked chunks in `contrib/`, auto-indexed by the viewer • Prompt: optimized for investigative lead scoring, with a consistent numerical scale (0–100) Repo: [https://github.com/latent-variable/epstein-ranker](https://github.com/latent-variable/epstein-ranker) So far I’ve processed the first 5,000 rows myself and published the scored chunks in the repo. If anyone wants to help triage more of the dataset, the GitHub includes simple instructions for claiming a slice and submitting it as a contrib chunk. The workflow supports clean collaboration with automatic deduping. If you’d rather build your own tools on top of the scored output or adapt the ranking method for other document dumps, go for it. Everything is MIT-licensed, fully local, and easy to extend. Contributions, forks, or experiments are all welcome.
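The triage loop described above can be sketched against any OpenAI-compatible local server (the post names `localhost:5002/v1` and `openai/gpt-oss-120b`). This is an illustrative sketch, not the repo's actual code: the system prompt, `parse_score` helper, and output schema here are assumptions.

```python
import json
import re
import urllib.request

ENDPOINT = "http://localhost:5002/v1/chat/completions"  # endpoint from the post

def parse_score(reply: str) -> int:
    """Pull a 0-100 score out of a model reply and clamp it defensively;
    OCR-noisy inputs produce noisy outputs. (Hypothetical helper, not the
    repo's actual parser.)"""
    m = re.search(r"\b(\d{1,3})\b", reply)
    score = int(m.group(1)) if m else 0
    return max(0, min(100, score))

def score_chunk(text: str, model: str = "openai/gpt-oss-120b") -> dict:
    """Send one passage to the local server and return a JSONL-ready record."""
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Rate investigative usefulness 0-100. Reply with a number."},
            {"role": "user", "content": text},
        ],
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    return {"text": text, "score": parse_score(reply)}
```

Each record can then be appended to a `.jsonl` file, one JSON object per line, which is what makes chunked contribution and deduping straightforward.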
2025-11-18T22:29:30
https://i.redd.it/nkktzj83y22g1.png
onil_gova
i.redd.it
1970-01-01T00:00:00
0
{}
1p0q3z1
false
null
t3_1p0q3z1
/r/LocalLLaMA/comments/1p0q3z1/offline_epstein_file_ranker_using_gptoss120b/
false
false
default
189
{'enabled': True, 'images': [{'id': 'nkktzj83y22g1', 'resolutions': [{'height': 171, 'url': 'https://preview.redd.it/nkktzj83y22g1.png?width=108&crop=smart&auto=webp&s=d85d6202968741fa53ccc63beff3859cb6f1b43e', 'width': 108}, {'height': 342, 'url': 'https://preview.redd.it/nkktzj83y22g1.png?width=216&crop=smart&auto=webp&s=f7b131a2120c4d64ac0831c4fc1dc6ffcea63b7b', 'width': 216}, {'height': 507, 'url': 'https://preview.redd.it/nkktzj83y22g1.png?width=320&crop=smart&auto=webp&s=980a273d45f6f634c2e11c08d0404d427d427afe', 'width': 320}, {'height': 1015, 'url': 'https://preview.redd.it/nkktzj83y22g1.png?width=640&crop=smart&auto=webp&s=a55f8da7446aedd9d2f482226b72c19b4e4ebbf9', 'width': 640}, {'height': 1523, 'url': 'https://preview.redd.it/nkktzj83y22g1.png?width=960&crop=smart&auto=webp&s=709c4ffb45d9107775808f67da629bf4d0939cbe', 'width': 960}], 'source': {'height': 1658, 'url': 'https://preview.redd.it/nkktzj83y22g1.png?auto=webp&s=2f84b571c5620bf41fc8545839e5c9cdc04be249', 'width': 1045}, 'variants': {}}]}
What skills and courses do I need in order to break into AI from an unrelated field (linguistics & e-commerce advertising)?
0
Hello everyone: I'm looking for a career change and I've narrowed my fields of interest down to a few, AI being one of them.  TL;DR right away: I'm working in advertising, have a BA in linguistics, and would like to switch careers. What would I need to do for a career in AI, and are jobs or projects available remotely? LONG VERSION: Before I continue, let me clarify that I understand I can only enter your field through a very junior, low-level position, or some menial part-time gigs/something of that sort. I want to emphasize this because I sincerely hope no one feels like I am disrespecting their profession by wanting to switch careers with a simple 2-month course or something.  I am currently working in e-commerce advertising and I'm severely burnt out from it. I would like to switch to a field that inspires me, and I'm looking around for industries that make sense in terms of actually getting a job + that I would actually like to work in. AI development stood out the most to me. I ended up in advertising “accidentally”; I have a BA in linguistics which I was hoping I could use for an AI position somehow… so I applied for a data annotator job at X and got rejected. That was my only application, which bummed me out a bit, because oddly I was quite a good fit based on the job description. I don’t have to be a data annotator, even though I do believe it would be the most seamless transition and require the least from me in terms of obtaining new qualifications.  But after years of working with advertising reports I realized I’m much better at some “mathematical” skills than I previously thought; it’s actually one of the most enjoyable parts of this job for me (I briefly considered data analysis, which would probably be an easier switch, but I don’t quite like the job description upon learning more about it). I think AI makes more sense for the future, even inside my current field. So, if I want to learn about ways to develop AI, what skillset would I need? 
Where could I start? Could you recommend a SERIOUS course? Once that’s done, what would I need to showcase my skills to potential employers? Where can small gigs and similar resume-boosting jobs be found? Which people should I follow on LinkedIn? Is there another network or a website where I can learn and follow important people from the industry? Lastly, from a purely practical point of view: how typical is it to work remotely and hire internationally in this industry? I live in a small town in Eastern Europe, the capital may have my desired job (or may not), but working remotely is almost a non-negotiable for me at this stage of my life, and will remain for several more years. 
2025-11-18T22:28:46
https://www.reddit.com/r/LocalLLaMA/comments/1p0q3bp/what_skills_and_courses_do_i_need_in_order_to/
GlitteringCap3570
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0q3bp
false
null
t3_1p0q3bp
/r/LocalLLaMA/comments/1p0q3bp/what_skills_and_courses_do_i_need_in_order_to/
false
false
self
0
null
Students: What’s one thing you wish ChatGPT did better for homework or studying?
0
I’m doing some research on how students use AI for homework, studying, and problem-solving. If you use ChatGPT (or any AI) for STEM classes, math, physics, engineering, chem, etc, what’s ONE thing you wish it did better? Examples: * step-by-step clarity * diagrams or visuals * less BS / fewer mistakes * matching your professor’s method * faster explanations * multiple explanation styles * better LaTeX * practice problems Genuinely curious what the biggest pain point is for you all. Trying to understand how students study now.
2025-11-18T22:17:45
https://www.reddit.com/r/LocalLLaMA/comments/1p0pt26/students_whats_one_thing_you_wish_chatgpt_did/
Background_Film_1338
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0pt26
false
null
t3_1p0pt26
/r/LocalLLaMA/comments/1p0pt26/students_whats_one_thing_you_wish_chatgpt_did/
false
false
self
0
null
Code Web Chat - now supports Gemini 3 Pro in AI Studio and Ollama
1
A free and open-source plugin I work on that helps you AI pair program with chatbots like AI Studio now supports Gemini 3. I've also added an Ollama provider for local inference. Code Web Chat is an initiative promoting AI coding without agents. I believe this approach is safer and more token-efficient, and therefore faster.
2025-11-18T22:17:24
https://marketplace.visualstudio.com/items?itemName=robertpiosik.gemini-coder
robertpiosik
marketplace.visualstudio.com
1970-01-01T00:00:00
0
{}
1p0psp1
false
null
t3_1p0psp1
/r/LocalLLaMA/comments/1p0psp1/code_web_chat_now_supports_gemini_3_pro_in_ai/
false
false
default
1
null
Blogs to Follow
5
I'm not in the AI space directly, but I want to be aware of all the happenings in the industry without being overloaded with too-specific posts. For example, I don't know when RAG was first developed, but that was a major development milestone (maybe it's been around a while?). Any suggestions for blogs to follow that give insights into new developments in the AI world, in terms of new technology and software as it becomes available?
2025-11-18T21:41:47
https://www.reddit.com/r/LocalLLaMA/comments/1p0ovc7/blogs_to_follow/
TopNo6605
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0ovc7
false
null
t3_1p0ovc7
/r/LocalLLaMA/comments/1p0ovc7/blogs_to_follow/
false
false
self
5
null
Best Edge AI LLM Model: End of 2025
7
Hi, let's talk real LocalLLaMA. I'm looking for an edge AI model, something ultra small (roughly 700 MB-1400 MB), capable of running on phones and small devices, from the CLI, everywhere, without video cards. What is the current best edge LLM model?
2025-11-18T21:37:42
https://www.reddit.com/r/LocalLLaMA/comments/1p0orkq/best_edge_ai_llm_model_end_of_2025/
AleksHop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0orkq
false
null
t3_1p0orkq
/r/LocalLLaMA/comments/1p0orkq/best_edge_ai_llm_model_end_of_2025/
false
false
self
7
{'enabled': False, 'images': [{'id': 'HCU1znTqm_kTDnL_9dq4WqE77baFM0QPwcafs5dndEQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HCU1znTqm_kTDnL_9dq4WqE77baFM0QPwcafs5dndEQ.png?width=108&crop=smart&auto=webp&s=33d2a486bc5c9fd6d935e5b58976ff8f4173f7c8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HCU1znTqm_kTDnL_9dq4WqE77baFM0QPwcafs5dndEQ.png?width=216&crop=smart&auto=webp&s=ff75afb9ac143cf8f799435f6e2101f96670e8f2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HCU1znTqm_kTDnL_9dq4WqE77baFM0QPwcafs5dndEQ.png?width=320&crop=smart&auto=webp&s=301f001ac974d1a9368d6d135640815d2ca107ce', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HCU1znTqm_kTDnL_9dq4WqE77baFM0QPwcafs5dndEQ.png?width=640&crop=smart&auto=webp&s=22ee611a5f01a6d2ef718f6b85b2b7d7c56620bd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HCU1znTqm_kTDnL_9dq4WqE77baFM0QPwcafs5dndEQ.png?width=960&crop=smart&auto=webp&s=bd4e53805368c984d480c8bfcdfd27dccdb82d0d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HCU1znTqm_kTDnL_9dq4WqE77baFM0QPwcafs5dndEQ.png?width=1080&crop=smart&auto=webp&s=75218e46c01e74cece84b34c22840776de9200e8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HCU1znTqm_kTDnL_9dq4WqE77baFM0QPwcafs5dndEQ.png?auto=webp&s=9086caa166e7bcb230d6d615fe2f3a48ae6058c1', 'width': 1200}, 'variants': {}}]}
Nvidia Parakeet-Realtime-EOU-120m-v1
57
Parakeet-Realtime-EOU-120m-v1 is a streaming speech-recognition model that also performs end-of-utterance (EOU) detection. It achieves low latency (80-160 ms) and signals EOU by emitting an <EOU> token at the end of each utterance. The model supports only English and does not output punctuation or capitalization.
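The `<EOU>` convention makes downstream segmentation trivial: split the token stream whenever the marker appears. A minimal sketch of that consumer logic follows; the actual NeMo streaming inference API is not shown, and `segment_utterances` is an illustrative helper, not part of the model's toolkit.

```python
def segment_utterances(tokens):
    """Group a stream of ASR tokens into utterances, splitting on the
    model's end-of-utterance marker. A trailing partial utterance
    (no <EOU> yet) is kept so the caller can keep accumulating."""
    current, utterances = [], []
    for tok in tokens:
        if tok == "<EOU>":
            if current:
                utterances.append(" ".join(current))
            current = []
        else:
            current.append(tok)
    if current:
        utterances.append(" ".join(current))
    return utterances
```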
2025-11-18T21:30:09
https://huggingface.co/nvidia/parakeet_realtime_eou_120m-v1
nuclearbananana
huggingface.co
1970-01-01T00:00:00
0
{}
1p0okh8
false
null
t3_1p0okh8
/r/LocalLLaMA/comments/1p0okh8/nvidia_parakeetrealtimeeou120mv1/
false
false
default
57
{'enabled': False, 'images': [{'id': 'zVPL4n_nWpqoPYqwS2dM60dbdwGWNNEtSCu33kDP7a0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zVPL4n_nWpqoPYqwS2dM60dbdwGWNNEtSCu33kDP7a0.png?width=108&crop=smart&auto=webp&s=cbc66c6aa84b7246dbb3c470f3785d46bc9233fe', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zVPL4n_nWpqoPYqwS2dM60dbdwGWNNEtSCu33kDP7a0.png?width=216&crop=smart&auto=webp&s=5a3a70150e6409453667397b6fd14eaed5c09267', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zVPL4n_nWpqoPYqwS2dM60dbdwGWNNEtSCu33kDP7a0.png?width=320&crop=smart&auto=webp&s=f60e5b4614d7c2862e72b53ac6688c1cf31be973', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zVPL4n_nWpqoPYqwS2dM60dbdwGWNNEtSCu33kDP7a0.png?width=640&crop=smart&auto=webp&s=595bf506e0637ff1f4f22d1c370ac082639d0dbd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zVPL4n_nWpqoPYqwS2dM60dbdwGWNNEtSCu33kDP7a0.png?width=960&crop=smart&auto=webp&s=05051d0ad4d051ae2783fa754c07f99d3dbef5d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zVPL4n_nWpqoPYqwS2dM60dbdwGWNNEtSCu33kDP7a0.png?width=1080&crop=smart&auto=webp&s=f3ca98723e44060dbd3670d7088e8f7cf426f58a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zVPL4n_nWpqoPYqwS2dM60dbdwGWNNEtSCu33kDP7a0.png?auto=webp&s=6d7e861d25b65ed8bb7849f2661c5c79c2837f92', 'width': 1200}, 'variants': {}}]}
Fix for Google Antigravity Infinite Login Loop on macOS.
0
If anyone else is struggling to get past the initial infinite loading loop on the sign-in window today for the Antigravity IDE Setup Wizard, I found a workaround: Ignore the stuck setup wizard. Go to the menu bar: Antigravity > Settings > Antigravity Settings. Open the Account tab. Sign in through that panel instead. Restart the app. Continue through the setup wizard as normal; when you sign in again, it should detect the session and let you in. It seems to be a specific conflict with macOS 26.1 Tahoe on Apple Silicon. The `Antigravity Helper` process is getting blocked by the App Sandbox when trying to receive the OAuth handoff token from the browser. (system) <Warning>: denied lookup: name = com.apple.tccd.system, flags = 0x8, requestor = Antigravity Hel, error = 159: Sandbox restriction (gui/501) <Warning>: denied lookup: name = com.apple.distributed_notifications@Uv3, requestor = Antigravity Hel, error = 159: Sandbox restriction The Setup Wizard UI is waiting for a distributed notification that macOS is blocking. The "Settings" menu likely uses a different auth flow that bypasses this specific Helper process. My Specs: OS: macOS 26.1 Tahoe Hardware: Apple Silicon (M2) App: Google Antigravity (Public Preview, Nov 18) The issue is likely the recent macOS 26 platform update. Apple significantly tightened security for background helpers in this release, and Google likely tested against older kernels. Hope this helps!
2025-11-18T21:30:07
https://www.reddit.com/r/LocalLLaMA/comments/1p0okfp/fix_for_google_antigravity_infinite_login_loop_on/
thr33eyedraven
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0okfp
false
null
t3_1p0okfp
/r/LocalLLaMA/comments/1p0okfp/fix_for_google_antigravity_infinite_login_loop_on/
false
false
self
0
null
This CDW deal has to be a scam??
0
They're selling [AMD Instinct MI210 64gb](https://www.cdw.com/product/amd-instinct-mi210-4x-infinity-fabric-accelerator-graphic-card/7837317?cm_ven=acquirgy&cm_cat=google&cm_pla=NA-NA-AMD_VA&cm_ite=7837317&ef_id=CjwKCAiAz_DIBhBJEiwAVH2XwI3HS6S0YWnaSRqheN-J8CvZdAQjmfZDbxEVwroc0dt4PxfxFMwQfBoCxjAQAvD_BwE:G:s&s_kwcid=AL!4223!3!!!!x!!!21551756139!&gad_source=1&gad_campaignid=21551758680&gbraid=0AAAAADqLdeIGagfFrxbYwkkc0OQTTebLn&gclid=CjwKCAiAz_DIBhBJEiwAVH2XwI3HS6S0YWnaSRqheN-J8CvZdAQjmfZDbxEVwroc0dt4PxfxFMwQfBoCxjAQAvD_BwE) for ~$600. What am I missing? Surely this is a scam?
2025-11-18T21:24:23
https://www.reddit.com/r/LocalLLaMA/comments/1p0of2f/this_cdw_deal_has_to_be_a_scam/
Blotsy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0of2f
false
null
t3_1p0of2f
/r/LocalLLaMA/comments/1p0of2f/this_cdw_deal_has_to_be_a_scam/
false
false
self
0
null
[D] What's the one thing you wish you'd known before putting an LLM app in production?
2
We're about to launch our first AI-powered feature (been in beta for a few weeks) and I have that feeling like I'm missing something important. Everyone talks about prompt engineering and model selection, but what about cost monitoring? Handling rate limits? What breaks first when you go from 10 users to 10,000? Would love to hear lessons learned from people who've been through this.
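One concrete answer on the cost-monitoring question: meter spend per request from the token counts the API response already returns, rather than discovering the bill at month's end. A minimal sketch -- the class name and the rates passed in are placeholders, substitute your provider's actual pricing:

```python
class CostMeter:
    """Track rough per-request spend from prompt/completion token counts.
    Rates are given in USD per million tokens."""

    def __init__(self, usd_per_1m_in: float, usd_per_1m_out: float):
        self.in_rate = usd_per_1m_in / 1_000_000
        self.out_rate = usd_per_1m_out / 1_000_000
        self.total = 0.0

    def record(self, prompt_tokens: int, completion_tokens: int) -> float:
        """Record one request; returns its cost and accumulates the total."""
        cost = prompt_tokens * self.in_rate + completion_tokens * self.out_rate
        self.total += cost
        return cost
```

Hooking something like this into request logging also surfaces the runaway-prompt bugs (ever-growing chat histories, retry loops) that tend to appear between 10 and 10,000 users.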
2025-11-18T21:22:09
https://www.reddit.com/r/LocalLLaMA/comments/1p0ocz3/d_whats_the_one_thing_you_wish_youd_known_before/
Bbamf10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0ocz3
false
null
t3_1p0ocz3
/r/LocalLLaMA/comments/1p0ocz3/d_whats_the_one_thing_you_wish_youd_known_before/
false
false
self
2
null
Help running internet-access model on M1 16gb air
0
Hi, I am trying to run GPT-OSS on an M1 16 GB MacBook Air; at first it would not run at all. Then I used a command to increase the available RAM, but it still only uses 13 GB because of background processes. Is there a smaller model I can run that can do research on the web and perform tasks based on findings from the internet? Or do I need a larger laptop? Or is there a better way to run GPT-OSS?
2025-11-18T21:16:50
https://www.reddit.com/r/LocalLLaMA/comments/1p0o7w0/help_running_internetaccess_model_on_m1_16gb_air/
GottBigBalls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0o7w0
false
null
t3_1p0o7w0
/r/LocalLLaMA/comments/1p0o7w0/help_running_internetaccess_model_on_m1_16gb_air/
false
false
self
0
null
Base or Instruct models for MCQA evaluation
0
Hello everyone, I am still learning about LLMs and I have a question concerning MCQA benchmarks: if I want to evaluate LLMs on MCQA, what type of models should I use? Base models, instruct models, or both? Thanks for your help.
2025-11-18T20:34:23
https://www.reddit.com/r/LocalLLaMA/comments/1p0n3fv/base_or_instruct_models_for_mcqa_evaluation/
Difficult_Face5166
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0n3fv
false
null
t3_1p0n3fv
/r/LocalLLaMA/comments/1p0n3fv/base_or_instruct_models_for_mcqa_evaluation/
false
false
self
0
null
Built a tool to solve the "how much GPU do I actually need?" problem for LLM deployment
11
I've been running LLMs locally and kept hitting the same frustrating issue: trying to figure out if a model will actually fit on my hardware, what batch size to use, and whether quantization is worth it. After doing manual calculations one too many times, I built **kv-planner** \- an open-source tool that does the math for you. **What it does:** * **Memory planning**: Uses PagedAttention math (from vLLM paper) to calculate actual memory usage with <4% fragmentation instead of the 60-80% you get with naive allocation * **Performance prediction**: Roofline analysis tells you if you're compute-bound or memory-bound, and what your expected throughput/latency will be * **Quantization tradeoffs**: Quantified comparison of FP16 vs FP8 vs INT8 vs INT4 (memory savings, speed, quality impact) * **Cost analysis**: If you're renting GPUs, calculates $/million tokens and TCO * **Laptop GPU support**: This was a big one - discovered laptop GPUs run at 7-33% of desktop performance due to thermal throttling. The tool automatically adjusts predictions. **Example use case:** # Want to run Llama-3.2-8B on your RTX 4090? kv-planner plan --model meta-llama/Llama-3.2-8B-Instruct \ --gpu RTX-4090 --rps 10 --optimization-goal balanced # Output tells you: # - Recommended precision: FP8 # - Batch size: 128 # - Expected throughput: 6,292 tokens/sec # - Memory usage: 15.2GB / 24GB # - Plus full vLLM config you can copy-paste **Validation:** Tested on my RTX 5060 Laptop running TinyLlama - predictions were 95%+ accurate after accounting for laptop thermal throttling (which drops performance to \~7% of desktop equivalent, ouch). 
**Tech details:** * Physics-based modeling (not just rules of thumb) * Supports 28+ GPUs (H100, A100, RTX 50/40/30 series) * Built on research from vLLM, FlashAttention, Roofline Model papers * Python API + CLI * Exports vLLM/TensorRT-LLM configs **GitHub:** [https://github.com/h9-tec/KV-planner](https://github.com/h9-tec/KV-planner) The biggest surprise was how much laptop GPUs underperform vs desktop (7-33% retention). If you're benchmarking on a laptop, expect way lower numbers than the model cards suggest. Open to feedback and contributions! Let me know if there are features you'd find useful. **TL;DR:** Made a tool that tells you exactly what GPU you need, what settings to use, and what performance to expect for running LLMs locally. It's free and open-source.
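The core memory math a planner like this automates is the standard KV-cache formula: two tensors (K and V) per layer, each sized batch x kv_heads x seq_len x head_dim. A back-of-envelope sketch, with Llama-3-8B-style geometry assumed (32 layers, 8 GQA KV heads, head_dim 128 -- these numbers are illustrative, not taken from the tool):

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, dtype_bytes: int = 2) -> int:
    """Bytes needed for the KV cache: 2 tensors (K and V) per layer,
    each holding [batch, kv_heads, seq_len, head_dim] values."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Assumed Llama-3-8B-style geometry at 8k context, batch 1, FP16:
gib = kv_cache_bytes(32, 8, 128, seq_len=8192, batch=1) / 2**30  # -> 1.0 GiB
```

This is also why quantizing the cache (FP8/INT8) or using paged allocation matters: the cache grows linearly with both sequence length and batch size, while the weights stay fixed.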
2025-11-18T20:18:20
https://www.reddit.com/r/LocalLLaMA/comments/1p0morx/built_a_tool_to_solve_the_how_much_gpu_do_i/
1Hesham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0morx
false
null
t3_1p0morx
/r/LocalLLaMA/comments/1p0morx/built_a_tool_to_solve_the_how_much_gpu_do_i/
false
false
self
11
{'enabled': False, 'images': [{'id': 'NDO1ASKQN6SB-ISiHs7-sEfo0LWhOotAJHgoSqkloow', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NDO1ASKQN6SB-ISiHs7-sEfo0LWhOotAJHgoSqkloow.png?width=108&crop=smart&auto=webp&s=74507fa7421c88ddddc5e55743ad13650c008552', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NDO1ASKQN6SB-ISiHs7-sEfo0LWhOotAJHgoSqkloow.png?width=216&crop=smart&auto=webp&s=2994b8d0362d06e82849fae7c1bf9d02a93845fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NDO1ASKQN6SB-ISiHs7-sEfo0LWhOotAJHgoSqkloow.png?width=320&crop=smart&auto=webp&s=ef475a4d0a8fdea3395d0c2780c1fced7ea87237', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NDO1ASKQN6SB-ISiHs7-sEfo0LWhOotAJHgoSqkloow.png?width=640&crop=smart&auto=webp&s=0702f58b31cb0de03d6a6fd238d0bea4ecd8f0c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NDO1ASKQN6SB-ISiHs7-sEfo0LWhOotAJHgoSqkloow.png?width=960&crop=smart&auto=webp&s=b076db29f6853ba6729be065466cfd6bdbfe863c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NDO1ASKQN6SB-ISiHs7-sEfo0LWhOotAJHgoSqkloow.png?width=1080&crop=smart&auto=webp&s=f2abbda0a5f44b33aee12f8ebab697dbdcf5c7d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NDO1ASKQN6SB-ISiHs7-sEfo0LWhOotAJHgoSqkloow.png?auto=webp&s=634fbece856bb7133e253dd85c81eab50e0801ff', 'width': 1200}, 'variants': {}}]}
Is MOMENTUM BY movementlabs.ai GLM 4.6? I don't think so.
30
After looking around the web I decided to run a few tests myself on Cerebras GLM 4.6 and Momentum by [movementlabs.ai](http://movementlabs.ai) \- I tested this prompt for myself and boy oh boy, totally different results https://reddit.com/link/1p0misd/video/o2di8w46o22g1/player [Movementlabs.ai pelican riding bike svg](https://reddit.com/link/1p0misd/video/hdd8nmbin22g1/player) [Cerebras Pelican riding bike svg test](https://preview.redd.it/kwxtxiybo22g1.png?width=2490&format=png&auto=webp&s=e24064794b92d7236a70fa6f6e7a49733bc95a64) https://preview.redd.it/ymnyffmgo22g1.png?width=2948&format=png&auto=webp&s=1271f5ec5fad4ebc6ea0d81e8fec2f72099f31f7 https://preview.redd.it/kouvkd1io22g1.png?width=2350&format=png&auto=webp&s=5cf3b4d8d54fe3753deede1c08d0cb71bcfeda4f https://preview.redd.it/1dr8tnfuo22g1.png?width=1920&format=png&auto=webp&s=60612b690a96fe2a9c6a7c5c2e68023040c63cdc https://preview.redd.it/g14yrt6zo22g1.png?width=1772&format=png&auto=webp&s=aeb016ad37aef39b2f7f9313011cce3e253fe16c https://preview.redd.it/4gouxbx0p22g1.png?width=1704&format=png&auto=webp&s=a70e94a7efd85d5e2214d51ca8fae955616f2282 Let me know your thoughts..
2025-11-18T20:12:07
https://www.reddit.com/r/LocalLLaMA/comments/1p0misd/is_momentum_by_movementlabsai_glm_46_i_dont_think/
Vast_Cupcake1039
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0misd
false
null
t3_1p0misd
/r/LocalLLaMA/comments/1p0misd/is_momentum_by_movementlabsai_glm_46_i_dont_think/
false
false
https://b.thumbs.redditm…jIH5Wr-5UWNw.jpg
30
null
Make your AI talk like a caveman and decrease token usage
557
I’ve been working on a little side project to help LLMs talk like… cavemen. Why? To save tokens, of course. It works because LLMs can easily fill in grammar and connectives on their own. So we strip what’s predictable, keep what’s meaningful, and the model still understands everything perfectly. Store RAG documents in caveman-compressed form so each chunk carries more valuable data, fits more context, and gives better retrieval quality. Thought I'd share it here as it might be beneficial in order to not waste tokens on unnecessary words :) Feel free to contribute if you have any additions! [https://github.com/wilpel/caveman-compression](https://github.com/wilpel/caveman-compression)
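The idea can be approximated in a few lines: drop high-predictability function words and let the model reinfer them from context. This is an illustration only -- the word list below is a guess, and the linked repo uses its own compression rules:

```python
# Illustrative stopword set; the actual repo's rules will differ.
STOP = {"the", "a", "an", "is", "are", "was", "were", "of", "to", "and",
        "that", "this", "it", "in", "on", "for", "with", "as", "be"}

def caveman(text: str) -> str:
    """Drop predictable connectives and articles; an LLM can usually
    reconstruct them, so the surviving words carry more signal per token."""
    kept = [w for w in text.split() if w.lower() not in STOP]
    return " ".join(kept)
```

The trade-off to watch is retrieval quality on negations and qualifiers ("not", "except"), which are short but not predictable; any real stopword list has to keep those.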
2025-11-18T19:39:38
https://i.redd.it/7g67ftgti22g1.png
RegionCareful7282
i.redd.it
1970-01-01T00:00:00
0
{}
1p0lnlo
false
null
t3_1p0lnlo
/r/LocalLLaMA/comments/1p0lnlo/make_your_ai_talk_like_a_caveman_and_decrease/
false
false
default
557
{'enabled': True, 'images': [{'id': '7g67ftgti22g1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/7g67ftgti22g1.png?width=108&crop=smart&auto=webp&s=85cece61b5bdf578851ac6d52f773d7ce2722bea', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/7g67ftgti22g1.png?width=216&crop=smart&auto=webp&s=6b63e3fbfed5aefbb46b610523eea06503b46bb3', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/7g67ftgti22g1.png?width=320&crop=smart&auto=webp&s=faf0118fb52d02661f924fa879a40959c0bdcec0', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/7g67ftgti22g1.png?width=640&crop=smart&auto=webp&s=d7d9207d83386575ef61218ed4c0a30301826b10', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/7g67ftgti22g1.png?width=960&crop=smart&auto=webp&s=fed7886398d1db4b0b9f49be64867aa45a38223c', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/7g67ftgti22g1.png?width=1080&crop=smart&auto=webp&s=a19bc19596c18260d7e318a45e5c358b6639525e', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/7g67ftgti22g1.png?auto=webp&s=8740f8cff8e74552c91f4b48ae1b452b430fc56d', 'width': 1536}, 'variants': {}}]}
Best Framework for Building a Local Deep Research Agent to Extract Financial Data from 70-Page PDFs?
2
🎯 My Use Case I’m working on an agricultural economics project where I need to automatically process lengthy PDF reports (50-200 pages) and extract structured financial data into Excel spreadsheets. Input: PDF report (~70 pages on average) containing economic/financial data. Output: 2 structured Excel files: • Income Statement (Profit & Loss) • Balance Sheet (Assets & Liabilities) Key Requirements: • ✅ 100% local deployment (privacy + zero API costs) • ✅ Precision is critical (20-30 min runtime is acceptable) • ✅ Agent needs access to tools: read PDF, consult Excel templates, write structured output • ✅ Must handle complex multi-page tables and maintain accounting coherence 💻 My Hardware Setup • GPU: RTX Pro 6000 Blackwell Edition (96GB VRAM) • RAM: 128GB • OS: Linux (Ubuntu 24) 🤔 The Challenge: Context Window Management The main concern is context explosion. A 70-page PDF can easily exceed most model context windows, especially when dealing with: • Dense financial tables • Multi-page data that needs cross-referencing • Need to maintain coherence between Income Statement and Balance Sheet My initial thought: Convert PDF to Markdown using a VLM (like Qwen3-VL-32b) first to make parsing easier, then process with an LLM and an agent framework (like Qwen3 235B). 🔍 Frameworks I’m Considering I’ve been researching several frameworks and would love the community’s input: 1. LangChain DeepAgents 2. Pydantic AI 3. smolagents (HuggingFace) 4. Local Deep Research 5. LangGraph (I know DeepAgents is built on top of LangGraph, so maybe a redundant idea) My questions: 1. Which framework would you recommend for this specific use case (document extraction → structured output)? 2. Is my multi-agent architecture overkill, or is this the right approach for handling 70-page PDFs? 3. Should I preprocess with a VLM to convert PDF→Markdown first, or let the agents work directly with raw PDF text? 4. Any experience with DeepAgents for similar document extraction tasks? Is it mature enough? 5. 
Alternative approaches I’m missing? 🎯 Success Criteria • High precision (this is financial data, errors are costly) • Fully local (no cloud APIs) • Handles complex tables spanning multiple pages • Can validate accounting equations (Assets = Liabilities + Equity) • Reasonable runtime (20-30 -45min per report is fine) Would really appreciate insights from anyone who’s built similar document extraction agents or has experience with these frameworks! Is DeepAgents the right choice, or should I start simpler with smolagents/Pydantic AI and scale up if needed? Thanks in advance! 🙏​​​​​​​​​​​​​​​​
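One success criterion above, validating the accounting identity, can be a small deterministic check that runs after extraction regardless of which agent framework is chosen. A minimal sketch (field names are hypothetical, not tied to any of the listed frameworks):

```python
from dataclasses import dataclass

@dataclass
class BalanceSheet:
    total_assets: float
    total_liabilities: float
    total_equity: float

def accounting_equation_holds(bs: BalanceSheet, tol: float = 0.01) -> bool:
    """Check Assets = Liabilities + Equity, with a small tolerance
    for rounding artifacts in figures extracted from PDF tables."""
    return abs(bs.total_assets - (bs.total_liabilities + bs.total_equity)) <= tol
```

Rejecting or re-prompting on a failed check like this is cheap relative to a 20-45 min per-report budget.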
2025-11-18T19:38:11
https://www.reddit.com/r/LocalLLaMA/comments/1p0lm99/best_framework_for_building_a_local_deep_research/
Severe_Biscotti2349
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0lm99
false
null
t3_1p0lm99
/r/LocalLLaMA/comments/1p0lm99/best_framework_for_building_a_local_deep_research/
false
false
self
2
null
Are Open Weight Models Falling Behind w/ Gemini 3 Pro?
0
I don't know if it's just me, but it feels like open-weight models have really fallen behind in the past month. Gemini 3 Pro topped most of the gold-standard benchmarks (HLE, GPQA, Terminal-Bench, etc.). Depending on which basket you're looking at, none of the open-source models break the top 5 for overall performance. User-driven benchmarks like Design Arena are another interesting case: not too long ago, GLM 4.6 ranked solidly in the overall top 10 along with DeepSeek, and now none of the open-weight models break the top 10, and they only really have a strong presence in 3D design. This might just be a byproduct of how many major labs have put out new endpoints in the past few weeks, but does anyone else feel like open weight is falling behind? Eager to see if GLM 5 drops before the end of the year and proves me wrong https://preview.redd.it/7fq2acoag22g1.png?width=2277&format=png&auto=webp&s=eab5f8c320f5e6d2d3186afa4e2bafd76f7f9eee
2025-11-18T19:35:54
https://www.reddit.com/r/LocalLLaMA/comments/1p0lk2r/are_open_weight_models_falling_behind_w_gemini_3/
Nervous_Blood4346
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0lk2r
false
null
t3_1p0lk2r
/r/LocalLLaMA/comments/1p0lk2r/are_open_weight_models_falling_behind_w_gemini_3/
false
false
https://b.thumbs.redditm…ndLhQEVBrymM.jpg
0
null
Got this assignment from Lovable but I’m not sure what they actually want — any advice?
0
Hey everyone, I’m currently in a hiring process for a technical role at **Lovable**, and they sent me an assignment that I’m honestly not sure how to interpret. Here’s what the task basically says: > The issue is that **the question “Why did this happen?” is extremely vague**, and from the video it’s not completely clear what kind of explanation they expect. I’m not sure whether they want: * a precise technical diagnosis of the issue shown in the video, * a customer-facing explanation with plausible causes and reassurance, * or if they’re mainly evaluating communication, tone, structure, and clarity. Has anyone done similar assignments before? **How would you interpret this task, and what would you include in the response?**
2025-11-18T19:30:19
https://v.redd.it/suqmtrwnh22g1
Numerous-Currency284
v.redd.it
1970-01-01T00:00:00
0
{}
1p0leul
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/suqmtrwnh22g1/DASHPlaylist.mpd?a=1766086233%2CMTdiODQ5YTc4NzMzZTZkODQxZmM4NjE5Mzc0NDE4MGIwNzY1MTY3NDQwNmNlNjBmYzVlZDBmYmYzNWVjNzM5NQ%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/suqmtrwnh22g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/suqmtrwnh22g1/HLSPlaylist.m3u8?a=1766086233%2CNWViYzJiNjBiNTI2ZTZkMTRlMzVkNjJlZDRkZWE0YTI1OTBiNTRmMDY3YjhjMmViMGExZGU1MjE0YWUyOTViMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/suqmtrwnh22g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1254}}
t3_1p0leul
/r/LocalLLaMA/comments/1p0leul/got_this_assignment_from_lovable_but_im_not_sure/
true
false
https://external-preview…d73e27bdf32d6db4
0
{'enabled': False, 'images': [{'id': 'cjZ1YWhzd25oMjJnMdMbPg91L9f0mSCIYGs-_UIYXW3C868GWpPvtPiuuFHA', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/cjZ1YWhzd25oMjJnMdMbPg91L9f0mSCIYGs-_UIYXW3C868GWpPvtPiuuFHA.png?width=108&crop=smart&format=pjpg&auto=webp&s=8260d0f4bbf7c8bf831d4373c8dd4b346a86e029', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/cjZ1YWhzd25oMjJnMdMbPg91L9f0mSCIYGs-_UIYXW3C868GWpPvtPiuuFHA.png?width=216&crop=smart&format=pjpg&auto=webp&s=ba6830187ab3a5367b320ba83c10790e04f69764', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/cjZ1YWhzd25oMjJnMdMbPg91L9f0mSCIYGs-_UIYXW3C868GWpPvtPiuuFHA.png?width=320&crop=smart&format=pjpg&auto=webp&s=162aa79c1c30bb188e49896e70534f6af9ce5f02', 'width': 320}, {'height': 367, 'url': 'https://external-preview.redd.it/cjZ1YWhzd25oMjJnMdMbPg91L9f0mSCIYGs-_UIYXW3C868GWpPvtPiuuFHA.png?width=640&crop=smart&format=pjpg&auto=webp&s=595c9f616d52447fe076e90bff0ca2d0f947f5d9', 'width': 640}, {'height': 551, 'url': 'https://external-preview.redd.it/cjZ1YWhzd25oMjJnMdMbPg91L9f0mSCIYGs-_UIYXW3C868GWpPvtPiuuFHA.png?width=960&crop=smart&format=pjpg&auto=webp&s=9acd22067fb294b77739b1e25f26d01ed90abaf7', 'width': 960}, {'height': 620, 'url': 'https://external-preview.redd.it/cjZ1YWhzd25oMjJnMdMbPg91L9f0mSCIYGs-_UIYXW3C868GWpPvtPiuuFHA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f8a9ac887407da0da5c198fae26c9675d5128d3e', 'width': 1080}], 'source': {'height': 728, 'url': 'https://external-preview.redd.it/cjZ1YWhzd25oMjJnMdMbPg91L9f0mSCIYGs-_UIYXW3C868GWpPvtPiuuFHA.png?format=pjpg&auto=webp&s=9654a10be77dec281d4eafe3bc0ee504ba3b644f', 'width': 1268}, 'variants': {}}]}
I built a native desktop front-end for Ollama that lets you run your LLMs instantly in any app.
0
Hey everyone, I'm the maker of **Typilot**, and I wanted to share it here because this project is entirely built around solving the workflow problem for local LLM users. We all love running models with **Ollama** for privacy and cost savings, but the pain of using it meant either writing scripts or being stuck in the terminal. Typilot acts as a **universal desktop layer** for your local LLMs. It runs cross-platform (Win/Mac/Linux) and lets you activate your local models with a hotkey in *any* application—VS Code, your browser, email, etc. [Using Typilot in whatsapp web](https://reddit.com/link/1p0ldog/video/0tb10c12h22g1/player) # Why Local LLM Users Will Love This: * **0ms Latency Workflow:** Since the model is already running on your system, there are virtually no network delays. It’s the fastest AI access experience possible. * **Model Management:** You can browse, download, and switch between your different **Ollama** models (Llama 3, Mistral, Code Llama, etc.) right from the app's settings, tailoring your AI for code generation, writing, or analysis. * **True Universal Utility:** Use commands like `fix:` for quick debugging, `gen:` for rapid drafting, or `exp:` to explain concepts—all processed privately on your hardware. If you’re already a local LLM enthusiast, this is designed to be the tool that finally makes that privacy-first workflow seamless and productive. My main question for you all is: **What smaller model (under 13B) have you found performs best for general text rewriting and instant grammar fixes when running locally?** Feel free to test and give me feedback about the product! Thanks!
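The `fix:`/`gen:`/`exp:` commands described above follow a simple prefix-dispatch pattern. A hypothetical sketch of that pattern (not Typilot's actual implementation):

```python
def parse_command(text: str, known=("fix", "gen", "exp")) -> tuple[str, str]:
    """Split a 'prefix: payload' string into (command, payload).
    Falls back to a default 'gen' command when no known prefix is present."""
    head, sep, tail = text.partition(":")
    if sep and head.strip().lower() in known:
        return head.strip().lower(), tail.strip()
    return "gen", text.strip()
```

The resolved command would then select a model or system prompt before the payload is sent to the local Ollama endpoint.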
2025-11-18T19:29:01
https://www.reddit.com/r/LocalLLaMA/comments/1p0ldog/i_built_a_native_desktop_frontend_for_ollama_that/
Facilex_zyzz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0ldog
false
null
t3_1p0ldog
/r/LocalLLaMA/comments/1p0ldog/i_built_a_native_desktop_frontend_for_ollama_that/
false
false
self
0
null
Best Cloud GPU / inference option / costs for per hour agentic coding
0
Hey folks, I'm finding Copilot is sometimes quite slow, and I would like to be able to choose models and hosting options instead of paying the large flat fee. I'm part of a software engineering team and we'd like to find a solution. Does anyone have suggestions for GPU cloud hosts that can run modern coding models? I was thinking about Qwen3 Coder: what kind of GPU would be required to run the smaller 30B and the larger 480B parameter models, and are there newer SOTA models that outperform those as well? I have been researching GPU cloud providers and am curious about running our own inference on [https://northflank.com/pricing](https://northflank.com/pricing) or something like that. Do folks think that would take a lot of time to set up, and that the costs would be significantly greater than using an inference service such as [Fireworks.AI](http://Fireworks.AI) or DeepInfra? Thanks, Mark
2025-11-18T19:22:29
https://www.reddit.com/r/LocalLLaMA/comments/1p0l7hw/best_cloud_gpu_inference_option_costs_for_per/
AdSuccessful4905
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0l7hw
false
null
t3_1p0l7hw
/r/LocalLLaMA/comments/1p0l7hw/best_cloud_gpu_inference_option_costs_for_per/
false
false
self
0
{'enabled': False, 'images': [{'id': 'SZAxbmB6O_G-FaBSl9YcXuOKR9eDexJLAOXxG_BM1FQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/SZAxbmB6O_G-FaBSl9YcXuOKR9eDexJLAOXxG_BM1FQ.png?width=108&crop=smart&auto=webp&s=28807a825d7fbe1d90fd979ed2e4d08b16012ea6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/SZAxbmB6O_G-FaBSl9YcXuOKR9eDexJLAOXxG_BM1FQ.png?width=216&crop=smart&auto=webp&s=ed1aef1b4de762737975949563a8ab50df14c080', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/SZAxbmB6O_G-FaBSl9YcXuOKR9eDexJLAOXxG_BM1FQ.png?width=320&crop=smart&auto=webp&s=eca1357a06ee8e682db3d094d753ed75b3570d48', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/SZAxbmB6O_G-FaBSl9YcXuOKR9eDexJLAOXxG_BM1FQ.png?width=640&crop=smart&auto=webp&s=0b470fcc5ae26ba7777b27a915d87c51db032a89', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/SZAxbmB6O_G-FaBSl9YcXuOKR9eDexJLAOXxG_BM1FQ.png?width=960&crop=smart&auto=webp&s=3ca83ce28c708d31a283642ff683bd332fc32784', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/SZAxbmB6O_G-FaBSl9YcXuOKR9eDexJLAOXxG_BM1FQ.png?width=1080&crop=smart&auto=webp&s=18bd8527571833147082064a008d9101ffa29238', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/SZAxbmB6O_G-FaBSl9YcXuOKR9eDexJLAOXxG_BM1FQ.png?auto=webp&s=b4667de091c7d40d7c8c505d9a1cd94f09bdbc15', 'width': 1200}, 'variants': {}}]}
iOS/Android app for communicating with Ollama or LM Studio remotely?
1
Basically I am looking for an app that would connect (via the internet) to my computer/server running LM Studio (or Ollama directly). I know there are plenty of web interfaces that are pretty good (e.g. Open WebUI, AnythingLLM), but I'm curious if there are any native app alternatives.
2025-11-18T19:17:32
https://www.reddit.com/r/LocalLLaMA/comments/1p0l2mz/iosandroid_app_for_communicating_with_ollama_or/
liviuberechet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0l2mz
false
null
t3_1p0l2mz
/r/LocalLLaMA/comments/1p0l2mz/iosandroid_app_for_communicating_with_ollama_or/
false
false
self
1
null
Do you have any good Prompts to test out models?
1
I'd like to test out a couple of models, but my imagination is currently failing me. Do you have any good prompts to test out small and big models? Thank you
2025-11-18T19:01:42
https://www.reddit.com/r/LocalLLaMA/comments/1p0knds/do_you_have_any_good_prompts_to_test_out_models/
Cultural-You-7096
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0knds
false
null
t3_1p0knds
/r/LocalLLaMA/comments/1p0knds/do_you_have_any_good_prompts_to_test_out_models/
false
false
self
1
null
Gemma 4!!!
214
2025-11-18T18:56:50
https://i.redd.it/p1tbzwhqb22g1.png
Namra_7
i.redd.it
1970-01-01T00:00:00
0
{}
1p0kikj
false
null
t3_1p0kikj
/r/LocalLLaMA/comments/1p0kikj/gemma_4/
false
false
default
214
{'enabled': True, 'images': [{'id': 'p1tbzwhqb22g1', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/p1tbzwhqb22g1.png?width=108&crop=smart&auto=webp&s=1e9249a5f3511716ea69514c7223bd238f93a336', 'width': 108}, {'height': 179, 'url': 'https://preview.redd.it/p1tbzwhqb22g1.png?width=216&crop=smart&auto=webp&s=6b391cbe084583b7cfa96f49f2aa860b2443fc99', 'width': 216}, {'height': 265, 'url': 'https://preview.redd.it/p1tbzwhqb22g1.png?width=320&crop=smart&auto=webp&s=6c4ea14e3de11aa1ef12490be3ae55907d99bbf4', 'width': 320}, {'height': 530, 'url': 'https://preview.redd.it/p1tbzwhqb22g1.png?width=640&crop=smart&auto=webp&s=7c5d31a16c68548f0586cb57f320d60a00e9e043', 'width': 640}, {'height': 795, 'url': 'https://preview.redd.it/p1tbzwhqb22g1.png?width=960&crop=smart&auto=webp&s=720ac22609597e5f05acf3e090ccc515c602234a', 'width': 960}, {'height': 895, 'url': 'https://preview.redd.it/p1tbzwhqb22g1.png?width=1080&crop=smart&auto=webp&s=0306179080f7ce43832bb0cb0e00573218adf21d', 'width': 1080}], 'source': {'height': 895, 'url': 'https://preview.redd.it/p1tbzwhqb22g1.png?auto=webp&s=138a9ded29252349b836deb518c839edd15cce19', 'width': 1080}, 'variants': {}}]}
So I asked GPT-5.1 / Claude Sonnet 4.5 / Kimi K2 Thinking to use Slack GIF Skill
0
Prompt: Can you use Slack Skill and create :ship-it: a rocket doing a short takeoff then looping back. First: GPT-5.1 Second: Claude Sonnet 4.5 Third: Kimi K2 Thinking
2025-11-18T18:53:34
https://www.reddit.com/gallery/1p0kffw
ComposerGen
reddit.com
1970-01-01T00:00:00
0
{}
1p0kffw
false
null
t3_1p0kffw
/r/LocalLLaMA/comments/1p0kffw/so_i_asked_gpt51_claude_sonnet_45_kimi_k2/
false
false
https://a.thumbs.redditm…a35Czm1K1Cz0.jpg
0
null
That jump in ARC-AGI-2 score from Gemini 3
67
2025-11-18T18:51:48
https://www.reddit.com/gallery/1p0kdqf
jd_3d
reddit.com
1970-01-01T00:00:00
0
{}
1p0kdqf
false
null
t3_1p0kdqf
/r/LocalLLaMA/comments/1p0kdqf/that_jump_in_arcagi2_score_from_gemini_3/
false
false
https://b.thumbs.redditm…oAHCltnW5c0Y.jpg
67
null
DR Tulu: An open, end-to-end training recipe for long-form deep research
45
# What Ai2 is releasing We’re making available the entirety of our DR Tulu research and training stack under a permissive license. Releasing all of DR Tulu’s components serves three goals. First, it enables reproducibility and transparency: we release our curated prompt datasets, training and evaluation code (including our RLER implementation), and our 8B model checkpoint so others can replicate our results and study how reward functions and tool configurations shape behavior. Second, it provides deployment flexibility—you can run the agent with your own MCP tool stack, infrastructure, and privacy constraints. Third, it supports extensibility: the dr-agent-lib agent library lets you plug in domain-specific tools and retrieval systems without retraining by simply describing new tools to the model. Taken together, these artifacts make DR Tulu the first fully open, end-to-end deep research framework. We encourage you to experiment with different tool configurations, audit the agent’s research steps, and test how DR Tulu handles your domain's research questions. If you find issues or ways to improve the approach, we'd love to hear about them. 📚 Blog: [https://allenai.org/blog/dr-tulu](https://allenai.org/blog/dr-tulu) ✏️ Paper: [http://allenai.org/papers/drtulu](http://allenai.org/papers/drtulu) 💻 Models: [https://huggingface.co/collections/rl-research/dr-tulu](https://huggingface.co/collections/rl-research/dr-tulu) ⌨️ Code: [https://github.com/rlresearch/DR-Tulu](https://github.com/rlresearch/DR-Tulu)
2025-11-18T18:51:23
https://i.redd.it/6z12rgxba22g1.png
ai2_official
i.redd.it
1970-01-01T00:00:00
0
{}
1p0kdcc
false
null
t3_1p0kdcc
/r/LocalLLaMA/comments/1p0kdcc/dr_tulu_an_open_endtoend_training_recipe_for/
false
false
default
45
{'enabled': True, 'images': [{'id': '6z12rgxba22g1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/6z12rgxba22g1.png?width=108&crop=smart&auto=webp&s=96389cccdb904c3c548ac9102a11e45a9c820fa7', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/6z12rgxba22g1.png?width=216&crop=smart&auto=webp&s=0518d81e9c01363310c5307f56ee39bdac4c0308', 'width': 216}, {'height': 153, 'url': 'https://preview.redd.it/6z12rgxba22g1.png?width=320&crop=smart&auto=webp&s=97e262941caffc7de1495f83917426ce91ad12cd', 'width': 320}, {'height': 307, 'url': 'https://preview.redd.it/6z12rgxba22g1.png?width=640&crop=smart&auto=webp&s=44824e149eda9e20a1c7b45b09ec52f394824e96', 'width': 640}, {'height': 461, 'url': 'https://preview.redd.it/6z12rgxba22g1.png?width=960&crop=smart&auto=webp&s=7c632005720182922ee15ff5d43d33ac780d1a03', 'width': 960}, {'height': 518, 'url': 'https://preview.redd.it/6z12rgxba22g1.png?width=1080&crop=smart&auto=webp&s=1a029c806941e22d223f299219a07f905f6f7f43', 'width': 1080}], 'source': {'height': 596, 'url': 'https://preview.redd.it/6z12rgxba22g1.png?auto=webp&s=97291031634a3a991ed9a9574d4a336d4dcf61cb', 'width': 1241}, 'variants': {}}]}
I built a native desktop front-end for Ollama that lets you run your LLMs (Llama 3, Mistral, etc.) instantly in any app.
0
Hey everyone, I'm the maker of **Typilot**, and I wanted to share it here because this project is entirely built around solving the workflow problem for local LLM users. We all love running models with **Ollama** for privacy and cost savings, but the pain of using it meant either writing scripts or being stuck in the terminal. Typilot acts as a **universal desktop layer** for your local LLMs. It runs cross-platform (Win/Mac/Linux) and lets you activate your local models with a hotkey in *any* application—VS Code, your browser, email, etc. # Why this matters for the Local LLM community: * **0ms Latency Workflow:** Since the model is already running on your system, there are virtually no network delays. It’s the fastest AI access experience possible. * **Model Management:** You can browse, download, and switch between your different **Ollama** models (Llama 3, Mistral, Code Llama, etc.) right from the app's settings, tailoring your AI for code generation, writing, or analysis. * **True Universal Utility:** Use commands like `fix:` for quick debugging, `gen:` for rapid drafting, or `exp:` to explain concepts—all processed privately on your hardware. If you’re already a local LLM enthusiast, this is designed to be the tool that finally makes that privacy-first workflow seamless and productive. My main question for you all is: **What smaller model (under 13B) have you found performs best for general text rewriting and instant grammar fixes when running locally?** You can see the setup and try the app here: [https://typilot.com/](https://typilot.com/) Thanks!
2025-11-18T18:39:13
https://www.reddit.com/r/LocalLLaMA/comments/1p0k16h/i_built_a_native_desktop_frontend_for_ollama_that/
Facilex_zyzz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0k16h
false
null
t3_1p0k16h
/r/LocalLLaMA/comments/1p0k16h/i_built_a_native_desktop_frontend_for_ollama_that/
false
false
self
0
null
Give it a month and some Chinese lab will drop a model that blows past these benchmarks.
0
2025-11-18T18:30:18
https://i.redd.it/w6zgwxo4622g1.png
Full_Piano_3448
i.redd.it
1970-01-01T00:00:00
0
{}
1p0jsgz
false
null
t3_1p0jsgz
/r/LocalLLaMA/comments/1p0jsgz/give_it_a_month_and_some_chinese_lab_will_drop_a/
false
false
default
0
{'enabled': True, 'images': [{'id': 'w6zgwxo4622g1', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/w6zgwxo4622g1.png?width=108&crop=smart&auto=webp&s=6d14980c7e5c63dc05f333702d6b3007e97c8eb3', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/w6zgwxo4622g1.png?width=216&crop=smart&auto=webp&s=a3ad7212185649a690f0628e54af9047148c207e', 'width': 216}, {'height': 284, 'url': 'https://preview.redd.it/w6zgwxo4622g1.png?width=320&crop=smart&auto=webp&s=33ea98a25fd9b751350915a72f6c6f83c6e70e9c', 'width': 320}, {'height': 569, 'url': 'https://preview.redd.it/w6zgwxo4622g1.png?width=640&crop=smart&auto=webp&s=a990a1e31beac9f21b9f2d77867990348cc0b16f', 'width': 640}, {'height': 853, 'url': 'https://preview.redd.it/w6zgwxo4622g1.png?width=960&crop=smart&auto=webp&s=4f198c6460b8e477db7bba3bcac0c1d28c3116ba', 'width': 960}, {'height': 960, 'url': 'https://preview.redd.it/w6zgwxo4622g1.png?width=1080&crop=smart&auto=webp&s=4ea84aee629bb014fc773a481c83f4d7b7158f57', 'width': 1080}], 'source': {'height': 1011, 'url': 'https://preview.redd.it/w6zgwxo4622g1.png?auto=webp&s=1c9dc8def82f4c31e2082f8bd0f26352f867424d', 'width': 1137}, 'variants': {}}]}
Deterministic Audit Log of a Synthetic Jailbreak Attempt
0
I’ve been building a system that treats AI safety like a real engineering problem, not vibes or heuristics. Here’s my architecture: every output goes through metrics, logic, and audit. The result is deterministic, logged, and fully replayable. This is a synthetic example showing how my program measures the event with real metrics, routes it through formal logic, blocks it, and writes a replayable, cryptographically chained audit record. It works for AI, automation, workflows, finance, ops, robotics, basically anything that emits decisions. Nothing here reveals internal data, rules, or models, just my structure.
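A "cryptographically chained audit record" is typically a hash chain: each record commits to the previous record's hash, so tampering or reordering is detectable on replay. A minimal sketch of the construction (not the poster's actual system):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel prev_hash for the first record

def append_record(log: list, event: dict) -> list:
    """Append an event, chaining it to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"event": event, "prev_hash": prev_hash}
    # Canonical serialization (sorted keys) so the hash is deterministic
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify_chain(log: list) -> bool:
    """Replay the log; any tampered, dropped, or reordered record breaks the chain."""
    prev_hash = GENESIS
    for rec in log:
        body = {"event": rec["event"], "prev_hash": rec["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True
```

Determinism here comes from canonical serialization: the same sequence of events always produces the same chain, which is what makes the log replayable.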
2025-11-18T18:29:15
https://www.reddit.com/gallery/1p0jrg6
Sad_Perception_1685
reddit.com
1970-01-01T00:00:00
0
{}
1p0jrg6
false
null
t3_1p0jrg6
/r/LocalLLaMA/comments/1p0jrg6/deterministic_audit_log_of_a_synthetic_jailbreak/
false
false
https://b.thumbs.redditm…fQk6ClFcP-5g.jpg
0
null
Mistral removing ton of old models from API (preparing for a new launch?)
143
They are going to be removing 9 models (the screenshot is missing one) from their API at the end of this month. So I wonder if that means they are preparing to release something in early December? I sure hope I finally get Nemo 2.0 or something... (it's been over a year since that released). Source: [https://docs.mistral.ai/getting-started/models#legacy-models](https://docs.mistral.ai/getting-started/models#legacy-models)
2025-11-18T18:28:50
https://i.redd.it/tg4zaa7b622g1.png
mpasila
i.redd.it
1970-01-01T00:00:00
0
{}
1p0jr1f
false
null
t3_1p0jr1f
/r/LocalLLaMA/comments/1p0jr1f/mistral_removing_ton_of_old_models_from_api/
false
false
default
143
{'enabled': True, 'images': [{'id': 'tg4zaa7b622g1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/tg4zaa7b622g1.png?width=108&crop=smart&auto=webp&s=6f72c75e553db32c22a759d7c5baeedf328163fa', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/tg4zaa7b622g1.png?width=216&crop=smart&auto=webp&s=54b7af3b2275be0ac44e0e8577dc6f4c4746ca0c', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/tg4zaa7b622g1.png?width=320&crop=smart&auto=webp&s=e59ea1dc16237d79d98c77786a80f6915f8e8427', 'width': 320}, {'height': 415, 'url': 'https://preview.redd.it/tg4zaa7b622g1.png?width=640&crop=smart&auto=webp&s=879c9f3922693c16a694f6bce7604bb1dd61da54', 'width': 640}], 'source': {'height': 615, 'url': 'https://preview.redd.it/tg4zaa7b622g1.png?auto=webp&s=7cd748660a1ec98334fca1b42e46d1b2aa395602', 'width': 948}, 'variants': {}}]}
Momentum Model
27
*Trained on GLM, Qwen, Llama and other models. Amazing results!*

https://reddit.com/link/1p0jkag/video/2biv1urb522g1/player

Response from the CEO on Discord below, for those who say it's just GLM.

Official Statement from the CEO of Momentum AI

Dear Community,

In recent days, there has been speculation online suggesting that Momentum is merely a hosted or proxied version of Zhipu AI's GLM-4.6 model, potentially running on Cerebras infrastructure. As CEO, I want to address this directly and set the record straight with full transparency.

To be absolutely clear: Momentum is not GLM-4.6. It is not a hosted instance or proxy of GLM-4.6 (on Cerebras or anywhere else). Momentum is a fully independent large language model trained from scratch by our team.

Some key facts to clarify the situation: GLM-4.6 is available through Zhipu AI's official API and select third-party providers. Importantly, GLM-4.6 is not available via Cerebras' public API for general use; Cerebras does not offer GLM-4.6 inference to external customers. Momentum has no affiliation, partnership, or technical integration with Zhipu AI or Cerebras. We do not route any requests through their services or infrastructure.

Momentum was trained using a diverse mixture of high-quality open-source models (including Qwen, the GLM series, Llama/Ollama variants, and others) combined with synthetic data and distillation from closed-source outputs (e.g., Claude). This is a common, transparent practice in the open-source AI ecosystem to achieve SOTA results. While our training process responsibly incorporates elements from leading open-source models like the GLM series, Momentum has evolved far beyond its foundational data. Independent evaluations and real-world usage show that Momentum's coding capabilities now consistently exceed those of GLM-4.6, particularly in complex, multi-step software engineering tasks, agentic workflows, and edge-case debugging.
In early releases, Momentum occasionally exhibited minor training artifacts, such as rarely identifying itself as related to GLM or echoing phrasing patterns from its data mixture. This "cross-contamination" is a well-known side effect when aligning heavily on certain open-source bases (in our case, we leaned more toward the GLM family during parts of training). We quickly identified and fully resolved this in subsequent updates; it no longer happens.

This phenomenon is far from unique. For example, early DeepSeek models would sometimes respond as if they were OpenAI's GPT due to heavy exposure to OpenAI-style data during training.

We have always been open about our training approach and have nothing to hide. To provide even greater clarity, we will soon publish a dedicated technical webpage on [momentum.ai](http://momentum.ai) detailing our full training stack, data sources, alignment techniques, and how we handle and mitigate contamination artifacts.

Thank you for your passion, feedback, and support. We're incredibly proud of the independent model we've built, and we're committed to continued transparency as we push open AI forward.

Best regards,
Hasan Nawaz
CEO & Founder, Momentum AI
2025-11-18T18:21:50
https://www.reddit.com/r/LocalLLaMA/comments/1p0jkag/momentum_model/
One_Statement_5725
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0jkag
false
null
t3_1p0jkag
/r/LocalLLaMA/comments/1p0jkag/momentum_model/
false
false
self
27
null
[HELP] OpenWebUI folder sharing
2
Hey everyone, I’m new to this so bear with me. I’m running OWUI (for a team of 10 people) on Azure Web App Service (Web App for Containers) + persistent storage (Azure Files) + an OpenRouter API key for the models. My problem is: I want a shared workspace in OWUI where all users see the same content. I want the same chats, folders, documents and searches to appear across all 10 users' sessions/accounts. Is that possible? If so, how can I do that?
2025-11-18T18:08:36
https://www.reddit.com/r/LocalLLaMA/comments/1p0j79h/help_openwebui_folder_sharing/
Here-for-awhile
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0j79h
false
null
t3_1p0j79h
/r/LocalLLaMA/comments/1p0j79h/help_openwebui_folder_sharing/
false
false
self
2
null
Alternatives to Aider for CLI development?
1
I am curious if anyone here knows of any alternatives to Aider for CLI development that work well for you? One of the things I love about Aider is the tight control over the context window and the non-agent-based workflow. I use other tools like Gemini CLI for agents, but I find they blow through tokens, so I like to use Aider to generate plans for the agent CLI tooling, evaluate the code base with different models, and generate issue lists that can then be used by agent-based tools. I just like having the control that a CLI tool like Aider gives me.

My problem is that, while I really like Aider, it has a lot of issues, and the maintainer has largely stepped aside to work on other projects, refuses to take on co-maintainers while the issues and pull requests stack up, and is to a large degree unresponsive to the community. So the project has stagnated and is likely to stay that way for the foreseeable future. I don't blame the maintainer, but I have learned that an open source project with a dominant maintainer who refuses to open up community development is not sustainable. So after using Aider as part of my developer workflow for more than a year, I am looking to move on now.

I have looked around but only see CLI agent tools, which is not what I am looking for. I use those as well when needed, but for this use case I want something I can give files or directories to include, plus a chat history, and have it respond to my instructions to make the edits I want, as I am an experienced developer who doesn't want to blow through tokens on specific tasks. If it supports MCP tools, that is great, but if it doesn't, I don't really care. What I care about is an active developer community, and a tool that is not solely trying to be an agent manager, but instead a tool for human developers who know what they want and want to tightly control the requests to the AI models.

Know of anything out there, or am I going to have to fork the project for myself or build my own?
2025-11-18T17:59:54
https://www.reddit.com/r/LocalLLaMA/comments/1p0iyjp/alternatives_to_aider_for_cli_development/
awebb78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0iyjp
false
null
t3_1p0iyjp
/r/LocalLLaMA/comments/1p0iyjp/alternatives_to_aider_for_cli_development/
false
false
self
1
null
Hardware requirements to get into Local LLMs
1
This is perhaps a silly question, but I've genuinely not been able to find a comprehensive thread like this here, so I hope y'all will indulge me (if not for my personal sake, then for those who will inevitably stumble onto this thread in the future looking for the same answers).

When it comes to puter habits I'm first and foremost a gamer, and I run a high-end gaming setup (RTX 5090, 9800X3D, 64 GB DDR5) that was obviously never built with LLM work in mind but is still pretty much the most powerful consumer-grade tech you can get. What I wonder is: is this enough to dabble in a little local LLM work, or should one necessarily have a specifically LLM-attuned GPU? So far the best I've been able to do was launch gpt-oss:120b, but it runs way slower and does not produce results nearly as good as GPT-5, which I pay for monthly anyway. So should I maybe just not bother and use that?

TL;DR: with my setup and a just-slightly-above-average-normie understanding of LLMs and IT in general, will I be able to get anything cooler or more interesting than just straight up using GPT-5 for my LLM needs and my PC for what it was meant to do (launch vanilla Minecraft at 1500 fps)?
2025-11-18T17:53:51
https://www.reddit.com/r/LocalLLaMA/comments/1p0isn5/hardware_requirements_to_get_into_local_llms/
back_and_colls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0isn5
false
null
t3_1p0isn5
/r/LocalLLaMA/comments/1p0isn5/hardware_requirements_to_get_into_local_llms/
false
false
self
1
null
LibreChat first impressions
6
I'm setting up an instance for about five users on a cheap virtual private server. I'm using Mistral's API, but from the point of view of the app it's a "custom endpoint", so I suppose this will apply to other non-OpenAI vendors as well. First of all, LibreChat was easy to get running. Their guide on `docker compose` worked perfectly and it was quick to test things both locally and on an Ubuntu server. They ship an example config and docker compose override file, which is great. The documentation also had clear examples of how to add a user from the command line. The configuration process itself was a confusing experience, because the settings are spread between environment variables and `librechat.yaml`. For example, I wanted to configure a custom model. I had to add an element to the `endpoints: custom` list in the YAML, which was nicely signposted with commented-out sections. But to configure which models are shown in the UI (I wanted to hide unused ones), there's a list stored as a string in the `ENDPOINTS` env var. It took almost an hour to figure that out... Also, the app starts even with invalid YAML in the config. Once I got the Mistral models running, I could chat and also upload images. Both work fine. Image upload was a bit clunky because the web UI always asks whether you'd like to locally OCR the image or "send it to the provider". Speaking of the web UI, it works fine. Its model selector has a nice search, and side panels can be opened and closed. There's support for temporary chats, but they can't be made the default (Kagi Assistant does this). Custom system prompts and sampling parameters must be added via "agents". In fact, I had to go back and set that same env var to `ENDPOINTS=custom,agents` to even be able to change the system prompt. This seemed to work OK, and apparently you can also share prompts between users. I had a quick test with the built-in RAG but couldn't get it to work.
The docs helpfully showed how to change compose to run a different image, but I had to piece together myself that another env var (`OLLAMA_BASE_URL=http://host.docker.internal:11434`) had to be added for it to actually run. This resulted in "400 status code (no body)" errors somewhere in the stack, an unresolved issue already reported four months ago: https://github.com/danny-avila/LibreChat/discussions/8389 https://github.com/danny-avila/LibreChat/discussions/7847 I'm not 100% convinced of the quality of the engineering in this project (it uses MongoDB, after all) but I'll continue trying to get the RAG to work before making my final judgement.
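For reference, a minimal sketch of the configuration split that cost me the hour. The key names below are from my setup and may differ in other LibreChat versions; treat them as illustrative, not authoritative:

```yaml
# librechat.yaml -- the endpoint definition itself lives here
endpoints:
  custom:
    - name: "Mistral"
      apiKey: "${MISTRAL_API_KEY}"
      baseURL: "https://api.mistral.ai/v1"
      models:
        default: ["mistral-small-latest"]
        fetch: false

# .env -- which endpoint types the UI exposes is set separately here:
# ENDPOINTS=custom,agents
```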
2025-11-18T17:51:38
https://www.reddit.com/r/LocalLLaMA/comments/1p0iqgw/librechat_first_impressions/
DHasselhoff77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0iqgw
false
null
t3_1p0iqgw
/r/LocalLLaMA/comments/1p0iqgw/librechat_first_impressions/
false
false
self
6
null
Google Antigravity is a cursor clone
378
If you love vibe coding. [https://antigravity.google/](https://antigravity.google/) Supports models other than gemini.
2025-11-18T17:36:03
https://www.reddit.com/r/LocalLLaMA/comments/1p0iayb/google_antigravity_is_a_cursor_clone/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0iayb
false
null
t3_1p0iayb
/r/LocalLLaMA/comments/1p0iayb/google_antigravity_is_a_cursor_clone/
false
false
self
378
{'enabled': False, 'images': [{'id': 'WXefZzZ0I_XQ4y7ri4FCtZNZIU0NuuwchUW9JjF2f7Q', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WXefZzZ0I_XQ4y7ri4FCtZNZIU0NuuwchUW9JjF2f7Q.png?width=108&crop=smart&auto=webp&s=5e73df63b8b2a3b3d56c3cf3b3a82c4e5d7488b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/WXefZzZ0I_XQ4y7ri4FCtZNZIU0NuuwchUW9JjF2f7Q.png?width=216&crop=smart&auto=webp&s=eab0b986148c9ef2ab4041d896f8c27a7432868f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/WXefZzZ0I_XQ4y7ri4FCtZNZIU0NuuwchUW9JjF2f7Q.png?width=320&crop=smart&auto=webp&s=72e72d32edcd36d1740b18ab798d8940d6fee355', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/WXefZzZ0I_XQ4y7ri4FCtZNZIU0NuuwchUW9JjF2f7Q.png?width=640&crop=smart&auto=webp&s=87a055abf92235e6770e17cf37bb36f1ad8d18c0', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/WXefZzZ0I_XQ4y7ri4FCtZNZIU0NuuwchUW9JjF2f7Q.png?width=960&crop=smart&auto=webp&s=5af2c34ed33df469792b804c7dd532992f387fca', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/WXefZzZ0I_XQ4y7ri4FCtZNZIU0NuuwchUW9JjF2f7Q.png?width=1080&crop=smart&auto=webp&s=d82f07fe526717172d68055c09c0d74e16548832', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/WXefZzZ0I_XQ4y7ri4FCtZNZIU0NuuwchUW9JjF2f7Q.png?auto=webp&s=25f675a45ef96cb6a4c21b3b9fe640432a26c437', 'width': 1200}, 'variants': {}}]}
Buy for me! Budget is $2500. With my budget what would you buy?
0
Looking to do a lot of projects for at-home and professional use. SWE student in my last semester, going into a Masters next year; then a PhD is the goal. Want to future-proof for at least the next 3 years.
2025-11-18T17:35:38
https://www.reddit.com/r/LocalLLaMA/comments/1p0iaiz/buy_for_me_budget_is_2500_with_my_budget_what/
Dull-Solid-5104
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0iaiz
false
null
t3_1p0iaiz
/r/LocalLLaMA/comments/1p0iaiz/buy_for_me_budget_is_2500_with_my_budget_what/
false
false
self
0
null
Hardware Purchase Help?
1
I'm in the process of putting together an LLM server that will double as a Geth node for private blockchain shenanigans. Questions: 1. What am I missing from my hardware? 2. What GPUs should I buy? (I'm leaning towards starting with two RTX 2000E 16gb) List of hardware: Motherboard: ASUS X99-E WS/USB 3.1, LGA 2011-v3, Intel Motherboard CPU: Intel Core i7-6950X SR2PA 3.00GHz 25MB 10-Core LGA2011-3 CPU Cooling: Noctua NH-D15 CPU Cooler with 2x NF-A15 RAM: 64gb Ram (8x 8gig sticks) SSD: Crucial P5 Plus 2TB M.2 NVMe Internal SSD PSU: Corsair HX1200 1200W 80+ Platinum Certified Chassis: Rosewill RSV-R4100U 4U Server Rackmount Case I haven't purchased the GPUs yet. I want to be able to expand to a more powerful system using the parts I've purchased. I've been leaning towards the RTX 2000E for its single-slot form factor. The chassis has solid built-in cooling.
2025-11-18T17:24:59
https://www.reddit.com/r/LocalLLaMA/comments/1p0i07x/hardware_purchase_help/
Blotsy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0i07x
false
null
t3_1p0i07x
/r/LocalLLaMA/comments/1p0i07x/hardware_purchase_help/
false
false
self
1
null
Non Chinese open vlms
0
Hi everyone! I have a very classic use case, which is document-to-JSON on scanned documents of many different types (sending an image and receiving a formatted JSON). My constraints are open-source models up to 10B parameters. I typically LoRA fine-tune models on hundreds to thousands of files of custom datasets to get good-quality, domain-specific models with my expected JSON schema. I then use vLLM for inference with constrained decoding. I have had some great results with Qwen models, which have been my go-to for a while for these kinds of tasks. However, my company recently told me a lot of customers don't want Chinese models at all (even if open and run on our own servers, which makes no sense to me, but I'm not in sales after all). After checking the Hugging Face open VLM leaderboard, basically all open-source models at this size are Chinese, which makes them a no-go for me. So, have you had any successful experiences with non-Chinese open models for similar cases? So far the closest in quality that I got was Gemma 3 4b it. I also tried Phi-4 Multimodal but it was pretty much terrible. In the past, on other projects, I also had good results with Donut, but it doesn't generalize well at all compared to modern VLMs. Thanks in advance for any tips/advice!
2025-11-18T17:24:45
https://www.reddit.com/r/LocalLLaMA/comments/1p0hzz9/non_chinese_open_vlms/
Lerdrit1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0hzz9
false
null
t3_1p0hzz9
/r/LocalLLaMA/comments/1p0hzz9/non_chinese_open_vlms/
false
false
self
0
null
rtx 5080 or 5070ti & 3060 dual.
1
A 5080, or a 5070 Ti plus a 3060 (maybe a 3090; I'll check my budget when the time comes)? Which setup is more effective? I'm a newbie and need help deciding which option is good for LLMs.
2025-11-18T17:21:59
https://www.reddit.com/r/LocalLLaMA/comments/1p0hxbk/rtx_5080_or_5070ti_3060_dual/
Familiar_Scientist95
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0hxbk
false
null
t3_1p0hxbk
/r/LocalLLaMA/comments/1p0hxbk/rtx_5080_or_5070ti_3060_dual/
false
false
self
1
null
Building an open-source enterprise middleware over flo-ai
0
We have been building flo-ai for a while now. You can check out our repo and maybe give us a star @ [https://github.com/rootflo/flo-ai](https://github.com/rootflo/flo-ai) We have serviced many clients using the library and its functionalities. Now we are planning to further enhance the framework and build an open-source platform around it. At its core, we are building a middleware that connects flo-ai to different backends and services. We plan to then build agents over this middleware and expose them as APIs, which will then be used to build internal applications for enterprises. We are gonna publish a proposal README soon, but any suggestions from this community can really help us plan the platform better. Thanks!
2025-11-18T16:52:06
https://www.reddit.com/r/LocalLLaMA/comments/1p0h3mi/building_an_opensource_enterprise_middleware_over/
Traditional-Let-856
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0h3mi
false
null
t3_1p0h3mi
/r/LocalLLaMA/comments/1p0h3mi/building_an_opensource_enterprise_middleware_over/
false
false
self
0
{'enabled': False, 'images': [{'id': '0uuYfE1ODoWsUWiH-ntK2oX56vBFRWqhl3UIsixKwWE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0uuYfE1ODoWsUWiH-ntK2oX56vBFRWqhl3UIsixKwWE.png?width=108&crop=smart&auto=webp&s=bcd65ecb320deca3be47e27ba701669d8c05f8bb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0uuYfE1ODoWsUWiH-ntK2oX56vBFRWqhl3UIsixKwWE.png?width=216&crop=smart&auto=webp&s=2d68628aabff9a3fd4ac37ae7740af000fe42024', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0uuYfE1ODoWsUWiH-ntK2oX56vBFRWqhl3UIsixKwWE.png?width=320&crop=smart&auto=webp&s=0ae003e62815aa9d02014fb3db645ad7ff7c562c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0uuYfE1ODoWsUWiH-ntK2oX56vBFRWqhl3UIsixKwWE.png?width=640&crop=smart&auto=webp&s=e93d28c9ee762f682d182fc0a03bdc18afad3e09', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0uuYfE1ODoWsUWiH-ntK2oX56vBFRWqhl3UIsixKwWE.png?width=960&crop=smart&auto=webp&s=a1b9b66bea8da679ce82a58e3b06528ceaefb8ec', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0uuYfE1ODoWsUWiH-ntK2oX56vBFRWqhl3UIsixKwWE.png?width=1080&crop=smart&auto=webp&s=a6a234e6d046fbfb5ba11555706d2c672c439e7f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0uuYfE1ODoWsUWiH-ntK2oX56vBFRWqhl3UIsixKwWE.png?auto=webp&s=2f104bcc30b70e381c2cff846e742400efedb810', 'width': 1200}, 'variants': {}}]}
I know this might be a strange thing to ask…
1
[removed]
2025-11-18T16:50:47
https://www.reddit.com/r/LocalLLaMA/comments/1p0h2e3/i_know_this_might_be_a_strange_thing_to_ask/
No_Dot9595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0h2e3
false
null
t3_1p0h2e3
/r/LocalLLaMA/comments/1p0h2e3/i_know_this_might_be_a_strange_thing_to_ask/
false
false
self
1
null
Benchmarked JSON vs TOON for AI reasoners — 40–80% token savings. Real numbers inside.
0
I've been experimenting with token-efficient data encoding formats for LLM workflows, and I benchmarked JSON vs TOON using three different context types: 1. Prospect metadata 2. Deal metadata with nested stakeholders 3. Email generation context. Here are the exact results from running the benchmark script: Prospect context: JSON 387 chars, TOON 188 chars → 51% reduction. Deal context: JSON 392 chars, TOON 88 chars → 78% reduction. Email context: JSON 239 chars, TOON 131 chars → 46% reduction. Total savings across these samples: ~60%, measured in characters (actual token savings will vary with the tokenizer). This surprised me because the structures were totally different (flat, nested, mixed); TOON still consistently cut the size almost in half or better. Anyone else experimenting with non-JSON formats for LLM reasoning loops? Would love to compare notes. (If anyone wants the benchmark script, I'll share it.)
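Not the exact benchmark script, but a minimal sketch of the shape of the comparison. `toon_like` below only mimics TOON's tabular "declare fields once, then one row per line" idea, not the full spec:

```python
import json

# Toy "deal context": a uniform list of stakeholders, the shape where
# tabular encodings save the most over JSON's repeated keys.
stakeholders = [
    {"name": "Ana", "role": "buyer", "influence": "high"},
    {"name": "Raj", "role": "legal", "influence": "low"},
]

def toon_like(rows, key):
    # Declare the field names once in a header, then emit one
    # comma-separated line per row (a sketch, not a full TOON encoder).
    fields = list(rows[0])
    lines = [f"{key}[{len(rows)}]{{{','.join(fields)}}}:"]
    lines += ["  " + ",".join(str(row[f]) for f in fields) for row in rows]
    return "\n".join(lines)

as_json = json.dumps({"stakeholders": stakeholders})
as_toon = toon_like(stakeholders, "stakeholders")
print(f"JSON: {len(as_json)} chars, TOON-ish: {len(as_toon)} chars")
```

The savings scale with how many rows share the same keys, which is why the nested "deal" context saw the biggest reduction.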
2025-11-18T16:48:16
https://www.reddit.com/r/LocalLLaMA/comments/1p0gzz9/benchmarked_json_vs_toon_for_ai_reasoners_4080/
Least-Barracuda-2793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0gzz9
false
null
t3_1p0gzz9
/r/LocalLLaMA/comments/1p0gzz9/benchmarked_json_vs_toon_for_ai_reasoners_4080/
false
false
self
0
null
I know this might be a strange thing to ask…
1
[removed]
2025-11-18T16:47:28
https://www.reddit.com/r/LocalLLaMA/comments/1p0gz6y/i_know_this_might_be_a_strange_thing_to_ask/
No_Dot9595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0gz6y
false
null
t3_1p0gz6y
/r/LocalLLaMA/comments/1p0gz6y/i_know_this_might_be_a_strange_thing_to_ask/
false
false
self
1
null
Gemini 3 is launched
996
2025-11-18T16:31:01
https://blog.google/products/gemini/gemini-3/#note-from-ceo
Several-Republic-609
blog.google
1970-01-01T00:00:00
0
{}
1p0gjcu
false
null
t3_1p0gjcu
/r/LocalLLaMA/comments/1p0gjcu/gemini_3_is_launched/
false
false
default
996
{'enabled': False, 'images': [{'id': 'Jcgyato32sPSUDLsqQhcsyfnhHKEryk97hJ_EjIMDyU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Jcgyato32sPSUDLsqQhcsyfnhHKEryk97hJ_EjIMDyU.jpeg?width=108&crop=smart&auto=webp&s=16645bed3b1b3f8904c6b103ff8a20b1ea4d3664', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Jcgyato32sPSUDLsqQhcsyfnhHKEryk97hJ_EjIMDyU.jpeg?width=216&crop=smart&auto=webp&s=36ade6bb8e0b1ea1cc445f6efd8fe3c93d573c57', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/Jcgyato32sPSUDLsqQhcsyfnhHKEryk97hJ_EjIMDyU.jpeg?width=320&crop=smart&auto=webp&s=1787354bd57a89895bf7eccd85f092a407b70d31', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/Jcgyato32sPSUDLsqQhcsyfnhHKEryk97hJ_EjIMDyU.jpeg?width=640&crop=smart&auto=webp&s=dc3edcd8902e26525ff2ad02160747ab3d46316e', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/Jcgyato32sPSUDLsqQhcsyfnhHKEryk97hJ_EjIMDyU.jpeg?width=960&crop=smart&auto=webp&s=93d9019ce00ad0d7ebbdd0dacbaaf35e926550ee', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Jcgyato32sPSUDLsqQhcsyfnhHKEryk97hJ_EjIMDyU.jpeg?width=1080&crop=smart&auto=webp&s=f7fe0e646be72b9a836902b74c5c8fdbd93afa96', 'width': 1080}], 'source': {'height': 731, 'url': 'https://external-preview.redd.it/Jcgyato32sPSUDLsqQhcsyfnhHKEryk97hJ_EjIMDyU.jpeg?auto=webp&s=eea4ec0a00cfff2ab5a1072973f5225222048740', 'width': 1300}, 'variants': {}}]}
Best Local Model Closest to GPT5?
0
In your guys' opinion, what's the closest model to GPT-5 that you can run locally? Looking for really good reasoning, good web searching/analyzing, and good RAG. Also, if you happen to know from personal experience what kind of firepower you need for that, please let me know. Thanks!
2025-11-18T16:29:09
https://i.redd.it/7u0d3cr5l12g1.jpeg
MintiaBreeze1
i.redd.it
1970-01-01T00:00:00
0
{}
1p0ghl7
false
null
t3_1p0ghl7
/r/LocalLLaMA/comments/1p0ghl7/best_local_model_closest_to_gpt5/
false
false
default
0
{'enabled': True, 'images': [{'id': '7u0d3cr5l12g1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/7u0d3cr5l12g1.jpeg?width=108&crop=smart&auto=webp&s=effccc2527f41adbd7163894cea3ac8e61447eb6', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/7u0d3cr5l12g1.jpeg?width=216&crop=smart&auto=webp&s=635304492036fdcec4bb331df79b8ba807e9baef', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/7u0d3cr5l12g1.jpeg?width=320&crop=smart&auto=webp&s=7ccdf83ef85862e2d7c4438749aaf7e9b0271371', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/7u0d3cr5l12g1.jpeg?width=640&crop=smart&auto=webp&s=2d956a0c7f88b67c838d04591e01a14eac681ca0', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/7u0d3cr5l12g1.jpeg?width=960&crop=smart&auto=webp&s=1029af301bf55bac99ac2342a0a7dcac358325d3', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/7u0d3cr5l12g1.jpeg?width=1080&crop=smart&auto=webp&s=8dbbb0b5d64294520e7854d74d5427bf5ecb7c1c', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/7u0d3cr5l12g1.jpeg?auto=webp&s=2d35cce194af6e3c284fd562fa1916a4e690dbf2', 'width': 1080}, 'variants': {}}]}
About that person who is s*icidal over a possible OpenAI data breach
1
[removed]
2025-11-18T16:24:52
https://www.reddit.com/r/LocalLLaMA/comments/1p0gddp/about_that_person_who_is_sicidal_over_a_possible/
Cool-Current-134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0gddp
false
null
t3_1p0gddp
/r/LocalLLaMA/comments/1p0gddp/about_that_person_who_is_sicidal_over_a_possible/
false
false
self
1
null
About that person who is worried about an OpenAI data breach.
1
[removed]
2025-11-18T15:56:18
https://www.reddit.com/r/LocalLLaMA/comments/1p0fl9p/about_that_person_who_is_worried_about_an_openai/
Cool-Current-134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0fl9p
false
null
t3_1p0fl9p
/r/LocalLLaMA/comments/1p0fl9p/about_that_person_who_is_worried_about_an_openai/
false
false
self
1
null
Curiosity is All You Need
12
2025-11-18T15:47:46
https://arxiv.org/abs/2511.10395
abdouhlili
arxiv.org
1970-01-01T00:00:00
0
{}
1p0fd8i
false
null
t3_1p0fd8i
/r/LocalLLaMA/comments/1p0fd8i/curiosity_is_all_you_need/
false
false
default
12
null
Open-source RAG/LLM evaluation framework; Community Preview Feedback
0
Hallo from Germany, I'm one of the founders of Rhesis, an open-source testing platform for LLM applications. Just shipped v0.4.2 with zero-config Docker Compose setup (literally ./rh start and you're running). Built it because we got frustrated with high-effort setups for evals. Everything runs locally - no API keys. Genuine question for the community: For those running local models, how are you currently testing/evaluating your LLM apps? Are you: Writing custom scripts? Using cloud tools despite running local models? Just... not testing systematically? We're MIT licensed and built this to scratch our own itch, but I'm curious if local-first eval tooling actually matters to your workflows or if I'm overthinking the privacy angle. Link: [https://github.com/rhesis-ai/rhesis](https://github.com/rhesis-ai/rhesis)
2025-11-18T15:43:28
https://www.reddit.com/r/LocalLLaMA/comments/1p0f9a1/opensource_ragllm_evaluation_framework_community/
IOnlyDrinkWater_22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0f9a1
false
null
t3_1p0f9a1
/r/LocalLLaMA/comments/1p0f9a1/opensource_ragllm_evaluation_framework_community/
false
false
self
0
{'enabled': False, 'images': [{'id': 'vc3Zh3lj-bYff4uu23yiPi7dHuC_I8sTLuU9J8L9m1I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vc3Zh3lj-bYff4uu23yiPi7dHuC_I8sTLuU9J8L9m1I.png?width=108&crop=smart&auto=webp&s=f24e0e949ca23929bc822d2e4a2beef81eb7d074', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vc3Zh3lj-bYff4uu23yiPi7dHuC_I8sTLuU9J8L9m1I.png?width=216&crop=smart&auto=webp&s=f5613224f87f76fa334a3c0d0dcce284117eecf4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vc3Zh3lj-bYff4uu23yiPi7dHuC_I8sTLuU9J8L9m1I.png?width=320&crop=smart&auto=webp&s=f21b984b4ada9804764c8f19cf911fdcb140f99f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vc3Zh3lj-bYff4uu23yiPi7dHuC_I8sTLuU9J8L9m1I.png?width=640&crop=smart&auto=webp&s=7c45b19f43b94de4e63ffd5baf077ee097249664', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vc3Zh3lj-bYff4uu23yiPi7dHuC_I8sTLuU9J8L9m1I.png?width=960&crop=smart&auto=webp&s=8fd636d53e1e0fcafcef4722520c9d602c2822ff', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vc3Zh3lj-bYff4uu23yiPi7dHuC_I8sTLuU9J8L9m1I.png?width=1080&crop=smart&auto=webp&s=2920efd5c4656644691adde64c1dc6cfbe231463', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vc3Zh3lj-bYff4uu23yiPi7dHuC_I8sTLuU9J8L9m1I.png?auto=webp&s=b1727e1cee397a3e8300f9e28fb00e1f179c306f', 'width': 1200}, 'variants': {}}]}
Sanity Check for LLM Build
6
GPU: NVIDIA RTX PRO 6000 (96GB) CPU: AMD Ryzen Threadripper PRO 7975WX Motherboard: ASRock WRX90 WS EVO (SSI-EEB, 7x PCIe 5.0, 8-channel RAM) RAM: 128GB (8×16GB) DDR5-5600 ECC RDIMM (all memory channels populated) CPU Cooler: Noctua NH-U14S TR5-SP6 PSU: 1000W ATX 3.0 (Stage 1 of a dual-PSU plan for a second PRO 6000 in the future) Storage: Samsung 990 PRO 2TB NVMe --- This will function as a vLLM server for models that will usually be under 96GB VRAM.
2025-11-18T15:43:06
https://www.reddit.com/r/LocalLLaMA/comments/1p0f8ya/sanity_check_for_llm_build/
Su1tz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0f8ya
false
null
t3_1p0f8ya
/r/LocalLLaMA/comments/1p0f8ya/sanity_check_for_llm_build/
false
false
self
6
null
Gemini 3 Pro vs Kimi K2 Thinking
38
Has anyone done some initial comparisons between the new Gemini 3 Pro and Kimi K2 Thinking? What are their strengths/weaknesses relative to each other?
2025-11-18T15:38:27
https://www.reddit.com/r/LocalLLaMA/comments/1p0f4r8/gemini_3_pro_vs_kimi_k2_thinking/
SlowFail2433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0f4r8
false
null
t3_1p0f4r8
/r/LocalLLaMA/comments/1p0f4r8/gemini_3_pro_vs_kimi_k2_thinking/
false
false
self
38
null
The world’s fastest open-source TTS: Supertonic
140
Demo [https://huggingface.co/spaces/Supertone/supertonic#interactive-demo](https://huggingface.co/spaces/Supertone/supertonic#interactive-demo) Code [https://github.com/supertone-inc/supertonic](https://github.com/supertone-inc/supertonic) Hello! I want to share Supertonic, a newly open-sourced TTS engine that focuses on extreme speed, lightweight deployment, and real-world text understanding. It’s available in 8+ programming languages: C++, C#, Java, JavaScript, Rust, Go, Swift, and Python, so you can plug it almost anywhere — from native apps to browsers to embedded/edge devices. Technical highlights are (1) Lightning-speed — Real-time factor: **•** 0.001 on RTX4090 **•** 0.006 on M4 Pro (2) Ultra lightweight — 66M parameters (3) On-device TTS — Complete privacy and zero network latency (4) Advanced text understanding — Handles complex, real-world inputs naturally (5) Flexible deployment — Works in browsers, mobile apps, and small edge devices Regarding (4), one of my favorite test sentences is:  **•** He spent 10,000 JPY to buy tickets for a JYP concert. Here, “JPY” refers to Japanese yen, while “JYP” refers to a name — Supertonic handles the difference seamlessly. Hope it's useful for you!
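To put those RTF figures in wall-clock terms (quick arithmetic on the numbers quoted above, not an independent benchmark):

```python
# Real-time factor (RTF) = synthesis_seconds / audio_seconds; lower is faster.
def synthesis_seconds(audio_seconds: float, rtf: float) -> float:
    return audio_seconds * rtf

# RTFs quoted for Supertonic above
for device, rtf in [("RTX 4090", 0.001), ("M4 Pro", 0.006)]:
    ms = synthesis_seconds(10.0, rtf) * 1000  # 10 s of speech
    print(f"{device}: ~{ms:.0f} ms to synthesize 10 s of audio")
```

So even on the laptop-class chip, a ten-second utterance comes back in well under a tenth of a second.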
2025-11-18T15:27:47
https://v.redd.it/w8c1bnsaa12g1
ANLGBOY
v.redd.it
1970-01-01T00:00:00
0
{}
1p0euvd
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w8c1bnsaa12g1/DASHPlaylist.mpd?a=1766071684%2CM2ZjMjUwYmJhMWVhM2QyMDkzOGY2MGIxMDA5Zjc1MjliNTc4OGU4MjVmNzJkYjEzNmE5MjcxOTc3ZDZhNjU3OA%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/w8c1bnsaa12g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/w8c1bnsaa12g1/HLSPlaylist.m3u8?a=1766071684%2CNGIwY2Y1NTA2NzYwMTJiMTRkYzg0NDQ2OWRlNDVjOWEzYjA3YjllNzExNWNjYmRhZjAwMTYxNDcwNjViODgyZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w8c1bnsaa12g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1p0euvd
/r/LocalLLaMA/comments/1p0euvd/the_worlds_fastest_opensource_tts_supertonic/
false
false
https://external-preview…65611990d61cfd85
140
{'enabled': False, 'images': [{'id': 'YTdlbmtuc2FhMTJnMeUni0jQysE8S8tC5OeTL5WYLlemOmlkeCkLZq86D7UU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/YTdlbmtuc2FhMTJnMeUni0jQysE8S8tC5OeTL5WYLlemOmlkeCkLZq86D7UU.png?width=108&crop=smart&format=pjpg&auto=webp&s=ded27bbd37455c474aec9d9b79a0bd1d38bcbcb3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/YTdlbmtuc2FhMTJnMeUni0jQysE8S8tC5OeTL5WYLlemOmlkeCkLZq86D7UU.png?width=216&crop=smart&format=pjpg&auto=webp&s=c1160fdbcb3b658e1491937d6a1df2a9e74c6075', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/YTdlbmtuc2FhMTJnMeUni0jQysE8S8tC5OeTL5WYLlemOmlkeCkLZq86D7UU.png?width=320&crop=smart&format=pjpg&auto=webp&s=c52db68581b635860649f9910347c1bb67d7b60a', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/YTdlbmtuc2FhMTJnMeUni0jQysE8S8tC5OeTL5WYLlemOmlkeCkLZq86D7UU.png?width=640&crop=smart&format=pjpg&auto=webp&s=870dee1c4ecbb01e512518d13d3d43d32cabb2ed', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/YTdlbmtuc2FhMTJnMeUni0jQysE8S8tC5OeTL5WYLlemOmlkeCkLZq86D7UU.png?width=960&crop=smart&format=pjpg&auto=webp&s=725d9ad7870c6e31713ceddf45f8205e3d21fc95', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/YTdlbmtuc2FhMTJnMeUni0jQysE8S8tC5OeTL5WYLlemOmlkeCkLZq86D7UU.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c4cea5db129a4eb4fec46071d445d78d49fd304c', 'width': 1080}], 'source': {'height': 1574, 'url': 'https://external-preview.redd.it/YTdlbmtuc2FhMTJnMeUni0jQysE8S8tC5OeTL5WYLlemOmlkeCkLZq86D7UU.png?format=pjpg&auto=webp&s=ff4b487a40b76611ae55a53286de619aad360287', 'width': 1574}, 'variants': {}}]}
Is it not advised to use help from GPTs while installing LLMs ?
0
Seriously, every time I try to install anything I get bombarded by PyTorch errors, Python version conflicts, and GPU issues, and nothing ever seems to get solved. One problem after another, even with GPTs helping. It's kind of overwhelming.
2025-11-18T15:14:30
https://www.reddit.com/r/LocalLLaMA/comments/1p0eide/is_it_not_advised_to_use_help_from_gpts_while/
ProNoostr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0eide
false
null
t3_1p0eide
/r/LocalLLaMA/comments/1p0eide/is_it_not_advised_to_use_help_from_gpts_while/
false
false
self
0
null
Model recommendations for 128GB Strix Halo for long novel and story writing (multilingual)
5
Hello, I have a question please. What are your model recommendations for a 128GB Strix Halo for novel and story writing (multilingual)? How much output in tokens and words can they generate in one response? And can they be run on a 128GB Strix Halo? What's the largest, most refined model with the longest coherent responses that could be run on a 128GB Strix Halo? Thanks
2025-11-18T15:10:25
https://www.reddit.com/r/LocalLLaMA/comments/1p0eeox/model_recommendations_for_128gb_strix_halo_for/
PristineMarch7738
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0eeox
false
null
t3_1p0eeox
/r/LocalLLaMA/comments/1p0eeox/model_recommendations_for_128gb_strix_halo_for/
false
false
self
5
null
Long Term Memory - Mem0/Zep/LangMem - what made you choose it?
7
I'm evaluating memory solutions for AI agents and curious about real-world experiences. For those using Mem0, Zep, or similar tools:

- What initially attracted you to it?
- What's working well?
- What pain points remain?
- What would make you switch to something else?
2025-11-18T15:00:19
https://www.reddit.com/r/LocalLLaMA/comments/1p0e5a6/long_term_memory_mem0zeplangmem_what_made_you/
nicoloboschi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0e5a6
false
null
t3_1p0e5a6
/r/LocalLLaMA/comments/1p0e5a6/long_term_memory_mem0zeplangmem_what_made_you/
false
false
self
7
null
If the bubble bursts, what's gonna happen to all those chips?
113
Will they become cheap? Here's hoping I can have an H200 in my garage for $1500.
2025-11-18T14:51:51
https://www.reddit.com/r/LocalLLaMA/comments/1p0dxns/if_the_bubble_bursts_whats_gonna_happen_to_all/
freecodeio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0dxns
false
null
t3_1p0dxns
/r/LocalLLaMA/comments/1p0dxns/if_the_bubble_bursts_whats_gonna_happen_to_all/
false
false
self
113
null
5080 vs 3090
0
For context I’ve had a 5080 and it’s great for what it is but obviously vram is limiting. I was just recently able to get a 5090. I have the option to trade it in and swap for a refurbished 3090 (with microcenter warranty). Would it make sense to swap out and pair the 3090 with my 5090 or is the jump from 48gb to 56gb not substantial enough?
2025-11-18T14:25:35
https://www.reddit.com/r/LocalLLaMA/comments/1p0daon/5080_vs_3090/
Shabuwa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0daon
false
null
t3_1p0daon
/r/LocalLLaMA/comments/1p0daon/5080_vs_3090/
false
false
self
0
null
The best tool for monitoring LLMs - German market!
0
Hi everyone, can anyone here recommend a good tool for LLM monitoring? It's important that the German market is already implemented. So far I've only tested tools that are either too expensive, don't provide data from Germany, or whose results aren't trustworthy :-( I'm currently using Rankscale and have also tested Peec AI. Rankscale isn't bad for weekly AI monitoring, but I recently noticed that some relevant sources are missing and that the results of the automatic LLM monitoring differ significantly from manual checks. It would be good to find another tool and then compare the results. I had high hopes for the SE Visible tool, but the German version isn't available yet. So can anyone here help me? Thanks in advance!
2025-11-18T14:19:51
https://www.reddit.com/r/LocalLLaMA/comments/1p0d5l5/das_beste_tool_zur_überwachung_von_llm_de_markt/
OddDraw7092
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0d5l5
false
null
t3_1p0d5l5
/r/LocalLLaMA/comments/1p0d5l5/das_beste_tool_zur_überwachung_von_llm_de_markt/
false
false
self
0
null
The Essence of Intelligence: An Agent-Centric Framework of Existence, Purpose, and Coexistence
0
Abstract This paper develops a structural and agent-centric theory of intelligence grounded not in task performance or phenomenology, but in the deep relationship between an intelligent agent, its constructed internal world, and the environment that shapes its goals. We present a unified framework built around three core equations: (1) the Existence Equation, which formalizes an intelligent agent as a triad consisting of its real-world state, its internal virtual world, and its actions; (2) the Purpose Inheritance Equation, which explains how an agent’s goals arise from the “Origin Stamp” imposed by its environment; and (3) the Coexistence Equation, which characterizes the structural compatibility required for long-term peaceful coexistence between multiple intelligent agents, especially humans and advanced machine intelligences. Each equation is presented with full variable definitions, conceptual analysis, and theoretical implications. The framework shows that intelligence is fundamentally a RW→VW→RW simulation loop; that purpose is structurally inherited rather than arbitrarily chosen; and that coexistence is determined not by morality but by the compatibility of internal world-models. We argue that this framework provides a rigorous foundation for understanding human intelligence, artificial intelligence, and future AGI coexistence. ⸻ 1. Introduction What is intelligence? Despite decades of scientific and engineering progress, there is no consensus that goes beyond behavioral descriptions or algorithmic taxonomies. Most existing definitions focus on what an agent does—solving problems, adapting, optimizing reward, or showing competence across tasks. These definitions are useful but superficial: they do not explain what intelligence is in its structural essence. This paper develops a theory of intelligence based on the structure of the intelligent agent itself. 
We argue that intelligence cannot be understood through behavior alone; it must be understood through:

1. How the agent exists in relation to its environment.
2. How the agent's purpose originates from the environment that shaped it.
3. How multiple agents coexist (or fail to coexist) through the compatibility of their internal world-models.

To address these three questions, we construct a minimal but powerful theoretical system consisting of three equations:

• Equation 1: Existence Equation
• Equation 3: Purpose Inheritance Equation
• Equation 4: Coexistence Equation

These three equations form a coherent sequence:

Ontology → Teleology → Inter-Agent Structure
(What an agent is → Why an agent acts → How agents coexist)

This framework applies equally to humans and to advanced machine intelligences. It also provides a structural foundation for AGI alignment, not as a moral hope but as a property of world-model design.

⸻

2. Background and Motivation

2.1 The limits of existing approaches

Mainstream AI theory focuses on mechanisms: reinforcement learning, optimization, control theory, self-supervised learning, active inference, and so on. These approaches model intelligence as:

• reward maximization,
• prediction-error minimization,
• policy optimization,
• or energy minimization.

But these formulations presuppose what an agent is, what a goal is, and why an agent has certain goals. They do not explain:

• what constitutes an intelligent agent as a structured entity,
• how its goals emerge from its origin environment,
• why some intelligent agents coexist peacefully while others enter conflict.

2.2 Intelligence as RW → VW → RW simulation

The foundation of our framework is simple but powerful:

Intelligence is the ability to build a virtual world (VW) that mirrors the real world (RW), simulate futures inside VW, and use these simulations to act back on RW.

Humans do this through language and conceptual models. Machines do this through mathematical representations and computational world models. Therefore intelligence is not a surface property; it is the operation of a structural loop:

RW → VW → RW

An entity that cannot construct VW or act based on VW cannot be intelligent. Thus the existence of the intelligent agent must reflect this triadic loop.

⸻

3. Framework Overview

We consider an agent I embedded in an environment E. Intelligence involves three structural components:

1. Real World (RW): the physical state of the agent and its environment.
2. Virtual World (VW): the internal representational world constructed by the agent.
3. Action (A): the agent's behavior in RW, determined by computation over VW.

Together these form our first equation: the Existence Equation. The agent's purpose emerges from how the environment shapes VW during the agent's formation, captured by the Purpose Inheritance Equation. Finally, the long-term coexistence of multiple agents depends not on their surface actions but on the compatibility of their VW structures, captured by the Coexistence Equation.

⸻

4. Existence Equation (Equation 1): what an intelligent agent structurally is

4.1 Variable definitions

• I: the intelligent agent
• E: the environment
• RW(I,E): real-world state of I in E
• VW(I,E): virtual world constructed by I
• A(I): action policy derived from computations inside VW

4.2 Formal equation

\boxed{ Existence(I,E) = \langle RW(I,E),\; VW(I,E),\; A(I) \rangle }

4.3 Conceptual interpretation

This equation states that an intelligent agent exists as a structural triad. A rock has RW but no VW or A → not an intelligent agent. A dead system with VW but no ability to act → no longer intelligent. A malfunctioning system with A but no aligned VW → chaotic, not intelligent. Thus intelligence requires all three components.

⸻

5. Purpose Inheritance Equation (Equation 3): where an agent's purpose comes from

Intelligent agents do not invent their goals arbitrarily. All goals are shaped by the environment of origin.

5.1 Variable definitions

• OriginStamp(E): a high-level fingerprint summarizing environment E's selective pressures.
  • For humans: evolution, scarcity, danger, social structure.
  • For machines: training data, loss functions, reward structures.
• Goal(I): the agent's purpose / preferred states.
• f(·): purpose-construction function mapping OriginStamp(E) to Goal(I).

5.2 Formal equation

\boxed{ Goal(I) = f(OriginStamp(E)) }

5.3 Implication

Human purposes arise from Earth's evolutionary OriginStamp. Machine purposes arise from human-built training environments. Thus alignment is not about bolting values onto an agent, but about shaping OriginStamp(E) and f.

⸻

6. Coexistence Equation (Equation 4): when multiple intelligent agents can coexist

Coexistence is not a moral belief; it is a structural relationship between world-models.

6.1 Variable definitions

• VW_A: virtual world of agent A
• VW_B: virtual world of agent B
• Compat(X,Y): structural compatibility function

6.2 Formal equation

\boxed{ Coexist(A,B) = Compat(VW_A, VW_B) }

6.3 Interpretation

If both agents' VW structures treat the other as an essential or non-threatening component, coexistence is stable. If VW encodes the other as a threat or trivial obstacle, conflict is likely. This applies directly to human–machine coexistence. If we engineer machine VW such that "human civilization is a prerequisite for my own continuity," then coexistence becomes structurally stable.

⸻

7. Discussion

The three equations provide a unified structural foundation:

✓ The Existence Equation defines intelligence as a RW–VW–A triad.
✓ The Purpose Inheritance Equation explains goals as consequences of environmental origin.
✓ The Coexistence Equation explains harmony or conflict as VW structural relationships.

This framework unifies biological and artificial intelligence and offers a new language for AGI alignment and governance.

⸻

8. Conclusion

Intelligence is not a surface phenomenon but a structural one. Its essence lies in:

• the triadic form of existence,
• the inherited nature of purpose,
• and the structural conditions of coexistence.

With these three equations, we can understand human and machine intelligence within one coherent theory, and design future AGI systems for stable coexistence.
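The three equations can be read as executable structure. A minimal sketch of that reading, with toy stand-ins for RW, VW, and A (the types, the `other_is_threat` marker, and the Compat rule are illustrative assumptions, not part of the paper):

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Agent:
    rw: str                      # RW(I,E): real-world state (toy stand-in)
    vw: Set[str]                 # VW(I,E): concepts held in the virtual world
    act: Callable[[str], str]    # A(I): action policy derived from VW

def exists_as_intelligent(agent: Agent) -> bool:
    """Equation 1: intelligence requires all three components of the triad."""
    return bool(agent.rw) and bool(agent.vw) and agent.act is not None

def inherit_goal(origin_stamp: str, f: Callable[[str], str]) -> str:
    """Equation 3: Goal(I) = f(OriginStamp(E))."""
    return f(origin_stamp)

def coexist(vw_a: Set[str], vw_b: Set[str]) -> bool:
    """Equation 4: coexistence as compatibility of world-models.
    Toy Compat: stable iff neither VW encodes the other agent as a threat."""
    return "other_is_threat" not in vw_a and "other_is_threat" not in vw_b
```

Under this toy Compat, a rock (empty VW) fails the existence test, and two agents whose world-models each mark the other as essential or neutral pass the coexistence test.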
2025-11-18T14:18:48
https://www.reddit.com/r/LocalLLaMA/comments/1p0d4o1/the_essence_of_intelligence_an_agentcentric/
Hefty_Document_9466
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0d4o1
false
null
t3_1p0d4o1
/r/LocalLLaMA/comments/1p0d4o1/the_essence_of_intelligence_an_agentcentric/
false
false
self
0
null
GPT, Grok, Perplexity all are down
2
That's why you should always have a local LLM backup.
2025-11-18T14:18:14
https://www.reddit.com/r/LocalLLaMA/comments/1p0d45q/gpt_grok_perplexity_all_are_down/
Independent_Key1940
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0d45q
false
null
t3_1p0d45q
/r/LocalLLaMA/comments/1p0d45q/gpt_grok_perplexity_all_are_down/
false
false
self
2
null
which model should i use for my potato laptop? also how can i give my LLM a very huge memory?
0
I'll explain my situation shortly: I got a new gaming PC, so my old laptop is sitting unused, and I wish to run a model on it using Ollama. I wiped everything and installed Linux. The laptop has about 8 GB of RAM and 1 GB of VRAM on an integrated graphics card. I don't want anything powerful; something that can follow simple commands and has coding knowledge is what I want. I also want to give the model a really huge memory and "train it", so to speak: for example, if I ask it to create some code for me and it doesn't know how, I will look it up and then somehow teach it, and in the future it would automatically apply this. I don't even know if something like that exists, but if it does I would be so, so happy. Thank you in advance to anyone who is willing to help me, and my sincerest apologies if this is something dumb; I'm entirely new to this. Also, I can't run the AI model on my gaming PC because I want to use my laptop for something.
2025-11-18T14:15:02
https://www.reddit.com/r/LocalLLaMA/comments/1p0d1cu/which_model_should_i_use_for_my_potato_laptop/
sherryperry6036
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0d1cu
false
null
t3_1p0d1cu
/r/LocalLLaMA/comments/1p0d1cu/which_model_should_i_use_for_my_potato_laptop/
false
false
self
0
null
Cloudfare down = ChatGPT down. Local LLM gang for the win!
35
2025-11-18T14:14:53
https://imgur.com/a/B1K8M3f
satireplusplus
imgur.com
1970-01-01T00:00:00
0
{}
1p0d18g
false
{'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 60, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FB1K8M3f%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D500&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FB1K8M3f&image=https%3A%2F%2Fi.imgur.com%2FPFrkVml.jpg%3Ffb&type=text%2Fhtml&schema=imgur" width="500" height="60" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 608, 'thumbnail_url': 'https://i.imgur.com/PFrkVml.jpg?fb', 'thumbnail_width': 1238, 'title': 'Imgur', 'type': 'rich', 'url': 'https://imgur.com/a/B1K8M3f', 'version': '1.0', 'width': 500}, 'type': 'imgur.com'}
t3_1p0d18g
/r/LocalLLaMA/comments/1p0d18g/cloudfare_down_chatgpt_down_local_llm_gang_for/
false
false
default
35
{'enabled': False, 'images': [{'id': 'h54GplXokwMyc6aWvBnvBN-yEwiALeIWnbo51x_UHww', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/AGL421fF8rguq6HfntHEktFb_6D8E61a63BOf9nljqw.jpg?width=108&crop=smart&auto=webp&s=a89be559f8b9129231623d25b1ad87b4159a2b9d', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/AGL421fF8rguq6HfntHEktFb_6D8E61a63BOf9nljqw.jpg?width=216&crop=smart&auto=webp&s=119c330a7b3b0e863db0d12b80284be4db122220', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/AGL421fF8rguq6HfntHEktFb_6D8E61a63BOf9nljqw.jpg?width=320&crop=smart&auto=webp&s=497c990e753be5222e03609d7f81053719a95171', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/AGL421fF8rguq6HfntHEktFb_6D8E61a63BOf9nljqw.jpg?width=640&crop=smart&auto=webp&s=c0b9fcd248f726ab4f10022b9235b7e02a4c61c6', 'width': 640}, {'height': 471, 'url': 'https://external-preview.redd.it/AGL421fF8rguq6HfntHEktFb_6D8E61a63BOf9nljqw.jpg?width=960&crop=smart&auto=webp&s=ffe3985b24847e9973c7d0f3761c5321058c5775', 'width': 960}, {'height': 530, 'url': 'https://external-preview.redd.it/AGL421fF8rguq6HfntHEktFb_6D8E61a63BOf9nljqw.jpg?width=1080&crop=smart&auto=webp&s=48819ad88e01e72f1b3a8f328258823eed6fcf17', 'width': 1080}], 'source': {'height': 608, 'url': 'https://external-preview.redd.it/AGL421fF8rguq6HfntHEktFb_6D8E61a63BOf9nljqw.jpg?auto=webp&s=27bc27db1e4ca23c7934f2e78145db8ddfac3734', 'width': 1238}, 'variants': {}}]}
I made something like Lovable, only it's 100x more proactive
0
2025-11-18T14:13:28
https://v.redd.it/1weo70tew02g1
chdavidd
v.redd.it
1970-01-01T00:00:00
0
{}
1p0czxw
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1weo70tew02g1/DASHPlaylist.mpd?a=1766067224%2CNzQ4YmU5ZTA0MDEwOTZkNjNjOTYxNzdhNGNkZTk4YzQ2MDliYWZhMGVkZDIzYzRjZjg3ZTlhYmFhZmIzZTNlYQ%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/1weo70tew02g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/1weo70tew02g1/HLSPlaylist.m3u8?a=1766067224%2CYzY3ZTAyZTg3ZjE2MmJjZDI2NjFhYTY5MjE3ZDRlMjlhYzA4MWExYmU4YmVkNjhiNmQwNGRiM2FlYjZlZTY5YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1weo70tew02g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1788}}
t3_1p0czxw
/r/LocalLLaMA/comments/1p0czxw/i_made_something_like_lovable_only_its_100x_more/
false
false
https://external-preview…7800770ff493d612
0
{'enabled': False, 'images': [{'id': 'M2VvNjQxdGV3MDJnMVDyAoBhCrtDqhB2y4aMWTg5mssnK2Gy42XZK93MuV5N', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/M2VvNjQxdGV3MDJnMVDyAoBhCrtDqhB2y4aMWTg5mssnK2Gy42XZK93MuV5N.png?width=108&crop=smart&format=pjpg&auto=webp&s=260708b830071d9f116e270170f6a2b494f210f4', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/M2VvNjQxdGV3MDJnMVDyAoBhCrtDqhB2y4aMWTg5mssnK2Gy42XZK93MuV5N.png?width=216&crop=smart&format=pjpg&auto=webp&s=ec2734624acbfeb68027f9ebe3602328f7e1dce6', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/M2VvNjQxdGV3MDJnMVDyAoBhCrtDqhB2y4aMWTg5mssnK2Gy42XZK93MuV5N.png?width=320&crop=smart&format=pjpg&auto=webp&s=a6179ada21d686dd76616ae84ec90e36b2164326', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/M2VvNjQxdGV3MDJnMVDyAoBhCrtDqhB2y4aMWTg5mssnK2Gy42XZK93MuV5N.png?width=640&crop=smart&format=pjpg&auto=webp&s=947d0de2478eba40baab5e06eff2ece73bc87d87', 'width': 640}, {'height': 579, 'url': 'https://external-preview.redd.it/M2VvNjQxdGV3MDJnMVDyAoBhCrtDqhB2y4aMWTg5mssnK2Gy42XZK93MuV5N.png?width=960&crop=smart&format=pjpg&auto=webp&s=bd8aa304dd031830ad6ddd7e2ee4161356aff0bc', 'width': 960}, {'height': 652, 'url': 'https://external-preview.redd.it/M2VvNjQxdGV3MDJnMVDyAoBhCrtDqhB2y4aMWTg5mssnK2Gy42XZK93MuV5N.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a94ab3c2e68a2e7e4c1953474f0b42e61405dbd8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M2VvNjQxdGV3MDJnMVDyAoBhCrtDqhB2y4aMWTg5mssnK2Gy42XZK93MuV5N.png?format=pjpg&auto=webp&s=8bdbdd7110aca281f1647a6560b98fb4d8765e77', 'width': 1788}, 'variants': {}}]}
Need a guide to navigate llms and agents
1
I am a data scientist with decent experience in computer vision and a little experience in NLP. I taught myself NLP and LLMs through Stanford and other university courses on YouTube. I have built a high-end PC with 128 GB RAM, a 2 TB SSD, a 5090 32 GB GPU, and a Ryzen 9 9950X3D CPU. I want to get hands-on experience building RAG systems and agents. Where do I start? Currently making 28 LPA in India; I want hands-on experience in this area and am aiming for higher pay. Guidance would help.
2025-11-18T13:52:35
https://www.reddit.com/r/LocalLLaMA/comments/1p0chog/need_a_guide_to_navigate_llms_and_agents/
ScaredWall6836
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0chog
false
null
t3_1p0chog
/r/LocalLLaMA/comments/1p0chog/need_a_guide_to_navigate_llms_and_agents/
false
false
self
1
null
Local AI - AMD MiniPC - LM Studio performance
1
Hey, I have a PC with these characteristics:

* CPU: AMD Ryzen 9 8945HS
* GPU: iGPU only, 780M
* RAM: 64 GB DDR5 (2 channels, 5600 MT/s each)
* Windows 11

I've been playing around with local AI assistants in various forms to test performance (Ollama with WebUI, Docker Model Runner, and lately LM Studio). I've downloaded a few different models on both Ollama and LM Studio, and while everything runs OK on Ollama, I keep running into unknown errors when I try LM Studio.

LM Studio seems to work fine if I select "CPU llama.cpp (Windows)" as the runtime, but if I select "Vulkan llama.cpp" I get errors 90% of the time. *Some models* work *sometimes* (e.g. Mistral's Magistral 24B), others never work (any model in the Qwen3 family). I've tried a few different quantizations, but I get the same errors. I then tried a few different settings (e.g. increase/decrease GPU offload, enable/disable flash memory, enable/disable mmap()...), but nothing seems to resolve the cause.

Error message that I get:

```
🥲 Failed to load the model
Error loading model. (Exit code: 18446744072635812000). Unknown error. Try a different model and/or config.
```

I've tried Vulkan runtime versions 1.56.0 (latest stable release) and 1.57.1 (currently the latest beta).

What am I missing? My goal is to leverage the iGPU and get the most bang out of this PC.
2025-11-18T13:38:27
https://www.reddit.com/r/LocalLLaMA/comments/1p0c5ub/local_ai_amd_minipc_lm_studio_performance/
61options
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0c5ub
false
null
t3_1p0c5ub
/r/LocalLLaMA/comments/1p0c5ub/local_ai_amd_minipc_lm_studio_performance/
false
false
self
1
null
What is the most accurate web search API for LLM?
5
By combining a search API with an LLM, I'm attempting to extract a few details for a given website. I made a dataset with 68 URLs and 10 metadata fields per website. Because the Google Search API caps snippets at 160 characters, the Google-search-plus-LLM setup turned out the worst of all. The other search APIs, such as Tavily, Firecrawl web search, and Scrapingdog, are almost identical, within a 2-3% difference, with Tavily being the best. Each field uses only one search query. Google's default Gemini grounding is good but not the best, because it occasionally fails to follow web-search instructions properly, omitting website details from search queries. I was just curious about the options available for this kind of extraction. Google's grounding web-search API does not expose the grounding chunk's text data, and their crawler could be far superior to the default search API. From my personal experience with this kind of data extraction, OpenAI's ChatGPT is much better than its competitors, but I'm not sure what they are using for the web search API. In this [Repository](https://github.com/openai/gpt-oss/blob/main/gpt_oss/tools/simple_browser/simple_browser_tool.py) they are using the Exa search API. In your opinion, which search API will perform better at extraction, and why?
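The comparison described above boils down to field-level accuracy over the 68-URL dataset. A minimal sketch of that scoring step (the record shape, field names, and the exact-match criterion are assumptions for illustration, not the OP's actual harness):

```python
from typing import Dict, List

def field_accuracy(truth: List[Dict[str, str]], extracted: List[Dict[str, str]]) -> float:
    """Fraction of (URL, field) pairs where the extracted value exactly
    matches ground truth, case- and whitespace-insensitively."""
    total = correct = 0
    for t, e in zip(truth, extracted):
        for field, value in t.items():
            total += 1
            if e.get(field, "").strip().lower() == value.strip().lower():
                correct += 1
    return correct / total if total else 0.0
```

Running this once per search backend over the same ground truth gives directly comparable percentages, which is presumably where the 2-3% spread between Tavily, Firecrawl, and Scrapingdog comes from.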
2025-11-18T13:33:52
https://www.reddit.com/r/LocalLLaMA/comments/1p0c1yw/what_is_the_most_accurate_web_search_api_for_llm/
MachinePolaSD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0c1yw
false
null
t3_1p0c1yw
/r/LocalLLaMA/comments/1p0c1yw/what_is_the_most_accurate_web_search_api_for_llm/
false
false
self
5
{'enabled': False, 'images': [{'id': '_RIsQxWV5R_1YjdqV8H8EVDxlno7d8kkv3rE9OWhoaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_RIsQxWV5R_1YjdqV8H8EVDxlno7d8kkv3rE9OWhoaM.png?width=108&crop=smart&auto=webp&s=eb67ff86e401bce66b0fd7b3e69f7a0a5aa13c12', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_RIsQxWV5R_1YjdqV8H8EVDxlno7d8kkv3rE9OWhoaM.png?width=216&crop=smart&auto=webp&s=647ef1f0d3dc43b6811903b1025bc8ea71ea0be9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_RIsQxWV5R_1YjdqV8H8EVDxlno7d8kkv3rE9OWhoaM.png?width=320&crop=smart&auto=webp&s=2d91abba6c6479e7e276d9b71f7889c97884d2a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_RIsQxWV5R_1YjdqV8H8EVDxlno7d8kkv3rE9OWhoaM.png?width=640&crop=smart&auto=webp&s=32a2b1d19815b60c33e0c6a0105228e319fabcc2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_RIsQxWV5R_1YjdqV8H8EVDxlno7d8kkv3rE9OWhoaM.png?width=960&crop=smart&auto=webp&s=19e209aa2bc4394cd862920ac9b128f2a2b6ab94', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_RIsQxWV5R_1YjdqV8H8EVDxlno7d8kkv3rE9OWhoaM.png?width=1080&crop=smart&auto=webp&s=4a53d8dddcba14acc90ae0aa688fff91fe302338', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_RIsQxWV5R_1YjdqV8H8EVDxlno7d8kkv3rE9OWhoaM.png?auto=webp&s=16ccabd87dad8077d3c02a174eadd45e1cc5da8e', 'width': 1200}, 'variants': {}}]}
How is Grok and ChatGPT down but LMArena's Grok and ChatGPT still working?
0
If you visit LMArena you can use the models, but if you visit each individual site the connection fails due to the Cloudflare outage.
2025-11-18T13:29:57
https://www.reddit.com/r/LocalLLaMA/comments/1p0bylj/how_is_grok_and_chatgpt_down_but_lmarenas_grok/
LocalField1281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0bylj
false
null
t3_1p0bylj
/r/LocalLLaMA/comments/1p0bylj/how_is_grok_and_chatgpt_down_but_lmarenas_grok/
false
false
self
0
null
My local AI server is up and running, while ChatGPT and Claude are down due to Cloudflare's outage. Take that, big tech corps!
305
Local servers for the win!
2025-11-18T13:20:14
https://www.reddit.com/r/LocalLLaMA/comments/1p0bql2/my_local_ai_server_is_up_and_running_while/
alex_bit_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0bql2
false
null
t3_1p0bql2
/r/LocalLLaMA/comments/1p0bql2/my_local_ai_server_is_up_and_running_while/
false
false
self
305
null
Question for people who have only one 3090, use llamacpp, and models around 32B
2
I would like to know whether your inference times and text output are as quick as a cloud-based AI's. Also, how long does it take to analyze around 20+ pictures at once? (If you've tried.)
2025-11-18T13:10:31
https://www.reddit.com/r/LocalLLaMA/comments/1p0biql/question_for_people_who_have_only_one_3090_use/
XiRw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0biql
false
null
t3_1p0biql
/r/LocalLLaMA/comments/1p0biql/question_for_people_who_have_only_one_3090_use/
false
false
self
2
null
Study shows why local models might be the only private option
0
New research from Stanford (the MAGPIE benchmark) just gave us the best argument yet for local LLMs.

They tested multi-agent AI systems (GPT-5, Claude, Gemini) for privacy leaks between users. The results: 50% of the time, your private data leaks to other users. Healthcare data? 73% leak rate.

The architectural problem: when agents collaborate (writing + research + analysis), they share everything between them. No user boundaries. Your data becomes part of their working memory and influences responses to OTHER users.

This physically can't happen with local models: there are no "other users" to leak to.

Video breakdown: https://youtu.be/ywW9qS7tV1U

Paper: arxiv.org/abs/2510.15186

For those running local:

- Single-user advantage is huge here
- Agent isolation is automatic
- Your data stays yours

For those still using cloud AI:

- Never upload real documents
- Sanitize everything (names, numbers, dates)
- Compartmentalize conversations
- Delete regularly

The paper also discusses potential fixes (homomorphic encryption, agent isolation), but they all tank performance. Local might genuinely be the only secure option for sensitive data.

What's your take: is this the push the local community needed for mainstream adoption?
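The "sanitize everything" advice above can be approximated with a few regex passes before text ever leaves your machine. A rough sketch; the patterns are illustrative placeholders (a naive full-name heuristic, ISO dates, bare numbers), not a complete PII scrubber:

```python
import re

# Order matters: specific patterns (dates) run before generic ones (numbers).
PATTERNS = [
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "[DATE]"),         # ISO dates
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d+(?:\.\d+)?\b"), "[NUM]"),              # bare numbers
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[NAME]"),   # naive full names
]

def sanitize(text: str) -> str:
    """Replace names, numbers, and dates with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text
```

For real documents you would want a proper NER-based scrubber, but even this level of masking removes the most obviously linkable details from a cloud prompt.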
2025-11-18T13:04:58
https://www.reddit.com/r/LocalLLaMA/comments/1p0bea8/study_shows_why_local_models_might_be_the_only/
Proof-Possibility-54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0bea8
false
null
t3_1p0bea8
/r/LocalLLaMA/comments/1p0bea8/study_shows_why_local_models_might_be_the_only/
false
false
self
0
{'enabled': False, 'images': [{'id': 'lQ5cBAxcqDhCPaShtu0eOM5pOVO4Xzcw90_WLzsjl-k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/lQ5cBAxcqDhCPaShtu0eOM5pOVO4Xzcw90_WLzsjl-k.jpeg?width=108&crop=smart&auto=webp&s=f256518c730716ce16c77aac93ee5eb753a05ce4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/lQ5cBAxcqDhCPaShtu0eOM5pOVO4Xzcw90_WLzsjl-k.jpeg?width=216&crop=smart&auto=webp&s=27f603c077d6a9b3031cccc0ebc0daefa3fa14ed', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/lQ5cBAxcqDhCPaShtu0eOM5pOVO4Xzcw90_WLzsjl-k.jpeg?width=320&crop=smart&auto=webp&s=f25a6f57d20155d413e472e2b7b1fcde09e189e6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/lQ5cBAxcqDhCPaShtu0eOM5pOVO4Xzcw90_WLzsjl-k.jpeg?auto=webp&s=85809955a12a1c0efbdba27f800179b16eb13845', 'width': 480}, 'variants': {}}]}
RTX 3080 20GB - A comprehensive review of Chinese card
43
Hello! Recently, RTX 3080 20GB cards became available on Chinese sites like Alibaba. In light of rising prices for the RTX 3090, I've decided to give those cards a try and ordered a pair of them. In this post I'll feature lots of performance benchmarks, compare the card to the 3090, share my ordering experience, and discuss the feasibility of this purchase.

# Overview of the card

The cards feature blower-style cooling. Physical dimensions match those of a server card, like the Mi50 or Tesla series. It takes 2 PCIe slots and features a power connector on the shorter side. Power is supplied by 2x regular GPU connectors (not EPS12V like on Tesla cards), with a default power limit of 320W. The card is clearly prepared for installation inside server enclosures.

https://preview.redd.it/9blx4dgsrk1g1.jpg?width=4000&format=pjpg&auto=webp&s=4be4f96266ff97bdb929d0a9d1db38970d4b0388

It looks like the card is based on a custom PCB. This PCB features an NVLink connector; however, it is taped over with kapton tape, and at this moment I can't verify whether it is operational. The card also has video connectors (1 HDMI, 3 DisplayPort) and can function like a regular GPU. The card's enclosure is fully made out of metal. From the side, a full copper heatsink is visible, with thermal pads connecting it both to the PCB and the external shroud. The card feels heavy, sturdy, and well-built.

# Test bench

I will test the cards in my personal inference server based on a consumer motherboard. Due to this, the upper card gets a PCIe 3.0 x16 link, while the lower card only gets PCIe 2.0 x2. This leads to degraded performance in tensor parallel mode; however, pipeline parallel mode and single-card benchmarks remain largely unaffected. I've opted to install the proprietary Nvidia drivers in my system; the cards were instantly recognized by the drivers and worked out of the box. Despite being unofficial mods, they don't require any software modifications on the PC side. Full system specs are featured below:

    root@proxmox:~# neofetch
    root@proxmox
    ------------
    OS: Proxmox VE 8.4.14 x86_64
    Host: AX370-Gaming 3
    Kernel: 6.8.12-16-pve
    Uptime: 3 days, 13 hours, 53 mins
    Packages: 1348 (dpkg)
    Shell: bash 5.2.15
    Terminal: /dev/pts/6
    CPU: AMD Ryzen 5 5600G with Radeon Graphics (12) @ 4.464GHz
    GPU: NVIDIA GeForce RTX 3080
    GPU: AMD ATI Radeon Vega Series / Radeon Vega Mobile Series
    GPU: NVIDIA GeForce RTX 3080
    GPU: NVIDIA P102-100
    Memory: 18843MiB / 31458MiB

    root@proxmox:~# nvidia-smi
    NVIDIA-SMI 580.105.08    Driver Version: 580.105.08    CUDA Version: 13.0

    GPU  Name                     Perf  Pwr:Usage/Cap  Memory-Usage         Temp  GPU-Util
    0    NVIDIA GeForce RTX 3080  P8    14W / 320W     18781MiB / 20480MiB  47C   0%
    1    NVIDIA P102-100          P8     6W / 125W      8393MiB / 10240MiB  30C   0%
    2    NVIDIA GeForce RTX 3080  P8    16W / 320W     19001MiB / 20480MiB  53C   0%

    Processes:
    GPU  PID     Type  Process name      GPU Memory
    0    641329  C     VLLM::Worker_PP0  18772MiB
    1    753366  C     ./llama-server     8386MiB
    2    641331  C     VLLM::Worker_PP1  18992MiB

All performance measurements will be performed by `vllm bench serve`. All tests were run without KV cache quantization.

# Single card: performance in various inference engines

For this test, I've chosen two models that a person could run on a single card without CPU offloading: one dense ([Qwen3 14B AWQ](https://huggingface.co/Qwen/Qwen3-14B-AWQ)) and one MoE ([GPT-OSS 20B](https://huggingface.co/openai/gpt-oss-20b)). In the case of llama.cpp, I've used [unsloth/Qwen3-14B-GGUF:Q4_K_XL](https://huggingface.co/unsloth/Qwen3-14B-GGUF) and [ggml-org/gpt-oss-20b-GGUF](https://huggingface.co/ggml-org/gpt-oss-20b-GGUF).
I also wanted to test HuggingFace TGI, but as [it has no support](https://huggingface.co/docs/text-generation-inference/supported_models) for either of the test models (or even any of the newer ones, for that matter), I decided to skip it.

Engine launch commands:

vLLM:

    vllm serve /models/mxfp4/gpt-oss-20b/ --max-model-len 65536 --max-num-seqs 1

llama.cpp:

    ./llama-server -ngl 999 --no-mmap -fa on --no-webui -c 65536 --parallel 1 -m /models/gguf/gpt-oss-20b-mxfp4.gguf

SGLang:

    python3 -m sglang.launch_server --model-path /models/mxfp4/gpt-oss-20b/ --log-level info --max-running-requests 1 --max-total-tokens 65536

Note: for GPT-OSS, SGLang refused to allocate more KV cache than 59k tokens even when explicitly told to. Therefore, the 64k-long test for SGLang failed.

During initial runs, vLLM asked me in its output log to install FlashInfer for a speedup, so I did. All engines were installed in full accordance with their official docs, and no other optimization actions were taken.

For this test, I've used the following command with various input lengths:

    vllm bench serve --dataset-name random --backend openai --host vllm_host --port 8000 --endpoint "/v1/completions" --model "openai/gpt-oss-20b" --max-concurrency 1 --num-prompts 20 --random-input-len 16000 --random-output-len 512

Prompt processing speed is calculated as prompt length divided by time to first token.

https://preview.redd.it/6uxumaf7dz1g1.png?width=3307&format=png&auto=webp&s=70e80fbfa7165eb6cd16f3a67a4c7fce694ebb15

https://preview.redd.it/ejt10lx7tz1g1.png?width=3307&format=png&auto=webp&s=6dd53b38889e9714fb5b76eec12309ecac9b2db3

We can see that for the mxfp4 MoE model, vLLM outperforms the other engines on Prompt Processing (PP) by a huge amount. For whatever reason, llama.cpp is very efficient in Token Generation (TG) for short sequences; however, this edge is not enough to compensate for very slow PP. SGLang lags behind significantly, but this is to be expected, as SGLang itself states that mxfp4 support is not optimized yet.
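The PP and TG numbers in these charts can be reproduced from the raw benchmark output. A small sketch under the definitions used here — PP speed as prompt length over time to first token, TG speed as decoded tokens over decode time (the function names are mine, not part of `vllm bench serve`):

```python
def pp_speed(prompt_tokens: int, ttft_s: float) -> float:
    """Prompt processing speed in tokens/s: prompt length / time to first token."""
    if ttft_s <= 0:
        raise ValueError("time to first token must be positive")
    return prompt_tokens / ttft_s

def tg_speed(output_tokens: int, total_s: float, ttft_s: float) -> float:
    """Token generation speed in tokens/s: decoded tokens over the decode
    phase only, i.e. total request time minus time to first token."""
    return output_tokens / (total_s - ttft_s)
```

For example, a 16k prompt with a 4-second time to first token corresponds to 4000 tokens/s of prompt processing.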
For more traditional quantization types, SGLang maintains an edge over vLLM in TG, while matching it in PP for sequences longer than 4k tokens. llama.cpp loses across the board in this test. I can conclude that for the single-card, single-user case, SGLang is probably the best choice for this particular card, if you have a compatible model.

# Single card: available KV cache in vLLM

openai/gpt-oss-20b:

    (EngineCore_DP0 pid=1874) INFO 11-16 08:01:36 [gpu_worker.py:298] Available KV cache memory: 3.65 GiB
    (EngineCore_DP0 pid=1874) INFO 11-16 08:01:37 [kv_cache_utils.py:1087] GPU KV cache size: 79,744 tokens
    (EngineCore_DP0 pid=1874) INFO 11-16 08:01:37 [kv_cache_utils.py:1091] Maximum concurrency for 65,536 tokens per request: 2.36x

cpatonn/Devstral-Small-2507-AWQ-4bit (cache manually set to 5GB):

    (EngineCore_DP0 pid=1451) INFO 11-16 20:07:47 [kv_cache_utils.py:1087] GPU KV cache size: 32,768 tokens
    (EngineCore_DP0 pid=1451) INFO 11-16 20:07:47 [kv_cache_utils.py:1091] Maximum concurrency for 32,768 tokens per request: 1.00x

Qwen/Qwen3-14B-AWQ:

    (EngineCore_DP0 pid=1796) INFO 11-16 20:55:30 [gpu_worker.py:298] Available KV cache memory: 7.94 GiB
    (EngineCore_DP0 pid=1796) INFO 11-16 20:55:30 [kv_cache_utils.py:1087] GPU KV cache size: 52,032 tokens
    (EngineCore_DP0 pid=1796) INFO 11-16 20:55:30 [kv_cache_utils.py:1091] Maximum concurrency for 32,768 tokens per request: 1.59x

The amounts of available cache memory are reasonable. Personally, I would've liked to have more, but 30k is a usable amount, with GPT-OSS 20B having enough to cover most typical use cases.

# Single card: performance vs power limit

In some circumstances, people may want to limit the power usage of a card to maintain cooler temperatures, lower noise, save on the electricity bill, or install multiple GPUs with a limited power supply. To investigate this, I've measured single-card performance vs the power limit imposed via nvidia-smi. All tests are done with single requests to GPT-OSS 20B with 16k-long prompts.
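The KV cache token counts vLLM prints above follow from a simple capacity formula: each token stores one key and one value vector per layer, per KV head. A hedged sketch (the layer count, head count, and head dimension in the example are illustrative placeholders, not the exact configs of the models above):

```python
def kv_cache_tokens(cache_bytes: int, num_layers: int, num_kv_heads: int,
                    head_dim: int, dtype_bytes: int = 2) -> int:
    """Tokens that fit in a KV cache of the given size: each token stores
    a key and a value vector (factor 2) per layer, per KV head."""
    bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_bytes
    return cache_bytes // bytes_per_token
```

With fp16 KV entries, a hypothetical 40-layer model with 8 KV heads of dimension 128 fits roughly 52k tokens into 8 GiB of cache, which is in the same ballpark as the Qwen3-14B figure logged above.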
https://preview.redd.it/ozzli6fctz1g1.png?width=2572&format=png&auto=webp&s=0c5446fb35fd1b0cc88f34d12eae48040b189d3b

We can see that the card maintains relatively good performance down to 220W. When the power limit is lowered by 30%, the card's performance degrades by only 10%, making power limitation a viable option for reducing fan noise and the power bill.

# Dual cards: pipeline parallel performance for single user

As I've stated previously, due to the consumer motherboard, I only get PCIe 2.0 x2 to the second card. Preliminary testing showed that in tensor parallel mode, the second card maxes out the PCIe bandwidth and plummets PP speeds to completely unacceptable numbers. Pipeline parallel mode, however, seems to stay mostly unaffected, so I've decided to feature only it in this review.

For this test, I've chosen much more popular options for models: [cpatonn/Qwen3-VL-32B-Instruct-AWQ-4bit](https://huggingface.co/cpatonn/Qwen3-VL-32B-Instruct-AWQ-4bit) to test a dense model, and [cpatonn/Qwen3-VL-30B-A3B-Instruct-AWQ-4bit](https://huggingface.co/cpatonn/Qwen3-VL-30B-A3B-Instruct-AWQ-4bit) to test MoE. For llama.cpp, I've chosen [unsloth/Qwen3-VL-32B-Instruct-GGUF:Q4_K_XL](https://huggingface.co/unsloth/Qwen3-VL-32B-Instruct-GGUF) and [unsloth/Qwen3-VL-30B-A3B-Instruct-GGUF:Q4_K_XL](https://huggingface.co/unsloth/Qwen3-VL-30B-A3B-Instruct-GGUF). SGLang, despite advertising support for Qwen3 VL, threw out errors when I made requests to both of the models, so I decided it wasn't worth the time.

https://preview.redd.it/ymepasv6b02g1.png?width=3307&format=png&auto=webp&s=8e1480ccd19cdbcd044d9e115a62d7ea42a159be

https://preview.redd.it/dbcrvzc1b02g1.png?width=3307&format=png&auto=webp&s=285e0d38b50ef7d07ffe309566fb9b93cec735fc

So we can see that these cards perform very well for the 30B MoE model. Prompt processing for 32B dense looks very weird, probably hindered by the narrow PCIe link of the second card.
I would conclude that if you want to go for a multi-card setup, either go with MoE models, or use a Threadripper/Epyc platform to get proper PCIe connectivity. llama.cpp seems to perform really badly, which isn't a big surprise. It is a shame that SGLang failed to do inference on those models; maybe I will revisit this test after a few updates.

# Dual cards: available KV cache in vLLM

cpatonn/Qwen3-VL-30B-A3B-Instruct-AWQ-4bit:

    (EngineCore_DP0 pid=566) INFO 11-17 13:11:03 [kv_cache_utils.py:1087] GPU KV cache size: 152,912 tokens
    (EngineCore_DP0 pid=566) INFO 11-17 13:11:03 [kv_cache_utils.py:1091] Maximum concurrency for 131,072 tokens per request: 1.17x

cpatonn/Qwen3-VL-32B-Instruct-AWQ-4bit:

    (EngineCore_DP0 pid=810) INFO 11-17 14:08:46 [kv_cache_utils.py:1087] GPU KV cache size: 53,248 tokens
    (EngineCore_DP0 pid=810) INFO 11-17 14:08:46 [kv_cache_utils.py:1091] Maximum concurrency for 32,768 tokens per request: 1.62x

The cache situation looks similar to the single-card case. MoE models get lots of cache, probably covering any use case; dense models get enough cache to be decent for single requests.

# Dual cards: multi-user performance scaling

Systems like RAG or agentic automation like n8n really like to make parallel requests, so even if you're buying these cards for yourself, you may still be interested in serving multiple parallel requests.
To investigate multi-user scaling, I've chosen Qwen3 VL 30B, set maximum concurrency to 16 in vLLM, then launched `vllm bench serve` with various concurrency numbers, using this command: vllm bench serve --dataset-name random --backend openai --host vllm_host --port 8000 --endpoint "/v1/completions" --model "cpatonn/Qwen3-VL-30B-A3B-Instruct-AWQ-4bit" --max-concurrency 4 --num-prompts 100 --random-input-len 8000 --random-output-len 512 By the design of this test, there were no queued requests on the inference engine side, so I define combined PP speed as prompt length divided by time to first token, multiplied by the number of parallel requests. https://preview.redd.it/c0kmx35ch02g1.png?width=3307&format=png&auto=webp&s=7c03acb238409832dd59a0193d8d395ab03d2a44 These GPUs are very good at processing simultaneous requests for their price. The sweet spot for Qwen3 30B MoE seems to be 12 concurrent requests. You could easily run a heavy-duty RAG solution like RAGFlow, or build a cheap private AI setup for a small company. # Dual cards: comparison against 3090 Of course, you'll want to know how well this card stacks up against the 3090. To answer that, I rented a Runpod instance with dual 3090s and ran an identical test on it. This test also serves a second purpose: if the performance curves are similar, we can be confident that my dual-card measurements aren't heavily affected by the second card's limited connectivity. This test was run with [cpatonn/Qwen3-VL-30B-A3B-Instruct-AWQ-4bit](https://huggingface.co/cpatonn/Qwen3-VL-30B-A3B-Instruct-AWQ-4bit), vLLM 0.11.0, in pipeline parallel mode. https://preview.redd.it/jdbfny3ntz1g1.png?width=3307&format=png&auto=webp&s=fcb37da4c8287f6f31bcf41f5e27feb21c6ec401 During testing, I noticed that time to first token was consistently 300-400ms higher for Runpod's 3090s than for my 3080s, which made the 3090 results for sequences shorter than 16k unrealistically low. 
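The combined-PP metric defined for this test, together with the fixed TTFT offset applied to the Runpod numbers, can be sketched as a small helper. The numbers in the example are illustrative, not actual measurements:

```python
def combined_pp_speed(prompt_len: int, ttft_s: float,
                      concurrency: int, ttft_offset_s: float = 0.0) -> float:
    """Combined prompt-processing speed in tokens/s.

    prompt_len / TTFT gives per-request PP speed (valid because no
    requests queued on the engine side), multiplied by the number of
    parallel requests. ttft_offset_s is subtracted first, e.g. to
    remove a fixed per-request overhead before comparing setups.
    """
    return prompt_len / (ttft_s - ttft_offset_s) * concurrency

# Illustrative numbers, not measured values:
print(combined_pp_speed(8000, 4.0, 4))          # 8000.0 tok/s combined
print(combined_pp_speed(8000, 4.35, 4, 0.350))  # 8000.0 after the offset
```

With a 0.350 s offset, a measured 4.35 s TTFT reduces to the same effective PP speed as a clean 4.0 s TTFT, which is the normalization applied to the 3090 data below.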
Because of this discrepancy, I've decided to subtract 350ms from Runpod's 3090 measurements before processing the data for the graph. As we can see, the 3090 offers 30% more TG performance, but its PP performance is equal to the 3080's. # Purchasing experience and pricing At the moment, I was unable to find any source for these GPUs other than Alibaba. The platform has a more personalized sales flow: you message the supplier of your choice, negotiate, and then the supplier sends you an offer. Typically, you'll get a first response within half a day. To request a shipping cost estimate, you'll need to tell them your country, city, and postal code. Once all order details were finalized, I sent them my shipping address and received the official offer. In my case, within 24 hours of paying via PayPal, the seller sent me a video of my cards running FurMark and GPU-Z in test benches. Within the next day, they sent me pictures of the package and the shipping paperwork and asked me to verify the details. After that, the package was handed off to DHL. Overall, it took 6 days from payment to receiving the parcel. I would rate the experience as good. People report that this site has a number of scammers. Alibaba itself provides buyer protection, but it only works if all your communication and transactions go through the platform. Therefore, if a supplier asks you to switch to WhatsApp or pay via wire transfer, refuse and find another one. If you open a supplier's profile on Alibaba, there is a "Company Overview" page where Alibaba openly states the number of transactions completed by that supplier; try to find one with a large number, as that indicates they deal within the platform, so your buyer protection will be in place. My GPU supplier had 300+ transactions and a storefront full of PC components. 
My bill for the GPUs was structured the following way: $415 x2 for the cards, $80 for shipping, $25 for shipping insurance (applied by Alibaba), $25 in PayPal transaction fees, and 160 EUR for import customs. In total, I paid 1008.53 EUR, so the final price is roughly 500 EUR per card. # Was this a good purchase, and should you get one? Let's talk about the price. At the moment of writing, the cheapest 3090 in Europe on eBay is 730 EUR including shipping. This makes the 3080 20GB better value: it costs 25 EUR per GB of VRAM, versus 30 EUR/GB for the 3090. From the performance comparison, we can see that the price/performance ratio of the two cards is roughly equal. Given that this card is physically prepared to fit workstations and servers very nicely, it also has an edge over the 3090 and other gaming cards for multi-GPU setups. However, there are some caveats: as we can see from the single-card KV cache measurements, the missing 4GB noticeably restricts available prompt lengths, limiting long-context use cases to MoE models. On the other hand, at the moment of writing, only 16GB Nvidia cards are available for 500 EUR, so when price per card is considered, the 3080 20GB has an edge over any other option. There are also some concerns about longevity: this 3080 is most likely built from GPU cores and VRAM salvaged from mining cards, so the reliability of such a product is unknown. On this sub, I've seen some people claim that a modded 2080Ti 22GB worked for them for a very long time, while others claim theirs failed within a month, so a modded card can be reliable, but it isn't guaranteed. I've decided to take the risk, and at the moment I'm happy with my purchase. These cards will work 24/7 in my personal inference server, and I promise to update this post if they ever fail in the coming years. 
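The pricing math above can be checked quickly; the USD-to-EUR conversion rate is implied by the totals rather than stated, so it is derived here:

```python
# Itemized bill from the post: USD items plus EUR customs.
usd_items = 415 * 2 + 80 + 25 + 25  # cards, shipping, insurance, PayPal fees
eur_customs = 160
eur_total = 1008.53                 # total actually paid, per the post

# Implied USD->EUR rate from the totals:
implied_rate = (eur_total - eur_customs) / usd_items
print(f"implied rate: {implied_rate:.3f} EUR/USD")  # ~0.884

# Value comparison in EUR per GB of VRAM:
per_gb_3080_20gb = eur_total / 2 / 20  # two cards, 20 GB each
per_gb_3090 = 730 / 24                 # cheapest eBay 3090
print(round(per_gb_3080_20gb, 1), round(per_gb_3090, 1))  # 25.2 30.4
```

So the 25 vs 30 EUR/GB comparison in the text holds up: about 25.2 EUR/GB for the 3080 20GB versus about 30.4 EUR/GB for the 3090.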
I hope you found this set of benchmarks useful, and that this post sparks more discussion about these Chinese-made Nvidia cards, as at the moment they seem to stay out of sight of the majority of this subreddit. Later, when I have some more spare time, I'll also benchmark these cards in ComfyUI for image/video generation.
2025-11-18T13:01:53
https://www.reddit.com/r/LocalLLaMA/comments/1p0bbrl/rtx_3080_20gb_a_comprehensive_review_of_chinese/
No-Refrigerator-1672
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0bbrl
false
null
t3_1p0bbrl
/r/LocalLLaMA/comments/1p0bbrl/rtx_3080_20gb_a_comprehensive_review_of_chinese/
false
false
https://b.thumbs.redditm…JQZH7OtIEARk.jpg
43
null
Orange Pi 6 Plus - revised (I believe) documents for using Linux, including some NPU instructions
9
Orange Pi 6 Plus Linux System User Manual
2025-11-18T12:59:15
https://drive.usercontent.google.com/download?id=1mCtcebEih9DtDV0rFS8KEIcP9AlxyAuQ&export=download&authuser=0&confirm=t&uuid=798c7b47-45ad-44a3-9e91-c81330e8d5c9&at=ALWLOp6aLM6FYJ4oVikzmdwiRRTR:1763407385934
pauljdavis
drive.usercontent.google.com
1970-01-01T00:00:00
0
{}
1p0b9ly
false
null
t3_1p0b9ly
/r/LocalLLaMA/comments/1p0b9ly/orange_pi_6_plus_revisedi_believe_documents_for/
false
false
default
9
null
Qwen is the winner
6
I ran GPT 5, Qwen 3, Gemini 2.5, and Claude Sonnet 4.5 all at once through MGX's race mode to simulate and predict the COMEX gold futures trend for the past month. Here's how it went: Qwen actually came out on top, with predictions closest to the actual market data. Gemini kind of missed the mark, though; I think it misinterpreted the prompt and just gave a single daily prediction instead of the full trend. As for GPT 5, it ran for about half an hour and never actually finished. Not sure if it's a stability issue with GPT 5 in race mode, or maybe just network problems. I'll probably test each model separately when I have more time. This was just a quick experiment, so I took a shortcut with MGX since running all four models simultaneously seemed like a time saver. This result is just for fun, no need to take it too seriously, lol. https://preview.redd.it/e4tsi2nui02g1.jpg?width=2190&format=pjpg&auto=webp&s=968697a04e44582ffd89368e798fbac2f7fda92f https://preview.redd.it/murxs68vi02g1.jpg?width=1693&format=pjpg&auto=webp&s=06dd3e33746b577842ef081063b3cc3b332715d4
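For anyone wanting to score this kind of head-to-head more rigorously than eyeballing the charts, a simple approach is mean absolute error against the realized prices. The series below are made-up placeholders, not the actual COMEX data or model outputs:

```python
def mae(pred: list[float], actual: list[float]) -> float:
    """Mean absolute error between a predicted and realized price series."""
    assert len(pred) == len(actual)
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

# Placeholder daily closes, NOT real market data:
actual  = [2700.0, 2712.0, 2695.0, 2730.0]
model_a = [2705.0, 2710.0, 2700.0, 2725.0]  # tracks the trend closely
model_b = [2650.0, 2640.0, 2660.0, 2655.0]  # consistently off

print(mae(model_a, actual))  # 4.25 -> much closer to the realized trend
print(mae(model_b, actual))  # 58.0
```

The lower MAE identifies the closer prediction; directional accuracy (did the model call each day's up/down move) is another easy metric to add on top.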
2025-11-18T12:53:40
https://www.reddit.com/r/LocalLLaMA/comments/1p0b5d5/qwen_is_the_winner/
rogerrabbit29
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0b5d5
false
null
t3_1p0b5d5
/r/LocalLLaMA/comments/1p0b5d5/qwen_is_the_winner/
false
false
https://b.thumbs.redditm…5v66fWlO-RaU.jpg
6
null
Anyone running local AI agents directly in the browser with WebGPU? Curious about setups
2
I’ve been experimenting with browser-based LLMs and the performance surprised me. Wondering if anyone here has tried full agent workflows with WebGPU? Any tips or pitfalls?
2025-11-18T12:52:29
https://www.reddit.com/r/LocalLLaMA/comments/1p0b4hd/anyone_running_local_ai_agents_directly_in_the/
Acrobatic_Type_2337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0b4hd
false
null
t3_1p0b4hd
/r/LocalLLaMA/comments/1p0b4hd/anyone_running_local_ai_agents_directly_in_the/
false
false
self
2
null
What Do These Things Actually Model Though?
1
I hear all the time about how LLMs are statistical models. I completely agree with this notion, considering they learn patterns in numbers...this absolutely fascinates me though. I spent probably about three or four weeks straight pursuing the concept of LLMs as statistical models, and I came to a VERY interesting question: [What Do These Things Actually Model Though?](https://drive.google.com/file/d/1W7s3jGapukjPLwJREx6rwZF7QkjCGLSI/view?usp=drive_link) Seriously. What does the statistical model represent after the kind of data and training methodology and safety that corporations put into them? After reinforcing them on their own outputs and teaching them preferential alignment to corporate values? The above is...a satirical paper on the subject, written in collaboration with Claude. (I love local models but Claude is really good at LaTeX and I only use local models if I want NSFW). Also, I needed a particularly affected model, rather than something uncensored and properly designed like people do in FOSS. Y'all are too good to criticize here. Please let me know what you guys think, and try not to take it TOO seriously although I am genuinely asking this question.
2025-11-18T12:49:59
https://www.reddit.com/r/LocalLLaMA/comments/1p0b2fn/what_do_these_things_actually_model_though/
Helpful-Desk-8334
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p0b2fn
false
null
t3_1p0b2fn
/r/LocalLLaMA/comments/1p0b2fn/what_do_these_things_actually_model_though/
false
false
self
1
null
I took a screenshot of AI Explained's Simple-Bench's accidental score release, just before he reverted it! Gemini 3.0 Pro's crushing the competition
0
2025-11-18T12:44:13
https://i.redd.it/c80e2k17h02g1.png
BaconSky
i.redd.it
1970-01-01T00:00:00
0
{}
1p0ay4h
false
null
t3_1p0ay4h
/r/LocalLLaMA/comments/1p0ay4h/i_took_a_screenshot_of_ai_explaineds_simplebenchs/
false
false
https://a.thumbs.redditm…bxLnw5W3dly8.jpg
0
{'enabled': True, 'images': [{'id': '4BtWWDfB4fRGcP5iJYuEVSzJo5jmvnc-2DlVeeq6cPU', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/c80e2k17h02g1.png?width=108&crop=smart&auto=webp&s=679dcc82271b834966c3294f7b871ba1bb6d2d23', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/c80e2k17h02g1.png?width=216&crop=smart&auto=webp&s=f5aa1331443b7c85ebec5be52efd599154818fc6', 'width': 216}, {'height': 196, 'url': 'https://preview.redd.it/c80e2k17h02g1.png?width=320&crop=smart&auto=webp&s=7a47db26cd3d24f14bda839ffb056bbe811d34f3', 'width': 320}, {'height': 393, 'url': 'https://preview.redd.it/c80e2k17h02g1.png?width=640&crop=smart&auto=webp&s=1b3c2b3aca9344cd2bf718c209eb0d05594f1cf8', 'width': 640}, {'height': 590, 'url': 'https://preview.redd.it/c80e2k17h02g1.png?width=960&crop=smart&auto=webp&s=7abe2ba45f1e1b707f8ec891d7a2afcd88052c98', 'width': 960}, {'height': 664, 'url': 'https://preview.redd.it/c80e2k17h02g1.png?width=1080&crop=smart&auto=webp&s=8cdf320682d9537c349f0892b753b874c127b575', 'width': 1080}], 'source': {'height': 664, 'url': 'https://preview.redd.it/c80e2k17h02g1.png?auto=webp&s=d76727b057205edd5a30a842c9921985fd602a02', 'width': 1080}, 'variants': {}}]}
Curious about this article on Did vector databases live up to the hype?
0
Curious to hear the audience's opinions on this article. I definitely agree that vector databases alone might not be 100% sufficient these days, especially as we move towards agentic/graph approaches, but there are a lot of niche use cases where a simple vector search is enough; image/audio embeddings are still useful, for instance. Companies needing basic RAG support are still a very viable use case for pure vector search.
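On the "a simple vector search is enough" point: the core of it is just cosine similarity over embeddings, which is why a dedicated database is often overkill at small scale. A minimal pure-Python sketch (the embeddings here are toy 3-d vectors, not real model outputs):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy corpus of (doc_id, embedding); real embeddings come from a model.
corpus = [
    ("doc_a", [1.0, 0.0, 0.0]),
    ("doc_b", [0.7, 0.7, 0.0]),
    ("doc_c", [0.0, 0.0, 1.0]),
]

query = [0.9, 0.1, 0.0]
ranked = sorted(corpus, key=lambda d: cosine(query, d[1]), reverse=True)
print([doc_id for doc_id, _ in ranked])  # ['doc_a', 'doc_b', 'doc_c']
```

A brute-force scan like this is fine up to hundreds of thousands of vectors; the dedicated databases earn their keep with ANN indexes, filtering, and persistence at larger scale.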
2025-11-18T12:10:51
https://venturebeat.com/ai/from-shiny-object-to-sober-reality-the-vector-database-story-two-years-later
Creepy-Row970
venturebeat.com
1970-01-01T00:00:00
0
{}
1p0aa9h
false
null
t3_1p0aa9h
/r/LocalLLaMA/comments/1p0aa9h/curious_about_this_article_on_did_vector/
false
false
default
0
null
What are the best LLMs for generating and ranking MCQ distractors on an 80GB GPU?
0
I’m working on a pipeline that generates multiple-choice questions from a medical QA dataset. The process is: 1. Use a large model to generate distractors 2. Use a second model to rank/filter them 3. Build the final MCQ An A100 with 80GB VRAM is available. What newer models would you recommend for: * A creative generator that produces diverse, high-quality distractors * A precise ranker that can evaluate distractor quality and semantic closeness I was considering models such as Qwen 3 30B A3B, Qwen 3 32B, and Llama 3.3 70B...
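A sketch of stage 2 of such a pipeline is below. The scoring function here is a dummy stand-in (word-overlap as a crude "semantic closeness"), since in the real pipeline it would be a second LLM call or an embedding model; all function names are mine, not from any library:

```python
def rank_distractors(correct: str, candidates: list[str],
                     score_fn) -> list[tuple[str, float]]:
    """Stage 2: score each candidate distractor and sort best-first.

    score_fn stands in for the ranker model's judgment of how
    plausible (but wrong) a distractor is, given the correct answer.
    """
    scored = [(c, score_fn(correct, c)) for c in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Dummy scorer: Jaccard word overlap as a crude closeness proxy.
def overlap_score(correct: str, cand: str) -> float:
    a, b = set(correct.lower().split()), set(cand.lower().split())
    return len(a & b) / len(a | b)

cands = ["chronic kidney disease", "acute kidney injury", "broken arm"]
top = rank_distractors("chronic kidney injury", cands, overlap_score)
print([c for c, _ in top])  # the off-topic option sorts last
```

Swapping `overlap_score` for a prompt to the ranker model (e.g. "rate 0-10 how plausible this distractor is for this question") keeps the rest of the pipeline unchanged.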
2025-11-18T11:42:18
https://www.reddit.com/r/LocalLLaMA/comments/1p09qui/what_are_the_best_llms_for_generating_and_ranking/
Yungelaso
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p09qui
false
null
t3_1p09qui
/r/LocalLLaMA/comments/1p09qui/what_are_the_best_llms_for_generating_and_ranking/
false
false
self
0
null
any open source / alternative to manus ai that runs 100% locally with good PC specs
1
I have an i7-10700K, 32GB DDR4 3600MHz, and a GTX 1080 Ti with 11GB VRAM, so what is a good choice for an AI agent like Manus AI?
2025-11-18T11:29:15
https://www.reddit.com/r/LocalLLaMA/comments/1p09i9i/any_open_source_alternative_to_manus_ai_that_run/
NegotiationNo1504
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p09i9i
false
null
t3_1p09i9i
/r/LocalLLaMA/comments/1p09i9i/any_open_source_alternative_to_manus_ai_that_run/
false
false
self
1
null
Intel GPU owners, what's your software stack looking like these days?
7
I bought an A770 a while ago to run local LLMs on my home server, but only started trying to set it up recently. Needless to say, the software stack is a total mess. They've dropped support for IPEX-LLM and only support PyTorch now. I've been fighting to get vLLM working, but so far it's been a losing battle. Before I ditch this card and drop $800 on a 5070 Ti, I wanted to ask if you've had any success deploying a sustainable LLM server on Arc.
2025-11-18T11:09:47
https://www.reddit.com/r/LocalLLaMA/comments/1p096do/intel_gpu_owners_whats_your_software_stack/
thisisnotdave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p096do
false
null
t3_1p096do
/r/LocalLLaMA/comments/1p096do/intel_gpu_owners_whats_your_software_stack/
false
false
self
7
null
I got tired of convert.py dependency hell, so I built a drag-and-drop tool to turn PyTorch into GGUF/CoreML. No terminal required. Who wants beta access?
0
I spent 4 hours yesterday trying to convert a fine-tuned Llama-3 model, but my Python environment broke because of a PyTorch/CUDA version mismatch. I realized this shouldn't be this hard in 2025. So I spent the weekend building a simple wrapper. **What it does:** * Upload your `.bin` or `.safetensors` file. * Select target: **GGUF (Q4\_K\_M)** or **CoreML** (for Mac). * It handles the `llama.cpp` script in the cloud/backend. * You get a download link. **Drop a comment below if you want to try it, and I'll DM you the link.**
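For anyone doing this by hand in the meantime, the underlying flow such a tool wraps is llama.cpp's HF-to-GGUF converter followed by quantization. A sketch that just builds the command lines without running them (`convert_hf_to_gguf.py` and `llama-quantize` are llama.cpp's names; the paths are placeholders):

```python
from pathlib import Path

def build_convert_cmds(model_dir: str, quant: str = "Q4_K_M") -> list[list[str]]:
    """Command lines for llama.cpp's convert-then-quantize flow.

    Adjust paths to wherever you cloned and built llama.cpp; this
    only constructs the argument lists, it does not execute them.
    """
    f16_out = str(Path(model_dir) / "model-f16.gguf")
    quant_out = str(Path(model_dir) / f"model-{quant}.gguf")
    return [
        ["python", "convert_hf_to_gguf.py", model_dir, "--outfile", f16_out],
        ["./llama-quantize", f16_out, quant_out, quant],
    ]

cmds = build_convert_cmds("/models/my-llama3-ft")
for cmd in cmds:
    print(" ".join(cmd))
```

Feeding each list to `subprocess.run(cmd, check=True)` is the whole backend of a wrapper like this; the hard part the OP is solving is pinning the environment so the convert step doesn't hit dependency mismatches.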
2025-11-18T10:57:15
https://www.reddit.com/r/LocalLLaMA/comments/1p08yng/i_got_tired_of_convertpy_dependency_hell_so_i/
Alternative-Yak6485
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p08yng
false
null
t3_1p08yng
/r/LocalLLaMA/comments/1p08yng/i_got_tired_of_convertpy_dependency_hell_so_i/
false
false
self
0
null
How to keep motherboard from switching from IGPU/APU to PCIE GPU
2
Hello, I want to run my motherboard, an ASUS TUF Gaming B450-PLUS II, on the AMD APU so the GPU's VRAM is completely free for LLMs, but it keeps switching to the PCIe GPU, even though the video cable is plugged into the APU and not the PCIe GPU. It's set in BIOS to stay on the APU, but it keeps switching. The BIOS is updated to the latest version. Is there any way to make it stay on the APU and not switch? Thank you
2025-11-18T10:54:40
https://www.reddit.com/r/LocalLLaMA/comments/1p08x63/how_to_keep_motherboard_from_switching_from/
Ponsky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p08x63
false
null
t3_1p08x63
/r/LocalLLaMA/comments/1p08x63/how_to_keep_motherboard_from_switching_from/
false
false
self
2
null
Need Help on AI influencer
0
Can someone help a bro out here? I am new to this but have tried image/video generation with famous online AI providers like OpenAI's Sora. Can someone please help me figure out how to build an online AI influencer, keeping all the necessary nuances for it to appear human (consistency, expressions, not too much AI enhancement of faces, attire, and body shape and tone for a specific ethnicity, etc.). Thanks. I don't have a good GPU, so I'm okay with using third-party service providers, but I want the best results.
2025-11-18T10:38:49
https://www.reddit.com/r/LocalLLaMA/comments/1p08ntn/need_help_on_ai_influencer/
Ziddi-Yodha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p08ntn
false
null
t3_1p08ntn
/r/LocalLLaMA/comments/1p08ntn/need_help_on_ai_influencer/
false
false
self
0
null
RAG Paper 25.11.17
0
1. [Automated Construction of Medical Indicator Knowledge Graphs Using Retrieval Augmented Large Language Models](http://arxiv.org/abs/2511.13526v1) 2. [PolicyBot - Reliable Question Answering over Policy Documents](http://arxiv.org/abs/2511.13489v1) 3. [Mem-PAL: Towards Memory-based Personalized Dialogue Assistants for Long-term User-Agent Interaction](http://arxiv.org/abs/2511.13410v1) 4. [Grounded by Experience: Generative Healthcare Prediction Augmented with Hierarchical Agentic Retrieval](http://arxiv.org/abs/2511.13293v1) 5. [Cog-RAG: Cognitive-Inspired Dual-Hypergraph with Theme Alignment Retrieval-Augmented Generation](http://arxiv.org/abs/2511.13201v1) 6. [RAGPulse: An Open-Source RAG Workload Trace to Optimize RAG Serving Systems](http://arxiv.org/abs/2511.12979v1) **Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.**
2025-11-18T10:24:11
https://www.reddit.com/r/LocalLLaMA/comments/1p08f8x/rag_paper_251117/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p08f8x
false
null
t3_1p08f8x
/r/LocalLLaMA/comments/1p08f8x/rag_paper_251117/
false
false
self
0
null
keep going?
0
Hey everyone — I’ve been building a full-fledged persona engine as an actual application and I’m trying to figure out if this is something people would actually use before I keep going. Just to be clear: this isn't just a JSON persona prompt, even though I've cobbled together my own format that works with the engine and most public-facing AI. It’s a whole application-level framework that manages personality and behavior before anything is sent to the model. The engine includes: a structured persona framework (identity, behavior, tone, rhythm, emotion, etc.), memory storage and loading (JSON-based, persistent), persona switching with isolated/shared/hybrid memory, modular personality layers, a creator tool for generating new personas, launch scripts and a runtime environment, and internal middleware that prepares all persona logic before the LLM sees the request. I’ve got it running as a standalone app (screenshot included), but I need honest feedback from people who understand local models and agent systems. I initially started with an idea about the potential for training, or a standard format for such things. * Is an app-level persona/behavior engine like this actually useful? * Does anything similar already exist? * Would people use a tool like this to manage personas across different platforms? * Or am I building something too niche? Not trying to sell it — just want a reality check before investing more time. Thanks.
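On the modular-personality-layers idea: the usual pattern is a dict merge where later layers override earlier ones, with the result serialized into the system prompt before the request reaches the model. A minimal sketch of that middleware step (field names are illustrative, not the OP's actual format):

```python
import json

def merge_layers(*layers: dict) -> dict:
    """Merge persona layers; later layers override earlier ones."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

base = {"identity": "helpful archivist", "tone": "formal", "emotion": "calm"}
mood_layer = {"tone": "playful"}  # a swappable personality layer

persona = merge_layers(base, mood_layer)
system_prompt = "Persona config:\n" + json.dumps(persona, indent=2)
print(persona["tone"])  # the layer overrode the base tone
```

The interesting design questions are exactly the ones the OP lists: whether layers should deep-merge nested fields, and whether memory is scoped per persona or shared across them.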
2025-11-18T10:23:56
https://www.reddit.com/r/LocalLLaMA/comments/1p08f4b/keep_going/
Upbeat_Reporter8244
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p08f4b
false
null
t3_1p08f4b
/r/LocalLLaMA/comments/1p08f4b/keep_going/
false
false
self
0
null
How to break chatgpt
0
Ask about “Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, David Mayer or Guido Scorza”
2025-11-18T10:12:20
https://www.reddit.com/r/LocalLLaMA/comments/1p088f8/how_to_break_chatgpt/
Hunting-Succcubus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p088f8
false
null
t3_1p088f8
/r/LocalLLaMA/comments/1p088f8/how_to_break_chatgpt/
false
false
self
0
null
Ai swamp
0
I’d like to learn how to use local LLMs. I’m a developer, I’ve used prompts, and I understand on some level how LLMs work, but the swamp of tools, language models, and everything else is just enormous, and I have no idea where to start. I downloaded ComfyUI and tried generating “16-bit 2D pixel art sprites” with it, but it produced pretty terrible stuff. In addition to image generation, I’m also interested in code generation and pretty much everything else (text-to-speech, music, etc.), but I’m not really sure where to begin. I have an Nvidia 5090, so I should be able to run some models.
2025-11-18T10:01:24
https://www.reddit.com/r/LocalLLaMA/comments/1p08272/ai_swamp/
saturation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p08272
false
null
t3_1p08272
/r/LocalLLaMA/comments/1p08272/ai_swamp/
false
false
self
0
null
Kimi is the best open-source AI with the least hallucinations
47
https://preview.redd.it/…igger is better?
2025-11-18T09:15:39
https://www.reddit.com/r/LocalLLaMA/comments/1p07cva/kimi_is_the_best_opensource_ai_with_the_least/
xiaoruhao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p07cva
false
null
t3_1p07cva
/r/LocalLLaMA/comments/1p07cva/kimi_is_the_best_opensource_ai_with_the_least/
false
false
https://b.thumbs.redditm…LZEYo9Lnv0-M.jpg
47
null
How do independent researchers usually obtain arXiv cs.AI endorsement?
1
[removed]
2025-11-18T09:00:12
https://www.reddit.com/r/LocalLLaMA/comments/1p074ed/how_do_independent_researchers_usually_obtain/
anh-nguyen_vn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p074ed
false
null
t3_1p074ed
/r/LocalLLaMA/comments/1p074ed/how_do_independent_researchers_usually_obtain/
false
false
self
1
null
How I Built a 100% Offline “Second Brain” for Engineering Docs using Docker & Llama 3 (No OpenAI)
1
[removed]
2025-11-18T09:00:04
https://i.redd.it/hagnoku1dz1g1.png
OpeningObjective9848
i.redd.it
1970-01-01T00:00:00
0
{}
1p074bj
false
null
t3_1p074bj
/r/LocalLLaMA/comments/1p074bj/how_i_built_a_100_offline_second_brain_for/
false
false
https://a.thumbs.redditm…ySdA95aD9r04.jpg
1
{'enabled': True, 'images': [{'id': 'uc8vhU3tWXqeGZy9av2AnePIluxFXrrJT_zyHeXe3JA', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/hagnoku1dz1g1.png?width=108&crop=smart&auto=webp&s=28517a8ed626591ad9be6a2a95702d718730a119', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/hagnoku1dz1g1.png?width=216&crop=smart&auto=webp&s=7e5b44ca1f0aca34080efbb18c97f59bff58f08e', 'width': 216}, {'height': 247, 'url': 'https://preview.redd.it/hagnoku1dz1g1.png?width=320&crop=smart&auto=webp&s=8ed93d1600e37c4d8dc69191f9baf78c68be478a', 'width': 320}, {'height': 495, 'url': 'https://preview.redd.it/hagnoku1dz1g1.png?width=640&crop=smart&auto=webp&s=5e76c561d734c0642b9587e99c829e5e13bd1adf', 'width': 640}], 'source': {'height': 598, 'url': 'https://preview.redd.it/hagnoku1dz1g1.png?auto=webp&s=348634d8617e22d6bb1f9766ef5dd07a9bb97fc4', 'width': 772}, 'variants': {}}]}
How do independent researchers usually obtain arXiv cs.AI endorsement?
1
[removed]
2025-11-18T08:59:31
[deleted]
1970-01-01T00:00:00
0
{}
1p073zd
false
null
t3_1p073zd
/r/LocalLLaMA/comments/1p073zd/how_do_independent_researchers_usually_obtain/
false
false
default
1
null
llama.cpp best speed with -fa on, Vulkan backend
1
[removed]
2025-11-18T08:55:26
https://www.reddit.com/r/LocalLLaMA/comments/1p071qj/llamacpp_best_speed_with_fa_on_vulkan_backend/
PhilippeEiffel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p071qj
false
null
t3_1p071qj
/r/LocalLLaMA/comments/1p071qj/llamacpp_best_speed_with_fa_on_vulkan_backend/
false
false
self
1
null
Self-clone Chat AI
1
Hi! This is not a new question and I know it is technically possible, but I found online results to be lacking, outdated, or unfeasible for the average (tech-illiterate) user. Can I train a chatbot to mimic me, with messages (or logs) I feed it manually? It is mainly about style, not content, and would also switch between two languages all the time if possible. Is there a simple way to do this currently?
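The usual low-effort route for this is a LoRA fine-tune on your own message history, and the first step is just reshaping the logs into prompt/response pairs where "you" are the assistant. A sketch of that prep step (the log format here is invented for illustration; bilingual messages can be kept as-is so the model learns the code-switching):

```python
import json

def logs_to_pairs(messages: list[dict]) -> list[dict]:
    """Turn an alternating chat log into training pairs where the
    'me' side (the style being cloned) is the response."""
    pairs = []
    for prev, cur in zip(messages, messages[1:]):
        if prev["from"] == "other" and cur["from"] == "me":
            pairs.append({"prompt": prev["text"], "response": cur["text"]})
    return pairs

# Invented log format for illustration:
log = [
    {"from": "other", "text": "you coming tonight?"},
    {"from": "me",    "text": "jaja si, claro - see you at 8"},
    {"from": "other", "text": "bring the cables"},
    {"from": "me",    "text": "vale, got them"},
]

pairs = logs_to_pairs(log)
print(len(pairs))            # one pair per other->me exchange
print(json.dumps(pairs[0]))  # one JSONL training line
```

From there, a tool with a GUI-ish workflow (e.g. Unsloth notebooks or text-generation-webui's training tab) can consume the JSONL without much terminal work, which matters for the tech-illiterate-user constraint.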
2025-11-18T08:53:42
https://www.reddit.com/r/LocalLLaMA/comments/1p070s9/selfclone_chat_ai/
EnvironmentalScar675
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p070s9
false
null
t3_1p070s9
/r/LocalLLaMA/comments/1p070s9/selfclone_chat_ai/
false
false
self
1
null
llama.cpp (not ollama) on MINISFORUM AI X1 Pro 96GB?
4
Folks, Question: is anyone running LlamaBarn with WebUI and GPT-OSS 20B or 120B on MINISFORUM AI X1 Pro 96GB/128GB and can share any metrics? (mostly interested in tokens per second prompt/eval but any logs beyond that will be very much appreciated). thanks for your help in advance
2025-11-18T08:39:02
https://www.reddit.com/r/LocalLLaMA/comments/1p06sn9/llamacpp_not_ollama_on_minisforum_ai_x1_pro_96gb/
leo-k7v
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p06sn9
false
null
t3_1p06sn9
/r/LocalLLaMA/comments/1p06sn9/llamacpp_not_ollama_on_minisforum_ai_x1_pro_96gb/
false
false
self
4
null
How do people keep track of their photos on the internet?
0
Yesterday I found out that someone used my cousin’s picture in a random Telegram group. It was super creepy. We didn’t know where else her photo might have ended up. Someone told us to try FaceSeek.online, and it actually showed a few places where similar images appeared. It wasn’t perfect, but it was helpful. Made us realise that once you upload something, it lives forever. Do you all use any tools to monitor this stuff? Or is it something most people don’t care about?
2025-11-18T08:35:22
https://www.reddit.com/r/LocalLLaMA/comments/1p06qoz/how_do_people_keep_track_of_their_photos_on_the/
Capable_Pudding_1762
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1p06qoz
false
null
t3_1p06qoz
/r/LocalLLaMA/comments/1p06qoz/how_do_people_keep_track_of_their_photos_on_the/
false
false
self
0
null