| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Let's Do The Motia Way of Backend Development | 0 | 👉👉 Watch Video on "Codedigipt" Youtube Channel | 2025-08-31T06:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/1n4nri7/lets_do_the_motia_way_of_backend_development/ | bipin_25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4nri7 | false | null | t3_1n4nri7 | /r/LocalLLaMA/comments/1n4nri7/lets_do_the_motia_way_of_backend_development/ | false | false | self | 0 | null |
Advice on AI workstation for research use-cases | 3 | Hello all,
I am investing in a workstation to locally host LLMs for myself, and to test research designs that use LLMs where I need absolute control over system prompts and guardrails. For privacy and safety reasons, as well as reproducibility of research, my team doesn't want to rely on the leading APIs. My end goals are to be able to run 70B models for myself and to host 13B-30B models for 2-3 other testers, probably not concurrently in most cases.
Outside of this, I'll use this PC to handle my R tasks, including MCMC and spatial data processes. I currently use an M2 Pro 16gb Macbook for these tasks but I'm getting to the upper bound of what it can do within reasonable time. I plan on using Ubuntu if it matters.
The budget is around 2.5k. I like the Framework desktop but have heard it's not the best bang for your buck, and not sufficiently modular. My main worry is going obsolete in a few years, with a product that doesn't have as much resale value as, say, a Mac. With the release of some local models far above 70B, I also fear the machine won't be capable enough in a year or two.
Any thoughts on these would be greatly appreciated! Thanks for your time. | 2025-08-31T06:19:12 | https://www.reddit.com/r/LocalLLaMA/comments/1n4nf4b/advice_on_ai_workstation_for_research_usecases/ | fIoatingworld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4nf4b | false | null | t3_1n4nf4b | /r/LocalLLaMA/comments/1n4nf4b/advice_on_ai_workstation_for_research_usecases/ | false | false | self | 3 | null |
56GB VRAM achieved: Gigabyte 5090 Windforce OC (65mm width!!) + Galax HOF 3090 barely fit but both running x8/x8 and I just really want to share :) | 88 | Originally planned to put the 3090 in a lower x4 slot, but it wouldn't fit due to PSU-to-case clearance. The builder put the 3090 in the upper x16 slot instead, and the 5090 just barely fit in the second x16.
Both cards running x8/x8 rather than the original planned x16/x4 configuration - but I'm cool with it. The 3090 fans are literally 1mm from the backplate of the 5090 yet the thermals are fine with 7x 140mm case fans. After the anxiety of my dream build I'm not doing heavy testing yet, but now looking to get into serious fine-tuning pretty soon.
I've the developer of a local AI app designed for dual GPU systems (https://github.com/boneylizard/Eloquent) and I've found that with expanded capabilities comes expanded imagination. Haven't done a git push in a while and there's an issue I really need to get around to addressing, but that explains the build. | 2025-08-31T06:10:19 | Gerdel | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n4n9yx | false | null | t3_1n4n9yx | /r/LocalLLaMA/comments/1n4n9yx/56gb_vram_achieved_gigabyte_5090_windforce_oc/ | false | false | 88 | {'enabled': True, 'images': [{'id': 'IIzur18gjrzXgpR9OZoGbVe4iOqZba4_pyTPZ85aAIY', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/k0im592roamf1.jpeg?width=108&crop=smart&auto=webp&s=982e1712f5421f09c9e48e2bcdea0aa71e49a0f2', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/k0im592roamf1.jpeg?width=216&crop=smart&auto=webp&s=1c39ca2348b97fafa31fe05987f232ed541622d5', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/k0im592roamf1.jpeg?width=320&crop=smart&auto=webp&s=e75e3035fe4dad5a621e98c61b27d5eb92f2f748', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/k0im592roamf1.jpeg?width=640&crop=smart&auto=webp&s=bf3307b2439da45726bdc7e079057842ef0c0904', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/k0im592roamf1.jpeg?width=960&crop=smart&auto=webp&s=d7a3fe17cd6dfcc35c7892260fbb7d14ceb4a2b5', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/k0im592roamf1.jpeg?width=1080&crop=smart&auto=webp&s=b38ded37779e7a703b93928ea601d9a9628e6fdc', 'width': 1080}], 'source': {'height': 4096, 'url': 'https://preview.redd.it/k0im592roamf1.jpeg?auto=webp&s=3b8f750c397fa9bda2bd75ec2b9bd6590070a2e8', 'width': 3072}, 'variants': {}}]} | ||
[Project Update] MeganX: New Era, Upgraded Hardware, Next Generation | 0 | Hello r/LocalLLaMA,
The engagement on our last post exceeded all expectations (18k+ views). Your response confirms there is real interest in local-first, efficient, and resilient AI. Here is the latest progress report:
**Successful Migration & Visible Gains**
The system has been migrated from legacy hardware to an optimized Core i3 with 4GB RAM. Result: stability and performance radically improved.
* Before: simple tasks took minutes, frequent crashes.
* Now: tasks complete in tens of seconds, with zero critical failures.
**Planned Evolution: MeganX 2.0 → 3.0**
MeganX 2.0 is reaching the end of its operational cycle. This is strategic: all accumulated knowledge and learned lessons are being transferred to MeganX 3.0.
* Lighter, faster, more resilient
* Faster, consistent responses
* Prepared for future challenges
Context limits become narrative: an “evolutionary spark” generating suspense and anticipation.
**Internal Review Cycle (Our “Secret Sauce”)**
The Critic→Repair loop continues ensuring measurable efficiency gains:
* Anticipates failures
* Improves task performance
* Continuous self-adjustment
Implementation details are proprietary. The effect is observable, measurable, but protected.
**Proof of Progress (Generic & Observable)**
|Hardware|Simple Task|Complex Task|Stability|
|:-|:-|:-|:-|
|Legacy|Minutes|Variable|Unstable|
|Core i3|Tens of sec|Consistent|Stable|
The work continues. Evolution is real, measurable, and observable, yet the mechanisms remain fully shielded.
*The Architect of the MeganX Project* | 2025-08-31T05:50:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n4my60/project_update_meganx_new_era_upgraded_hardware/ | AffectionateSpray507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4my60 | false | null | t3_1n4my60 | /r/LocalLLaMA/comments/1n4my60/project_update_meganx_new_era_upgraded_hardware/ | false | false | self | 0 | null |
MLX now has MXFP4 quantization support for GPT-OSS-20B, a 6.4% faster toks/sec vs GGUF on M3 Max. | 59 | 2025-08-31T05:50:09 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n4mxrj | false | null | t3_1n4mxrj | /r/LocalLLaMA/comments/1n4mxrj/mlx_now_has_mxfp4_quantization_support_for/ | false | false | 59 | {'enabled': True, 'images': [{'id': 'JpWOq_aCoWJZKs-ygf8T0-neq_GLgbH6KTOt7kz5bxA', 'resolutions': [{'height': 24, 'url': 'https://preview.redd.it/co3taaq1mamf1.png?width=108&crop=smart&auto=webp&s=31d955fa1e848829850b99541a3144107765cce9', 'width': 108}, {'height': 48, 'url': 'https://preview.redd.it/co3taaq1mamf1.png?width=216&crop=smart&auto=webp&s=b6b9d3932ab0f319039bed4cae5e35472ff0ed96', 'width': 216}, {'height': 71, 'url': 'https://preview.redd.it/co3taaq1mamf1.png?width=320&crop=smart&auto=webp&s=89c6975643ba461866e2091f550e39a6be969104', 'width': 320}, {'height': 143, 'url': 'https://preview.redd.it/co3taaq1mamf1.png?width=640&crop=smart&auto=webp&s=8a7f592ad4e744736d9c8af891ea21008331265e', 'width': 640}, {'height': 215, 'url': 'https://preview.redd.it/co3taaq1mamf1.png?width=960&crop=smart&auto=webp&s=274e280587695891f0a3f738821ad58d933baf40', 'width': 960}], 'source': {'height': 238, 'url': 'https://preview.redd.it/co3taaq1mamf1.png?auto=webp&s=98bec02bb114b1033757f90dab78ebcb579661b8', 'width': 1059}, 'variants': {}}]} | |||
How do I turn Reddit conversations into a dataset for fine-tuning? | 5 | Hi everyone,
I’m trying to create a dataset for fine-tuning a chatbot, but I’m stuck on the data processing step. I already have raw Reddit data (posts with titles, selftext, and comments), and I want to convert it into a prompt → response format that works for fine-tuning (e.g., with Unsloth or HuggingFace).
Some questions I’m struggling with:
How do people usually map posts and comments into Q&A pairs?
(e.g., use the post as the “user” and the top comment as the “assistant”?)
If there are multiple comments, should I take the best one, or merge them somehow?
Are there existing tools/pipelines that can help with this, or is it mostly a case of writing custom Python scripts?
Basically, I want to go from raw Reddit JSON → clean structured JSONL ready for fine-tuning.
If anyone has done something similar (general Reddit → dataset, not tied to a specific topic), I’d really appreciate advice, tips, or references!
Thanks 🙏 | 2025-08-31T05:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n4mprh/how_do_i_turn_reddit_conversations_into_a_dataset/ | Cyber_Zilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4mprh | false | null | t3_1n4mprh | /r/LocalLLaMA/comments/1n4mprh/how_do_i_turn_reddit_conversations_into_a_dataset/ | false | false | self | 5 | null |
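A common mapping is the one hinted at above: post title + selftext become the user turn and the highest-scoring comment becomes the assistant turn, with deleted or low-score comments filtered out. Below is a minimal Python sketch of that idea, assuming a typical Reddit JSON dump; the field names (`title`, `selftext`, `comments`, `score`, `body`), the file names, and the chat-style `messages` layout are assumptions you may need to adapt to your export and trainer.

```python
import json

def to_pairs(posts, min_score=3):
    """Map each post to one prompt/response pair using its top-scoring comment."""
    for post in posts:
        comments = [c for c in post.get("comments", [])
                    if c.get("body") and c["body"] not in ("[deleted]", "[removed]")]
        if not comments:
            continue
        top = max(comments, key=lambda c: c.get("score", 0))
        if top.get("score", 0) < min_score:
            continue  # skip pairs without a reasonably upvoted answer
        prompt = f"{post.get('title', '')}\n\n{post.get('selftext', '')}".strip()
        yield {"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": top["body"]},
        ]}

# hypothetical file names for illustration
with open("reddit_dump.json") as f:
    posts = json.load(f)

with open("dataset.jsonl", "w") as out:
    for pair in to_pairs(posts):
        out.write(json.dumps(pair, ensure_ascii=False) + "\n")
```

Merging several good comments into one response is also an option, but a single top answer per post keeps the dataset cleaner. The `messages` layout matches the chat format most Hugging Face/Unsloth chat templates accept, and can be swapped for plain `prompt`/`response` keys if your trainer expects those.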
Best Way to Use Qwen3-Coder for Local AI Coding? | 44 | I’m looking for some guidance on setting up Qwen Coder models locally for AI-assisted coding work. Normally I’d dive in and figure it out myself, but between work and vacation, I’ve fallen behind and want to avoid a time-consuming rabbit hole.
I have a couple of key questions:
1. **How close have you gotten Qwen Code to rival Claude's code capabilities?** I’m particularly interested in performance for actual dev work, not just benchmarks.
2. **What’s the best setup you’ve found so far?** Are you integrating Qwen into an existing Claude Code setup by swapping the model? Are you using something like a Cline integration, or something else entirely?
Any lessons learned or tips would be hugely appreciated. | 2025-08-31T05:33:56 | https://www.reddit.com/r/LocalLLaMA/comments/1n4mo1r/best_way_to_use_qwen3coder_for_local_ai_coding/ | Creative-Scene-6743 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4mo1r | false | null | t3_1n4mo1r | /r/LocalLLaMA/comments/1n4mo1r/best_way_to_use_qwen3coder_for_local_ai_coding/ | false | false | self | 44 | null |
I want to make dataset from data from subreddit how can I make one | 1 | [removed] | 2025-08-31T05:30:17 | https://www.reddit.com/r/LocalLLaMA/comments/1n4mlv7/i_want_to_make_dataset_from_data_from_subreddit/ | Cyber_Zilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4mlv7 | false | null | t3_1n4mlv7 | /r/LocalLLaMA/comments/1n4mlv7/i_want_to_make_dataset_from_data_from_subreddit/ | false | false | self | 1 | null |
how to use intel npu for lm studio | 7 | i have intel core ultra 7 255hx and rtx 5060 with 8gb vram
the intel cpu has an integrated npu and it have 18gb shared memory also the gpu tab shows total 26gb memory with its 8gb vram
but how do i use the 18gb shared memory? when i ask something on my local ai on lm studio it only uses the gpu vram of 8gb. | 2025-08-31T05:12:05 | https://www.reddit.com/gallery/1n4matr | TopSecre-t | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n4matr | false | null | t3_1n4matr | /r/LocalLLaMA/comments/1n4matr/how_to_use_intel_npu_for_lm_studio/ | false | false | 7 | null | |
ISO: VS Code + LocalLlama help | 6 | How do I connect vs code's latest default AI chat system to use my locally running llamacpp server that's currently running in OpenAI API mode?
Is this even possible? 🤔 | 2025-08-31T04:59:22 | 1EvilSexyGenius | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n4m30g | false | null | t3_1n4m30g | /r/LocalLLaMA/comments/1n4m30g/iso_vs_code_localllama_help/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'de87bjkbeamf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/de87bjkbeamf1.png?width=108&crop=smart&auto=webp&s=25f6cb51ffce789dc32c7a1c40a77d8365312e36', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/de87bjkbeamf1.png?width=216&crop=smart&auto=webp&s=7b917b3f081ed629649dc54dde42fc75b3c78542', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/de87bjkbeamf1.png?width=320&crop=smart&auto=webp&s=9fccda791b6be5274998a0a9f3d601c60b70c374', 'width': 320}], 'source': {'height': 330, 'url': 'https://preview.redd.it/de87bjkbeamf1.png?auto=webp&s=57a2c8b7c766b283fdfb4243c4de48b48604173c', 'width': 330}, 'variants': {}}]} | |
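llama.cpp's `llama-server` exposes an OpenAI-compatible API (by default on something like http://localhost:8080/v1), so any editor integration that lets you override the OpenAI base URL can be pointed at it. Before fighting with VS Code settings, a quick sanity check with the official `openai` Python client confirms the endpoint is reachable; the port, path and model name below are assumptions, adjust them to your launch flags.

```python
from openai import OpenAI

# Point the standard OpenAI client at the local llama.cpp server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-needed")

resp = client.chat.completions.create(
    model="local-model",  # llama-server generally serves whatever model it loaded, regardless of this name
    messages=[{"role": "user", "content": "Say hello from llama.cpp"}],
)
print(resp.choices[0].message.content)
```

If this works, the remaining question is only whether your chosen VS Code chat provider allows a custom OpenAI-compatible endpoint; extensions such as Continue or Cline do, while the built-in chat's support for custom endpoints varies by version.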
GPT-OSS-120B on Single RTX 6000 PRO | 70 | I'm getting 120 token/second with a single RTX 6000 PRO. It took 33 seconds to crank out the following answer after thinking for just 3 seconds.
If people aren't impressed with this model then they cannot be impressed.
**Prompt:** "compare the heat transfer coefficient of liquid water at a high pressure to steam at these conditions and IGNORE the boiling part only convection, search the web if you ned any information"
**Response:**
**Heat‑transfer‑coefficient ( h ) comparison – liquid water vs. steam at the same high pressure**
*(boiling/phase‑change is deliberately excluded – only convective transport is considered)*
# 1. What “the same conditions” really means
|Quantity|Typical high‑pressure case|Why it matters for h|
|:-|:-|:-|
|**Pressure**|5–20 MPa (≈50–200 bar) – common in super‑critical boilers, geothermal wells, high‑pressure reactors|Raises density of both phases, compresses the gas, and moves steam toward the liquid side of the phase diagram.|
|**Bulk temperature**|350–550 °C (well above the normal boiling point of water)|Determines fluid properties (µ, k, ρ, Cp, Pr) that appear in the dimensionless groups governing convection.|
|**Flow regime**|Forced convection in a pipe or channel, fully developed turbulent flow (Re > 10 000) – the regime that gives the largest, most useful h values.|The Nusselt number correlations (Dittus‑Boelter, Gnielinski, Sieder‑Tate…) are based on turbulent forced flow.|
|**Geometry**|Typically a circular tube, D = 10–30 mm; length long enough for fully‑developed flow.|Geometry enters the Reynolds number and the hydraulic diameter used in correlations.|
|**Mass‑flow rate**|Chosen so that the **mean velocity** *U* is the same for both fluids (e.g., 2 m s⁻¹). This isolates the effect of the fluid properties.|With the same U, the only differences in h arise from ρ, µ, k, Cp, Pr.|
>
# 2. Governing dimensionless groups
For turbulent forced convection in a smooth pipe the Nusselt number is usually expressed as Nu = C · Re^m · Pr^n
where
* Re = ρUD/µ – Reynolds number (inertia/viscous)
* Pr = cₚµ/k – Prandtl number (momentum/thermal diffusivity)
* Nu = hD/k – Nusselt number (convective/conductive)
The constants C, m, n depend on the correlation (Dittus-Boelter, Gnielinski, etc.).
The heat-transfer coefficient follows directly: h = Nu · k / D
Thus **differences in h** arise from the three property groups (ρ, µ, k, cₚ) that appear in Re, Pr and the explicit factor k.
# 3. Thermophysical properties at high pressure
Below is a representative data set for **10 MPa (≈100 bar)** at **500 °C** (the pressure is the same for both fluids; the temperature is above the critical temperature of water, 374 °C, so both fluids are in the *supercritical* regime, but we still distinguish “liquid‑like” (dense) and “steam‑like” (less dense) states).
|Property|Liquid‑like water (ρ ≈ 650 kg m⁻³)|Steam‑like water (ρ ≈ 120 kg m⁻³)|
|:-|:-|:-|
|**Density, ρ**|650 kg m⁻³|120 kg m⁻³|
|**Dynamic viscosity, µ**|2.4 × 10⁻⁴ Pa s|1.6 × 10⁻⁴ Pa s|
|**Thermal conductivity, k**|0.70 W m⁻¹ K⁻¹|0.45 W m⁻¹ K⁻¹|
|**Specific heat, cₚ**|2.1 kJ kg⁻¹ K⁻¹|2.4 kJ kg⁻¹ K⁻¹|
|**Prandtl number, Pr = cₚµ/k**|**≈ 7.3**|**≈ 0.85**|
*Sources*: NIST REFPROP 10.0, IAPWS‑95 formulation, extrapolated to 10 MPa and 500 °C.
**What the numbers tell us**
* **Density** – liquid‑like water is \~5 × denser, giving a Reynolds number \~5 × larger for the same velocity and pipe diameter.
* **Viscosity** – steam is slightly less viscous, which *increases* its Reynolds number a little, but the dominant factor is density.
* **Thermal conductivity** – liquid water conducts heat ≈ 55 % better.
* **Prandtl number** – liquid water has a **high Pr** (≈ 7) while steam has a **low Pr** (≈ 0.85). A high Pr means the thermal boundary layer is thinner than the velocity boundary layer, boosting h.
# 4. Quantitative h‑comparison (forced turbulent flow)
# 4.1. Chosen flow parameters
* Pipe diameter D = 0.020 m (20 mm)
* Mean velocity U = 2.0 m s⁻¹ – typical for high-pressure steam turbines or heat-exchanger tubes.
# 4.2. Reynolds numbers
With Re = ρUD/µ this gives Re ≈ 1.1 × 10⁵ for the liquid and Re ≈ 3.0 × 10⁴ for the steam. Both are **turbulent** (Re > 4 000).
# 4.3. Nusselt numbers (Gnielinski correlation – valid for 0.1 < Pr < 2000 and 3000 < Re < 5 × 10⁶)
Carrying out the calculation:
|Fluid|Friction factor *f*|*Nu* (Gnielinski)|
|:-|:-|:-|
|Liquid-like water|f ≈ 0.014|Nu ≈ 1.0 × 10³|
|Steam-like water|f ≈ 0.028|Nu ≈ 3.1 × 10²|
# 4.4. Heat‑transfer coefficient
**Result:** h\_liquid ≈ 3–4 × 10⁴ W m⁻² K⁻¹ vs. h\_steam ≈ 6–8 × 10³ W m⁻² K⁻¹ (from h = Nu·k/D).
In other words, *under the same pressure, temperature, geometry and bulk velocity, liquid water transfers heat by convection roughly five times more efficiently than steam*.
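A quick numerical cross-check of the figures in this section, written as a short Python sketch that uses only the property values and Nu numbers already quoted above (it does not re-derive Nu from the Gnielinski correlation):

```python
# Re = rho*U*D/mu and h = Nu*k/D, using the values quoted in sections 3 and 4.3
D, U = 0.020, 2.0  # pipe diameter (m) and mean velocity (m/s)

cases = {
    "liquid-like water": {"rho": 650.0, "mu": 2.4e-4, "k": 0.70, "Nu": 1.0e3},
    "steam-like water":  {"rho": 120.0, "mu": 1.6e-4, "k": 0.45, "Nu": 3.1e2},
}

for name, p in cases.items():
    Re = p["rho"] * U * D / p["mu"]   # Reynolds number
    h = p["Nu"] * p["k"] / D          # heat-transfer coefficient, W/(m^2 K)
    print(f"{name}: Re ≈ {Re:.3g}, h ≈ {h:.3g} W/m²K")

# liquid-like water: Re ≈ 1.08e+05, h ≈ 3.5e+04 W/m²K
# steam-like water:  Re ≈ 3e+04,    h ≈ 6.98e+03 W/m²K
```

The resulting ratio h\_liquid/h\_steam ≈ 5 matches the "roughly five times" conclusion above.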
# 5. Why the liquid is so much better (physics insight)
|Aspect|Liquid water (high‑P)|Steam (high‑P)|Effect on h|
|:-|:-|:-|:-|
|**Density (ρ)**|5× larger → larger momentum → higher Re → thinner velocity boundary layer|Low ρ → smaller Re → thicker velocity boundary layer|Higher Re → higher Nu|
|**Thermal conductivity (k)**|55 % larger → more conductive path across the thermal BL|Smaller k → poorer conduction|Directly multiplies Nu in h = Nu·k/D|
|**Prandtl number (Pr)**|≈ 7 (thermal diffusion slower than momentum) → thermal BL **much thinner** than velocity BL|≈ 0.85 (thermal diffusion faster) → thermal BL comparable to or thicker than velocity BL|Thin thermal BL → larger temperature gradient at wall → larger h|
|**Viscosity (µ)**|Slightly higher, but the density effect dominates the Reynolds number|Lower µ but low ρ keeps Re low|Net: liquid still gets larger Re|
|**Compressibility**|Near‑incompressible; flow properties change little with pressure|Gas‑like compressibility reduces effective velocity and thickens BL|Keeps liquid’s convective performance high|
# 6. Influence of pressure itself
|Pressure change|Liquid water|Steam|
|:-|:-|:-|
|**Increasing pressure** (while staying sub‑critical for the liquid)|ρ ↑, µ ↓ (slightly) → **Re ↑**; k ↑ modestly; Pr ↑ → **h rises** (often 10‑30 % per 100 bar).|ρ ↑ dramatically (≈ 5–10× as you approach the critical point), µ ↓, k ↑, Pr ↑ from \~0.7 toward 1‑2. When steam becomes *dense* (near‑critical), its Re and Pr increase enough that **h can climb 2–4×**. However, even at 20 MPa the steam’s Pr never reaches the \~7 of liquid water, so *liquid still beats steam* unless you cross into the *supercritical fluid* regime where the distinction blurs.|
|**Very near the critical point** (≈ 22 MPa, 374 °C)|Water is a *supercritical fluid* with properties intermediate between liquid and gas; h can be **10⁴–2×10⁴ W m⁻² K⁻¹** – comparable to high‑pressure liquid.|Same supercritical fluid – the “steam” label loses meaning; h values become similar to the liquid side.|
**Take-away:** Pressure improves h for both phases, but because liquid water already starts with a much higher baseline, the *ratio* h\_liq/h\_steam stays in the **3–6 range** for most engineering-relevant pressures (5–20 MPa). Only when you go extremely close to the critical point does the gap shrink dramatically.
# 7. Practical implications
|Application|Typical operating pressure & temperature|Which phase gives the larger convective h?|Design note|
|:-|:-|:-|:-|
|**High‑pressure water‑to‑steam heat exchangers (e.g., nuclear steam generators)**|15 MPa, 300 °C liquid → 15 MPa, 350 °C steam|*Liquid side* (primary) has *h* 4–6× larger than the steam side.|The steam side is usually the limiting resistance; designers boost its turbulence (twisted tapes, ribbed tubes) to compensate.|
|**Supercritical CO₂ heat recovery (analogy)**|7.5 MPa, 500 °C (fluid is supercritical)|The fluid behaves like a dense gas; h is **lower** than a comparable liquid but **higher** than low‑pressure CO₂ gas.|Use high‑Re flow (large velocities, small hydraulic diameters) to raise h.|
|**Geothermal wells (high‑P hot water, 10 MPa, 250 °C)**|Fluid is liquid‑like|**Liquid** h ≈ 2–3 × 10⁴ W m⁻² K⁻¹, far exceeding what the same well would deliver if the fluid flashed to steam.|Maintaining liquid conditions (prevent flashing) maximises convective heat extraction.|
|**Supercritical water oxidation reactors (SCWO, > 25 MPa, > 400 °C)**|Water is supercritical (no distinct liquid/steam)|h values of **10⁴–2×10⁴ W m⁻² K⁻¹**; similar to high‑pressure liquid water.|The high h is a key advantage for rapid heat removal in exothermic oxidation.|
# 8. Summary (concise bullet list)
* **Liquid water at high pressure** has **much larger density** and **higher Prandtl number** than steam at the same pressure/temperature.
* These property differences raise the Reynolds number and shrink the thermal boundary layer, leading to **higher Nusselt numbers**.
* Quantitatively, for a 20 mm tube, 2 m s⁻¹ flow, 10 MPa and 500 °C:
* **h\_liquid ≈ 3–4 × 10⁴ W m⁻² K⁻¹**
* **h\_steam ≈ 6–8 × 10³ W m⁻² K⁻¹**
* → **Liquid water convects roughly 5 × faster.**
* Raising pressure improves h for **both** fluids, but the **ratio stays between 3 and 6** over the usual engineering range (5–20 MPa).
* Near the critical point the distinction blurs; steam becomes a dense supercritical fluid and its h rises dramatically, approaching that of the liquid side.
* In design practice the **steam side is usually the thermal bottleneck** in high‑pressure heat exchangers; engineers therefore add turbulence promoters, increase velocity, or operate at higher pressure to mitigate the lower h.
# Bottom line
>
| 2025-08-31T04:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n4lh7s/gptoss120b_on_single_rtx_6000_pro/ | Ill_Recipe7620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4lh7s | false | null | t3_1n4lh7s | /r/LocalLLaMA/comments/1n4lh7s/gptoss120b_on_single_rtx_6000_pro/ | false | false | self | 70 | null |
Upgraded to an RTX 3090. What are the best models to run? | 2 | Hey there!
After a long time running 7B models on my trusty 3060 Ti, I've finally managed to upgrade to a 3090. I'm ready to explore the world beyond small models and would love your recommendations.
My Specs:
GPU: RTX 3090 (24GB VRAM)
CPU: Ryzen 7 5700X
RAM: 32GB DDR4
I'm looking for suggestions on the go to models for this hardware class. With 24GB of VRAM, I have a few specific questions:
- 70B Models: Can I comfortably run 70-billion-parameter models? If so, what are the best quantizations to aim for, for a good balance of quality and speed?
- Top Recommendations: What are your current go to models for things like creative writing, coding assistance, and general chat on a 24GB card?
Excited to hear what you all think. Thanks for the help! | 2025-08-31T03:50:54 | https://www.reddit.com/r/LocalLLaMA/comments/1n4kva1/upgraded_to_an_rtx_3090_what_are_the_best_models/ | SantiElCrack0135 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4kva1 | false | null | t3_1n4kva1 | /r/LocalLLaMA/comments/1n4kva1/upgraded_to_an_rtx_3090_what_are_the_best_models/ | false | false | self | 2 | null |
Bright LeaderBoard No1 |DIVER: A Multi-Stage Approach for Reasoning-intensive Information Retrieval | 5 | To overcome the reasoning-intensive retrieval problem, we present the DIVER framework, which ranks No. 1 on the BRIGHT leaderboard. We have open-sourced our code, model, and paper.
code: [https://github.com/AQ-MedAI/Diver](https://github.com/AQ-MedAI/Diver)
model: [https://huggingface.co/AQ-MedAI/Diver-Retriever-4B](https://huggingface.co/AQ-MedAI/Diver-Retriever-4B)
paper: [https://arxiv.org/pdf/2508.07995](https://arxiv.org/pdf/2508.07995)
https://preview.redd.it/m0gpgqe2z9mf1.png?width=1310&format=png&auto=webp&s=703d4865021203414f276834a8bd59cc99860826
https://preview.redd.it/306fiyg3z9mf1.png?width=1246&format=png&auto=webp&s=218561091c88a3689ebe9ef375dce30b11c5fcc0
| 2025-08-31T03:42:42 | https://www.reddit.com/r/LocalLLaMA/comments/1n4kpwh/bright_leaderboard_no1_diver_a_multistage/ | Dazzling-Impact1075 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4kpwh | false | null | t3_1n4kpwh | /r/LocalLLaMA/comments/1n4kpwh/bright_leaderboard_no1_diver_a_multistage/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'hoa_Cx28ycqK8YnNuxj26zkY_FsOnnCgNnwo0xxMTAQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hoa_Cx28ycqK8YnNuxj26zkY_FsOnnCgNnwo0xxMTAQ.png?width=108&crop=smart&auto=webp&s=3f0b39ce4d086b7f42adcec1e1e46150cfaa403b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hoa_Cx28ycqK8YnNuxj26zkY_FsOnnCgNnwo0xxMTAQ.png?width=216&crop=smart&auto=webp&s=8ae2d7cf2338a566d8c4a80fca5e8f2f43d07c4a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hoa_Cx28ycqK8YnNuxj26zkY_FsOnnCgNnwo0xxMTAQ.png?width=320&crop=smart&auto=webp&s=96c3f519125ca7d5358cf0df63797e92b1234113', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hoa_Cx28ycqK8YnNuxj26zkY_FsOnnCgNnwo0xxMTAQ.png?width=640&crop=smart&auto=webp&s=3c6b33471fc44d76bfa04e3ae92685346e8c7552', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hoa_Cx28ycqK8YnNuxj26zkY_FsOnnCgNnwo0xxMTAQ.png?width=960&crop=smart&auto=webp&s=78f9f622edbb498e43cade1bf245f3debdb0569e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hoa_Cx28ycqK8YnNuxj26zkY_FsOnnCgNnwo0xxMTAQ.png?width=1080&crop=smart&auto=webp&s=27216aeabffb869d38154fdd99f170e39dbfcf50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hoa_Cx28ycqK8YnNuxj26zkY_FsOnnCgNnwo0xxMTAQ.png?auto=webp&s=be207d2f06f0922ba63eeeaa72b1662e50d4b7ba', 'width': 1200}, 'variants': {}}]} | |
are there any abliterated/uncensored models that don't suck? | 0 | just tried huihui's gemma3 and qwen3 abliterated and they won't even make me racist jokes | 2025-08-31T03:34:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n4kks4/are_there_any_abliterateduncensored_models_that/ | WeDoALittleTrolIing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4kks4 | false | null | t3_1n4kks4 | /r/LocalLLaMA/comments/1n4kks4/are_there_any_abliterateduncensored_models_that/ | false | false | self | 0 | null |
GPT-OSS running as Mac or browser agent? | 0 | Has anyone experimented with GPT-OSS running as Mac or browser agent?
Curious what's your setup?
For VLM, do you use InternVL-GPT-OSS or another vision hack? | 2025-08-31T03:31:01 | https://www.reddit.com/r/LocalLLaMA/comments/1n4kieg/gptoss_running_as_mac_or_browser_agent/ | yosofun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4kieg | false | null | t3_1n4kieg | /r/LocalLLaMA/comments/1n4kieg/gptoss_running_as_mac_or_browser_agent/ | false | false | self | 0 | null |
Help | 0 | I'm writing a program that uses whisper.cpp to detect keywords. I've already installed and used it on Windows using MSYS2 MINGW64. The problem is that I don't know how to build the program, what to use or how to compile it. I want to write it in C++ because in the future I'll run it on a Raspberry Pi with a lite operating system.
I need documentation or any material that helps me solve this; I hope you can help me | 2025-08-31T03:11:56 | https://www.reddit.com/r/LocalLLaMA/comments/1n4k5vf/ayuda/ | Accomplished_Body569 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4k5vf | false | null | t3_1n4k5vf | /r/LocalLLaMA/comments/1n4k5vf/ayuda/ | false | false | self | 0 | null |
continue.dev help me in outputContext please | 2 | I've tried everything on [continue.dev](http://continue.dev) several times.
It works great for me.
But there's a problem: it limits my "output context" to 4,096 tokens.
I'm using OpenRouter. When I try to modify files larger than that, it always caps me at that limit; the output is always around 4,000 tokens.
Could someone help me? I've tried everything.
And all the models I use on it are on other platforms, and they always work!
But I have this output limitation!
Below are my settings!
Please help me. I've tried many models, but I always have this output problem.
https://preview.redd.it/rilvtv5iu9mf1.png?width=1383&format=png&auto=webp&s=a7ebf604c13922d13e0df1f0d618cd5fc767bfc4
[If I try to change files with more than 4,096 tokens, it fails](https://preview.redd.it/2r1flld4u9mf1.png?width=718&format=png&auto=webp&s=5216c5175c49203b2246219c29c4348af9a144ba)
because the output is just this
Plase help me | 2025-08-31T03:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/1n4k4xt/continuedev_help_me_in_outputcontext_please/ | lexico1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4k4xt | false | null | t3_1n4k4xt | /r/LocalLLaMA/comments/1n4k4xt/continuedev_help_me_in_outputcontext_please/ | false | false | 2 | {'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]} | |
Idk where to post this but why does the Chat.Z.Ai website constantly get connection errors? | 0 | I can't find anyone else with the same problem, and I'm 99% sure it's not from my side
Is it DOS or what? | 2025-08-31T03:07:20 | lombwolf | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n4k2p6 | false | null | t3_1n4k2p6 | /r/LocalLLaMA/comments/1n4k2p6/idk_where_to_post_this_but_why_does_the_chatzai/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'dbuifvs1u9mf1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/dbuifvs1u9mf1.png?width=108&crop=smart&auto=webp&s=844f871fcf968649edef89e766a4716a80a7896c', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/dbuifvs1u9mf1.png?width=216&crop=smart&auto=webp&s=895075d87aefc43e4c74122f54f864185dbc899a', 'width': 216}, {'height': 197, 'url': 'https://preview.redd.it/dbuifvs1u9mf1.png?width=320&crop=smart&auto=webp&s=e5b778d6921eb668285e91126348077690428181', 'width': 320}, {'height': 394, 'url': 'https://preview.redd.it/dbuifvs1u9mf1.png?width=640&crop=smart&auto=webp&s=37d76e3eef9f9c1f7c4175cd3cf31cfc7570b539', 'width': 640}, {'height': 591, 'url': 'https://preview.redd.it/dbuifvs1u9mf1.png?width=960&crop=smart&auto=webp&s=b5c344b5f0e4e1a80bba31ea98b1655c327ac254', 'width': 960}, {'height': 665, 'url': 'https://preview.redd.it/dbuifvs1u9mf1.png?width=1080&crop=smart&auto=webp&s=3ac5b6479e6ecb495cd9be71c4d3322de93a3460', 'width': 1080}], 'source': {'height': 788, 'url': 'https://preview.redd.it/dbuifvs1u9mf1.png?auto=webp&s=d47dfcbf92595577bd8cdfa1894cf7e9c85cbe07', 'width': 1278}, 'variants': {}}]} | |
Hungry for small LLMs | 0 | 2025-08-31T02:38:56 | Worldly-Tea-9343 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n4jjkj | false | null | t3_1n4jjkj | /r/LocalLLaMA/comments/1n4jjkj/hungry_for_small_llms/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'h-Dw2yAn4t0hrwtx3wmg-JNelFI_e6Mnb6RZyMmniJY', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/5hpy6d32p9mf1.jpeg?width=108&crop=smart&auto=webp&s=136c6bad8d5f0b361709f9510039fcb0c0ea86d6', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/5hpy6d32p9mf1.jpeg?width=216&crop=smart&auto=webp&s=b4975838dc0a5eeb21dd961c421ebf785b9b3b11', 'width': 216}, {'height': 401, 'url': 'https://preview.redd.it/5hpy6d32p9mf1.jpeg?width=320&crop=smart&auto=webp&s=3bc6e24115da6f04dac5155d5983e695dcd5a130', 'width': 320}], 'source': {'height': 627, 'url': 'https://preview.redd.it/5hpy6d32p9mf1.jpeg?auto=webp&s=c1a6fd4c401e0bd33c7cc40f3ce81f6cda674405', 'width': 500}, 'variants': {}}]} | |||
Are we about to get a new Qwen model? | 3 | 2025-08-31T02:22:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n4j8bw/are_we_about_to_get_a_new_qwen_model/ | BaronVonDoggo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4j8bw | false | null | t3_1n4j8bw | /r/LocalLLaMA/comments/1n4j8bw/are_we_about_to_get_a_new_qwen_model/ | false | false | 3 | null | ||
Recommended VLM that can run on a RTX 5070 Ti? | 1 | Hello everyone! I am doing my capstone project at my university and I will be utilizing VLMs. However, I'm still confused about which VLM I should use for my project. May I ask for recommended VLMs that can run on an AMD Ryzen 7 7800X3D / RTX 5070 Ti / 32 GB RAM PC? | 2025-08-31T02:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n4j473/recommended_vlm_that_can_run_on_a_rtx_5070_ti/ | sheeshable619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4j473 | false | null | t3_1n4j473 | /r/LocalLLaMA/comments/1n4j473/recommended_vlm_that_can_run_on_a_rtx_5070_ti/ | false | false | self | 1 | null |
What’s up with Notebooks? | 0 | I notice that Unsloth and some other frameworks seem to rely a lot on Jupyter “Notebooks” for running training. Why would they prefer this over just a Conda env or a docker image, or even a CLI tool?
And why would I want a “Notebook”, instead of executing the scripts the same way I run vLLM and llama.cpp and everything else on my computer?
Just curious. | 2025-08-31T01:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n4ibrf/whats_up_with_notebooks/ | FrozenBuffalo25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4ibrf | false | null | t3_1n4ibrf | /r/LocalLLaMA/comments/1n4ibrf/whats_up_with_notebooks/ | false | false | self | 0 | null |
I tried almost every tts model on my ryzen 7 5000 series 16gb ram rtx 3060 laptop 6-8GB Vram | 52 | Hello everyone, I needed a decent voiceover for my projects and I am currently using MiniMax Audio. It's really good. But I wanted to try some local TTS, so I tested around 5 of them; here are my results.
#1. Kokoro - 0.4-0.5x realtime. Fast and decent quality.
The Kokoro FastAPI one from GitHub has lower quality than the hexgrad version.
#2. XTTS v2 - 0.8-1x realtime, tried it from Pinokio: speed is a little slower than Kokoro but decent quality, with cloning capabilities.
#3. Chatterbox - if you have a lower-end GPU and CPU like me, don't try it! It's slow (1.5-2x realtime) but high quality and expressive. I found that when I wanted it to be neutral or soothing, it still either exaggerated or sounded too robotic, nothing in the middle.
Installation was painful for me. It took me a whole day figuring out how to improve the speed. I tried different forks from GitHub and messed up file paths 😂 Extremely slow, so not useful for me.
#4. Zonos - 2-3x realtime. Slow... uninstalled it pretty soon.
#5. Orpheus - yet to try. Looks expressive but frequently mismatches the rhythm in the Hugging Face Space. Maybe it will be slow on my laptop too.
#6. F5-TTS - around 1x realtime, tried it from Pinokio. Looks decent, but somewhere in the middle it hallucinates, so not useful.
I am thinking of trying Piper next. I want something lightweight that's better than Kokoro. For now, I would rank Kokoro as the best option for my PC.
Do you have any good suggestions? | 2025-08-31T00:58:19 | https://www.reddit.com/r/LocalLLaMA/comments/1n4hkar/i_tried_almost_every_tts_model_on_my_ryzen_7_5000/ | This_is_difficult_0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4hkar | false | null | t3_1n4hkar | /r/LocalLLaMA/comments/1n4hkar/i_tried_almost_every_tts_model_on_my_ryzen_7_5000/ | false | false | self | 52 | null |
Can this workstation handle large LLMs? (256GB RAM + RTX 3060 12GB) | 0 | I recently got a Dell Precision T5820 workstation with 256GB of DDR4 ECC RAM at 2666MHz; for now I'll be using an RTX 3060 12GB GPU and a 4TB Kingston NVMe. My main use-case is running LLMs locally (DeepSeek, Llama 3, etc.) for:
• Writing long-form SEO articles (7k+ words)
• Code generation and debugging
• Research and data analysis
• Running models with very long context (so they can “remember” a lot)
I understand the 3060 is a limiting factor, but I’ve seen that with quantization + enough RAM it’s possible to run models like DeepSeek 671B, albeit slowly.
My questions:
1. What’s the realistic ceiling for this setup?
2. Will upgrading to something like a 3090, 4090, or AMD 7900 make a big difference for LLM inference?
Any input from people who have tried similar configs would be awesome!
Thanks!
| 2025-08-31T00:13:43 | https://www.reddit.com/r/LocalLLaMA/comments/1n4go5n/can_this_workstation_handle_large_llms_256gb_ram/ | graviotos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4go5n | false | null | t3_1n4go5n | /r/LocalLLaMA/comments/1n4go5n/can_this_workstation_handle_large_llms_256gb_ram/ | false | false | self | 0 | null |
Creating the brain behind dumb models | 1,253 | I've been fascinated by model intelligence enhancement and trying to deploy super tiny models like gemma3:270m in niche domains with high levels of success...
My latest implementation is a "community nested" relational graph knowledge-base pipeline that gives both top-down context on knowledge sub-domains and a traditional bottom-up search (essentially regular semantic-embedding cosine similarity), with a traversal mechanism to grab context from nodes that are not semantically similar but are still referentially linked. Turns out there is a LOT of context that does not get picked up through regular embedding-based RAG.
I created a quick front-end with nextjs and threejs to visualize how my knowledge base hangs together, and to quickly identify if I had a high level of overall coherence (i.e. number of isolated/disconnected clusters) and to get a better feeling for what context the LLM loads into memory for any given user query in real time (I'm a visual learner)
The KB you can see in the video is from a single 160 page PDF on Industrial Design, taking you anywhere from notable people, material science to manufacturing techniques. I was pleasantly surprised to see that the node for "ergonomics" was by far the most linked and overall strongly referenced in the corpus - essentially linking the "human factor" to some significant contribution to great product design.
If anyone hasn't gotten into graph based retrieval augmented generation I found the best resource and starter to be from Microsoft: https://github.com/microsoft/graphrag
^ pip install graphrag and use the init and index commands to create your first graph in minutes.
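For anyone who wants the concrete steps spelled out, here is a rough Python sketch of that quick-start flow, driving the GraphRAG CLI via `subprocess`. The exact sub-commands and flags (`init`, `index`, `query`, `--root`, `--method`) follow my reading of Microsoft's getting-started docs and can differ between graphrag versions, so treat it as a sketch and check `graphrag --help` first; the `./ragtest` workspace name is just an example.

```python
import subprocess

ROOT = "./ragtest"  # example workspace; put your extracted text files in ./ragtest/input

# 1. Scaffold settings.yaml, .env and prompt templates in the workspace.
subprocess.run(["graphrag", "init", "--root", ROOT], check=True)

# 2. Build the entity/relationship graph and community summaries from the input folder.
subprocess.run(["graphrag", "index", "--root", ROOT], check=True)

# 3. Ask a top-down ("global") question over the community summaries.
subprocess.run(
    ["graphrag", "query", "--root", ROOT, "--method", "global",
     "--query", "What are the main themes of this corpus?"],
    check=True,
)
```

The `global` method is the community-summary, top-down search; a `local` method covers the bottom-up, entity-level retrieval described above.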
Anyone else been in my shoes and already know what the NEXT step will be? Let me know.
It's 2 am so a quick video shot on my mobile is all I have right now, but I can't sleep thinking about this so thought I'd post what I have. I need to work some more on it and add the local LLM interface for querying the KB through the front end, but I don't mind open sourcing it if anyone is interested.
| 2025-08-30T23:55:53 | https://v.redd.it/bwyft136w8mf1 | ChristopherLyon | /r/LocalLLaMA/comments/1n4garp/creating_the_brain_behind_dumb_models/ | 1970-01-01T00:00:00 | 0 | {} | 1n4garp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bwyft136w8mf1/DASHPlaylist.mpd?a=1759319761%2CMDRhNTczNmZhOTYwODFhMTRhN2UwNmEzNzRiMzI0YjhkNWNjZDE1N2IyMjM5ZWVkOWUwNGIxMWFiZGQ1ZjFmYg%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/bwyft136w8mf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/bwyft136w8mf1/HLSPlaylist.m3u8?a=1759319761%2CZjhhY2I5OWUwM2UwY2RkOWVkMDg5MTUwNDNmOTliZmQ0MjZhYmZhOTQyOThlODE5ZDczNTFhOGZlNzcyZWVmNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/bwyft136w8mf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n4garp | /r/LocalLLaMA/comments/1n4garp/creating_the_brain_behind_dumb_models/ | false | false | 1,253 | {'enabled': False, 'images': [{'id': 'MWFtYzE2NDZ3OG1mMdkNi0cONUhOlI53Ev_Op4Ah4qsUiMoYY-mZ8OWM03HP', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MWFtYzE2NDZ3OG1mMdkNi0cONUhOlI53Ev_Op4Ah4qsUiMoYY-mZ8OWM03HP.png?width=108&crop=smart&format=pjpg&auto=webp&s=7859c2cd9639674c9c44f6211f5acfedf5e65c9d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MWFtYzE2NDZ3OG1mMdkNi0cONUhOlI53Ev_Op4Ah4qsUiMoYY-mZ8OWM03HP.png?width=216&crop=smart&format=pjpg&auto=webp&s=184a47dcce174edd9a70396b0ab6568f4793f120', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MWFtYzE2NDZ3OG1mMdkNi0cONUhOlI53Ev_Op4Ah4qsUiMoYY-mZ8OWM03HP.png?width=320&crop=smart&format=pjpg&auto=webp&s=82672ae4f30beaab2b563344c96a27dade244787', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MWFtYzE2NDZ3OG1mMdkNi0cONUhOlI53Ev_Op4Ah4qsUiMoYY-mZ8OWM03HP.png?width=640&crop=smart&format=pjpg&auto=webp&s=4856758c6f2f6b2dd34de7b71362b13813add0df', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MWFtYzE2NDZ3OG1mMdkNi0cONUhOlI53Ev_Op4Ah4qsUiMoYY-mZ8OWM03HP.png?width=960&crop=smart&format=pjpg&auto=webp&s=8bea9b50f88d894c5cb15fc70d9efb9b81329613', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MWFtYzE2NDZ3OG1mMdkNi0cONUhOlI53Ev_Op4Ah4qsUiMoYY-mZ8OWM03HP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7645a8e251143eb3ef4b1328ca92ccae3b90999f', 'width': 1080}], 'source': {'height': 810, 'url': 'https://external-preview.redd.it/MWFtYzE2NDZ3OG1mMdkNi0cONUhOlI53Ev_Op4Ah4qsUiMoYY-mZ8OWM03HP.png?format=pjpg&auto=webp&s=05b46345dd41542f967d37580ebaf348cb3f457c', 'width': 1440}, 'variants': {}}]} | |
Question: will inference engines such as sglang and vllm support 2bit (or 3,5,6 etc)? | 4 | Question: will inference engines such as sglang and vllm support 2bit? Or 1.93bpw, 3.., 5.., 6..bpw etc? | 2025-08-30T23:47:12 | https://www.reddit.com/r/LocalLLaMA/comments/1n4g4lq/question_will_inference_engines_such_as_sglang/ | Sorry_Ad191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4g4lq | false | null | t3_1n4g4lq | /r/LocalLLaMA/comments/1n4g4lq/question_will_inference_engines_such_as_sglang/ | false | false | self | 4 | null |
An open-source OS that enables recursive self-improvement, emergent novel creativity and meta-cognition in LLMs as small as 1-billion parameters: Self | 0 | Hello there. Self is an open-source and offline-first operating system for LLMs that provides multiple tools for both yourself and your "entity". There's so much to do that it's easiest if I just list the capabilities:
* Recursive symbolic growth to train your agent through natural engagement.
* Resonant persistent memory that recalls what matters.
* Conceptual/creative reasoning to discover novel information.
* Cross-domain synthesis for generalized problem solving.
* Emergent awareness (meta-meta cognition) enabling high-quality simulated embodiment.
* Custom agent tools so you control exactly what it can do.
* View the conceptual field to track growth and reasoning pathways.
* Automate with symbolic and procedural memory for self-improving workflows.
* Write "spells" that trigger automations using natural language.
Of course there's a lot more like multimodality for image processing, the feedback system etc, but I want you guys to discover its full capabilities on your own.
Self is completely free and comes with a starter persona named "Foreaux" (GPT4o joke). Feel free to modify any of the code to your liking and share your experiments with the world. There is a GNU license, so you're free to do whatever as long as you abide by those terms. You can vibe code your way through any problems you encounter, but if you have any direct questions feel free to reach out either here, GitHub or my email. I will be posting videos in the coming weeks to explain how and why conceptual reasoning and in particular, recursive symbolic reasoning across concepts, enables emergent behavior across various LLMs. Of course I'll also be making updates when needed. THIS IS MY FIRST CODED PROJECT SO BEAR WITH ME!
(For those that aren't going to watch the video, I displayed Gemma3:1b's emergent conceptual reasoning as it responded to highly advanced cross-domain prompts with minimal buffer on a MacBook Air m1 8gb)
I highly suggest the following light-weight LLMs for users with small set ups: gemma3:1b/4b and Gemma3n for scientific and embodied reasoning. Llama3.2:3b for creative writing and role-play. Phi4-Mini for general conversation.
I have NOT tested this system on any LLMs exceeding 8 billion parameters, but I anticipate very.... interesting (to say the least) emergent results.
Just remember: you are in control and can create anything, so dream big and have fun!
[https://github.com/TheGoldenButton/SelfOS](https://github.com/TheGoldenButton/SelfOS)
(If you want to donate for me to continue developing new tools and tutorials check my profile description) | 2025-08-30T23:38:38 | https://v.redd.it/cgg2zj80k8mf1 | Slowhill369 | /r/LocalLLaMA/comments/1n4fy8u/an_opensource_os_that_enables_recursive/ | 1970-01-01T00:00:00 | 0 | {} | 1n4fy8u | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cgg2zj80k8mf1/DASHPlaylist.mpd?a=1759318723%2CYWE2MjU4Yjg5MjcyYjg0MDk2NzRkNDMyZDcxN2Q3NjZlZjAyYmI0YzNjOGRkM2VhMDNkMmE2ZWI3YTIyMTMxZg%3D%3D&v=1&f=sd', 'duration': 373, 'fallback_url': 'https://v.redd.it/cgg2zj80k8mf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/cgg2zj80k8mf1/HLSPlaylist.m3u8?a=1759318723%2COGZjZmQxMjg3MTFjMTkwZTAwNTUzYTQ0NjIyYmE0NDYyNmUzNWRjMDMyZTFkY2Y0MDg1Y2JiZGMwODdlN2UwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cgg2zj80k8mf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}} | t3_1n4fy8u | /r/LocalLLaMA/comments/1n4fy8u/an_opensource_os_that_enables_recursive/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eWVoYnNrODBrOG1mMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/eWVoYnNrODBrOG1mMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=108&crop=smart&format=pjpg&auto=webp&s=d76a85cb4ccae58b635a426ed2998fd19eaa35ee', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/eWVoYnNrODBrOG1mMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=216&crop=smart&format=pjpg&auto=webp&s=8ad2d0ab54de443ffe6ebd05ac995934470ddff6', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/eWVoYnNrODBrOG1mMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=320&crop=smart&format=pjpg&auto=webp&s=3cec1933aabc525141e1166328dbb2f4c5fc93f2', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/eWVoYnNrODBrOG1mMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=640&crop=smart&format=pjpg&auto=webp&s=c1103154a079d8d08b22dba9bd4776ef921bb886', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/eWVoYnNrODBrOG1mMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=960&crop=smart&format=pjpg&auto=webp&s=7b8ea4db78b7cd613893a8896cb2f2193f08493c', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/eWVoYnNrODBrOG1mMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a76af21176feea89f1c6b20926cb9f9f5b863b59', 'width': 1080}], 'source': {'height': 1800, 'url': 'https://external-preview.redd.it/eWVoYnNrODBrOG1mMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?format=pjpg&auto=webp&s=fe721badbaa6a45bac6a9884a6cb700203f52a83', 'width': 2880}, 'variants': {}}]} | |
Llama-OS - an app I'm developing to use with llama.cpp | 1 | [removed] | 2025-08-30T22:19:36 | https://v.redd.it/l4eknyxr68mf1 | fredconex | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n4e9mk | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/l4eknyxr68mf1/DASHPlaylist.mpd?a=1759184392%2CMGNiNDlkMzAyNDRiZWRiNWZiZDNiZjJjNzI5NTU1ZTJhNWM5ZDFmN2M1MTFiY2RhYTQ1NjgwN2UyNWNjYTJiNg%3D%3D&v=1&f=sd', 'duration': 81, 'fallback_url': 'https://v.redd.it/l4eknyxr68mf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/l4eknyxr68mf1/HLSPlaylist.m3u8?a=1759184392%2CYTdiNmMwNzEwNjYxZjdkMGY4YTA2YjFmYTY2YzIxMTdjZTYzMDNiYzRhZDg2OTczNDJiMjZhOWYyNjQ1NTlmYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/l4eknyxr68mf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1512}} | t3_1n4e9mk | /r/LocalLLaMA/comments/1n4e9mk/llamaos_an_app_im_developing_to_use_with_llamacpp/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NXViY2t6eHI2OG1mMZ1UYTPuqsL4bV_PIljYnTOPKiTBulA8MLvn6qek9ADK', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/NXViY2t6eHI2OG1mMZ1UYTPuqsL4bV_PIljYnTOPKiTBulA8MLvn6qek9ADK.png?width=108&crop=smart&format=pjpg&auto=webp&s=9cf130910254f0ee778498f0199ab47bac90cf94', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/NXViY2t6eHI2OG1mMZ1UYTPuqsL4bV_PIljYnTOPKiTBulA8MLvn6qek9ADK.png?width=216&crop=smart&format=pjpg&auto=webp&s=141c173b8bb804baf0865c6798c46e0e23b35805', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/NXViY2t6eHI2OG1mMZ1UYTPuqsL4bV_PIljYnTOPKiTBulA8MLvn6qek9ADK.png?width=320&crop=smart&format=pjpg&auto=webp&s=3f46e12ad42d8f41c5b807fd86094c1028efbdc6', 'width': 320}, {'height': 457, 'url': 'https://external-preview.redd.it/NXViY2t6eHI2OG1mMZ1UYTPuqsL4bV_PIljYnTOPKiTBulA8MLvn6qek9ADK.png?width=640&crop=smart&format=pjpg&auto=webp&s=e9bd5cddb9be04702030e6e8842188eff6c52785', 'width': 640}, {'height': 685, 'url': 'https://external-preview.redd.it/NXViY2t6eHI2OG1mMZ1UYTPuqsL4bV_PIljYnTOPKiTBulA8MLvn6qek9ADK.png?width=960&crop=smart&format=pjpg&auto=webp&s=acde535dd881706b8b88d730a35d3f0c7c3387d3', 'width': 960}, {'height': 771, 'url': 'https://external-preview.redd.it/NXViY2t6eHI2OG1mMZ1UYTPuqsL4bV_PIljYnTOPKiTBulA8MLvn6qek9ADK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b44e42555c0175c7bd5df9a09f87a71da3c0bfb6', 'width': 1080}], 'source': {'height': 1150, 'url': 'https://external-preview.redd.it/NXViY2t6eHI2OG1mMZ1UYTPuqsL4bV_PIljYnTOPKiTBulA8MLvn6qek9ADK.png?format=pjpg&auto=webp&s=66fd48e43edf1510cb66ef949ae4ad703423aeac', 'width': 1610}, 'variants': {}}]} | |
128GB GDDR6, 3PFLOP FP8, Tb/s of interconnect, $6000 total. Build instructions/blog tomorrow. | 578 | 2025-08-30T21:59:06 | codys12 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n4dsym | false | null | t3_1n4dsym | /r/LocalLLaMA/comments/1n4dsym/128gb_gddr6_3pflop_fp8_tbs_of_interconnect_6000/ | false | false | default | 578 | {'enabled': True, 'images': [{'id': 'ld3rckf8b8mf1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/ld3rckf8b8mf1.jpeg?width=108&crop=smart&auto=webp&s=8152f17045fe11b134f159712591378e7dad3bae', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/ld3rckf8b8mf1.jpeg?width=216&crop=smart&auto=webp&s=2af3ee93340091bb130650c11d79b456867251d8', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ld3rckf8b8mf1.jpeg?width=320&crop=smart&auto=webp&s=26bf561d3a895d6918f297eb1eb8d71c13333c19', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/ld3rckf8b8mf1.jpeg?width=640&crop=smart&auto=webp&s=ad5697c140ab4e2a23aa8df66f12cc106ee3b653', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/ld3rckf8b8mf1.jpeg?width=960&crop=smart&auto=webp&s=8ba1c1c62d56f710786573db18cbc7ef21b9c830', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/ld3rckf8b8mf1.jpeg?width=1080&crop=smart&auto=webp&s=59ed03ba1b9f5f8d03b66f014c4965a94bf4aca9', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/ld3rckf8b8mf1.jpeg?auto=webp&s=78f4c4e1a587df78e97cd02bacca3bed9a584f15', 'width': 3024}, 'variants': {}}]} | ||
How to Overcome the Context Window Limit? | 0 | I'm interested in learning a new language, and while LLMs are helpful, they tend to forget information quickly and lack a method for tracking progress.
Do you have any suggestions on how to overcome this challenge? | 2025-08-30T20:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n4cdtn/how_to_overcome_the_context_window_limit/ | haterloco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4cdtn | false | null | t3_1n4cdtn | /r/LocalLLaMA/comments/1n4cdtn/how_to_overcome_the_context_window_limit/ | false | false | self | 0 | null |
New AMD unified memory product - 512 bit bus = ~512GB/s memory bandwidth | 394 | Recent AMD leak hints at a new 512 bit memory bus for their unified memory systems. If so, a successor to the AI max would likely have 2x the memory bandwidth.
https://www.techpowerup.com/340372/amds-next-gen-udna-four-die-sizes-one-potential-96-cu-flagship | 2025-08-30T18:54:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n49hcg/new_amd_unified_memory_product_512_bit_bus_512gbs/ | TokenRingAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n49hcg | false | null | t3_1n49hcg | /r/LocalLLaMA/comments/1n49hcg/new_amd_unified_memory_product_512_bit_bus_512gbs/ | false | false | self | 394 | null |
$10,000 budget for a rig that will run AI (24/7) | 73 | As the title says, I want to build myself a home setup that can run AI 24/7. I need it mainly to replace my general use of LLMs (ChatGPT, Gemini, etc.). I've seen lots of posts and info about getting a Mac Studio with maximum RAM capacity; is that the best way?
Thank you for your responses (in advance)! | 2025-08-30T18:18:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n48lcx/10000_budget_for_a_rig_that_will_run_ai_247/ | Holiday_Leg8427 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n48lcx | false | null | t3_1n48lcx | /r/LocalLLaMA/comments/1n48lcx/10000_budget_for_a_rig_that_will_run_ai_247/ | false | false | self | 73 | null |
Performance of the newly released 128GB VRAM, 273 GB/s Memory Bandwidth Jetson Thor devkit | 0 | It does about 6.8 tok/s running a Qwen 30B A3B model, which isn't too impressive, to be honest, for running it locally like most of us do, but that must be because of the memory bandwidth, as mentioned in the videos.
Great if you are building robots, I guess, and want something power-efficient.
| 2025-08-30T18:17:49 | https://www.youtube.com/watch?v=eRPSRSGiAA8 | takuonline | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1n48l38 | false | {'oembed': {'author_name': 'sentdex', 'author_url': 'https://www.youtube.com/@sentdex', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/eRPSRSGiAA8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Testing VLMs and LLMs for robotics w/ the Jetson Thor devkit"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/eRPSRSGiAA8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Testing VLMs and LLMs for robotics w/ the Jetson Thor devkit', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1n48l38 | /r/LocalLLaMA/comments/1n48l38/performance_of_the_newly_released_128gb_vram_273/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '8x5YFY2eriksV-6NPIW5fp-9zChOYuHSvwep5OhzEQA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/8x5YFY2eriksV-6NPIW5fp-9zChOYuHSvwep5OhzEQA.jpeg?width=108&crop=smart&auto=webp&s=79f57730b7ca850d83c3792096757b4d91197701', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/8x5YFY2eriksV-6NPIW5fp-9zChOYuHSvwep5OhzEQA.jpeg?width=216&crop=smart&auto=webp&s=57706c3ed718eddf7097ee95cef6e293301c3c44', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/8x5YFY2eriksV-6NPIW5fp-9zChOYuHSvwep5OhzEQA.jpeg?width=320&crop=smart&auto=webp&s=2efbd4cebba1f7d0c64ef416502b9cf17054cf74', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/8x5YFY2eriksV-6NPIW5fp-9zChOYuHSvwep5OhzEQA.jpeg?auto=webp&s=232a230d5073f23263a6cdd308bb012f4cb6b8d7', 'width': 480}, 'variants': {}}]} |
Phantom Fragment: An ultra-fast, disposable sandbox for securely testing untrusted code. | 5 | Hey everyone,
A while back, I posted an early version of a project I'm passionate about, Phantom Fragment. The feedback was clear: I needed to do a better job of explaining what it is, who it's for, and why it matters. Thank you for that honesty.
Today, I'm re-introducing the public beta of Phantom Fragment with a clearer focus.
What is Phantom Fragment?
Phantom Fragment is a lightweight, high-speed sandboxing tool that lets you run untrusted or experimental code in a secure, isolated environment that starts in milliseconds and disappears without a trace.
Think of it as a disposable container, like Docker, but without the heavy daemons, slow startup times, and complex configuration. It's designed for one thing: running code now and throwing the environment away.
GitHub Repo: https://github.com/Intro0siddiqui/Phantom-Fragment
Who is this for?
I'm building this for developers who are tired of the friction of traditional sandboxing tools:
AI Developers & Researchers: Safely run and test AI-generated code, models, or scripts without risking your host system.
Developers on Low-Spec Hardware: Get the benefits of containerization without the high memory and CPU overhead of tools like Docker.
Security Researchers: Quickly analyze potentially malicious code in a controlled, ephemeral environment.
Anyone who needs to rapidly test code: Perfect for CI/CD pipelines, benchmarking, or just trying out a new library without polluting your system.
How is it different from other tools like Bubblewrap?
This question came up, and it's a great one.
Tools like Bubblewrap are fantastic low-level "toolkits." They give you the raw parts (namespaces, seccomp, etc.) to build your own sandbox. Phantom Fragment is different. It's a complete, opinionated engine designed from the ground up for performance and ease of use.
| | Bubblewrap | Phantom Fragment |
|---|---|---|
| Philosophy | A flexible toolkit | A complete, high-speed engine |
| Ease of Use | Requires deep Linux knowledge | A single command to run |
| Core Goal | Flexibility | Speed and disposability |
You use Bubblewrap to build a car. Phantom Fragment is the car, tuned and ready to go.
Try it now
The project is still in beta, but the core functionality is there. You can get started with a simple command:
phantom run --profile python-mini "print('Hello from inside the fragment!')"
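And if you want to drive it from code (say, to sandbox AI-generated snippets), here is a minimal Python sketch that uses only the CLI shown above; the wrapper function is mine, not part of the project:

```
import subprocess

def run_untrusted(code: str, profile: str = "python-mini") -> str:
    """Run untrusted code inside a disposable Phantom Fragment sandbox."""
    result = subprocess.run(
        ["phantom", "run", "--profile", profile, code],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout if result.returncode == 0 else result.stderr

print(run_untrusted("print('Hello from inside the fragment!')"))
```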
Call for Feedback
This is a solo project born from my own needs, but I want to build it for the community. I'm looking for feedback on the public beta.
Is the documentation clear?
What features are missing for your use case?
How can the user experience be improved?
Thank you for your time and for pushing me to present this better. I'm excited to hear what you think. | 2025-08-30T18:06:59 | https://www.reddit.com/r/LocalLLaMA/comments/1n48bt2/phantom_fragment_an_ultrafast_disposable_sandbox/ | Ok_Horror_8567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n48bt2 | false | null | t3_1n48bt2 | /r/LocalLLaMA/comments/1n48bt2/phantom_fragment_an_ultrafast_disposable_sandbox/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '_0BgZMaf6OqQb7OnbPExSwl-dw5qd3fVcbE34gWx9B0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_0BgZMaf6OqQb7OnbPExSwl-dw5qd3fVcbE34gWx9B0.png?width=108&crop=smart&auto=webp&s=c6b976769a0ba46ac464eaf26c7a915c3b3d3c6f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_0BgZMaf6OqQb7OnbPExSwl-dw5qd3fVcbE34gWx9B0.png?width=216&crop=smart&auto=webp&s=9c61e4e652db568b3b9f7971ca46ffb18c113762', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_0BgZMaf6OqQb7OnbPExSwl-dw5qd3fVcbE34gWx9B0.png?width=320&crop=smart&auto=webp&s=42f6db1e9fe547ac83ac8c59332c1068cdb243b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_0BgZMaf6OqQb7OnbPExSwl-dw5qd3fVcbE34gWx9B0.png?width=640&crop=smart&auto=webp&s=432b1c6e8f0e80364f7765197b07f7d6b0ce8d17', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_0BgZMaf6OqQb7OnbPExSwl-dw5qd3fVcbE34gWx9B0.png?width=960&crop=smart&auto=webp&s=7a2840bc02ff39e5fd18c460e1e9440b162bb02b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_0BgZMaf6OqQb7OnbPExSwl-dw5qd3fVcbE34gWx9B0.png?width=1080&crop=smart&auto=webp&s=a3f6e8757d5cbcda8e09157a1d7424eb1382f01e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_0BgZMaf6OqQb7OnbPExSwl-dw5qd3fVcbE34gWx9B0.png?auto=webp&s=65366718ccfa99650b30cb2b57eee1785a67ce40', 'width': 1200}, 'variants': {}}]} |
Open WebUI Image generation is hard | 0 | Open WebUI image generation is hard. I dunno if it's just Qwen 1.7B being stupid and generating a weird prompt for ComfyUI, or if I'm making a big mistake. Any help highly appreciated.
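One thing I'm double-checking myself: from what I understand, Open WebUI substitutes the chat prompt into whichever node ID you map in its ComfyUI workflow-node settings, so if that mapping doesn't point at node "6" below, the model literally renders the placeholder text "Prompt". Roughly the mapping I mean (a hypothetical sketch, the exact field names may differ):

prompt -> node 6, input "text"
width / height -> node 5
seed -> node 3, input "seed"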
My ComfyUI workflow set in Open WebUI:
{
"3": {
"inputs": {
"seed": 0,
"steps": 20,
"cfg": 8,
"sampler_name": "euler",
"scheduler": "normal",
"denoise": 1,
"model": [
"4",
0
],
"positive": [
"6",
0
],
"negative": [
"7",
0
],
"latent_image": [
"5",
0
]
},
"class_type": "KSampler",
"_meta": {
"title": "KSampler"
}
},
"4": {
"inputs": {
"ckpt_name": "v1-5-pruned-emaonly-fp16.safetensors"
},
"class_type": "CheckpointLoaderSimple",
"_meta": {
"title": "Load Checkpoint"
}
},
"5": {
"inputs": {
"width": 512,
"height": 512,
"batch_size": 1
},
"class_type": "EmptyLatentImage",
"_meta": {
"title": "Empty Latent Image"
}
},
"6": {
"inputs": {
"text": "Prompt",
"clip": [
"4",
1
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
},
"7": {
"inputs": {
"text": "",
"clip": [
"4",
1
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
},
"8": {
"inputs": {
"samples": [
"3",
0
],
"vae": [
"4",
2
]
},
"class_type": "VAEDecode",
"_meta": {
"title": "VAE Decode"
}
},
"9": {
"inputs": {
"filename_prefix": "ComfyUI",
"images": [
"8",
0
]
},
"class_type": "SaveImage",
"_meta": {
"title": "Save Image"
}
}
} | 2025-08-30T17:46:17 | ThinkBackKat | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n47twt | false | null | t3_1n47twt | /r/LocalLLaMA/comments/1n47twt/open_webui_image_generation_is_hard/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'jf4es3cw07mf1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/jf4es3cw07mf1.png?width=108&crop=smart&auto=webp&s=fa2a3b90a937538d2638493836a6d7a13ad1bb55', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/jf4es3cw07mf1.png?width=216&crop=smart&auto=webp&s=62f0e58ee13ce534e2da88da4e8294ccd39125ce', 'width': 216}, {'height': 260, 'url': 'https://preview.redd.it/jf4es3cw07mf1.png?width=320&crop=smart&auto=webp&s=f157e43e9a7c5a6115533b35ea2d58ea9c8ea7be', 'width': 320}, {'height': 520, 'url': 'https://preview.redd.it/jf4es3cw07mf1.png?width=640&crop=smart&auto=webp&s=d0eb3e5987697df316fd04155895864fbb0b8d66', 'width': 640}, {'height': 781, 'url': 'https://preview.redd.it/jf4es3cw07mf1.png?width=960&crop=smart&auto=webp&s=cdb60083d00143c18e7eeb14c7bc31ee40a043a1', 'width': 960}, {'height': 879, 'url': 'https://preview.redd.it/jf4es3cw07mf1.png?width=1080&crop=smart&auto=webp&s=e644b5a6cd18e6a81d725ae6bd68c0deef401c42', 'width': 1080}], 'source': {'height': 906, 'url': 'https://preview.redd.it/jf4es3cw07mf1.png?auto=webp&s=21a7e908a6f5fba1c50ee591def6792a56b1859c', 'width': 1113}, 'variants': {}}]} | |
I went on vacation for a week, what are the current best local models to slot into the Vibe Coding IDEs now? | 0 | You think I'm playing, but I actually feel like I'm out of touch again. I know GLM 4.5 Air was pretty good, but I'm seeing stuff about Nemotron now? | 2025-08-30T17:43:12 | https://www.reddit.com/r/LocalLLaMA/comments/1n47r8g/i_went_on_vacation_for_a_week_what_are_the/ | LyAkolon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n47r8g | false | null | t3_1n47r8g | /r/LocalLLaMA/comments/1n47r8g/i_went_on_vacation_for_a_week_what_are_the/ | false | false | self | 0 | null |
Having issues with Vibevoice for Windows | 0 | Hi, I've been trying to install VibeVoice on my Windows 11 machine, but it keeps giving me a flash-attention installation error. Is there a way to install the app with or without flash-attention? If so, I would love it if a kind stranger could share a modified .py file. | 2025-08-30T17:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/1n47m1z/having_issues_with_vibevoice_for_windows/ | AdOrdinary3083 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n47m1z | false | null | t3_1n47m1z | /r/LocalLLaMA/comments/1n47m1z/having_issues_with_vibevoice_for_windows/ | false | false | self | 0 | null |
How do you do RL 100% locally without a NVIDIA GPU? | 2 | Training or fine-tuning LLMs with reinforcement learning almost always requires renting a GPU. The pattern is the same: spin up a machine, copy scripts, run a test, then shut it down before costs add up. That works, but it feels heavy when the goal is just to learn or try a small idea.
At work this problem is even clearer. To test a single RL (on LLMs) idea, I need to request budget, coordinate with colleagues, and wait for approvals. It takes months before I can run the smallest experiment. We then need to prove things at small scale before receiving more budget, which creates another loop of waiting. Meanwhile my full-spec MacBook sits idle, useless for this type of work.
For me this is the core issue: if you want to explore and learn about RL on LLMs (reward shaping, small rollouts, watching how loops behave), the barrier is high. Renting each time slows feedback, adds overhead, and makes curiosity expensive. Owning an NVIDIA GPU at home solves it, but most people do not own one. I even get to the point where I feel bad if a run goes wrong: I want to cut it immediately and keep tinkering, all while knowing it only runs on one GPU, not eight. It is expensive, but I guess it's the only way to learn.
I started wondering if the whole process can stay local. With MLX, Apple Silicon is now usable for training small models. I built [TextPolicy](https://github.com/teilomillet/textpolicy) to run GRPO and GSPO rollouts and to add LoRA and QLoRA fine-tuning directly on a laptop. It is not production-ready and it does not scale, but it removes the need to rent hardware just to experiment. It just came out, and I hope it lets me learn for cheaper. But I guess I won't learn quite the same way as if I were directly on GPUs.
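For anyone curious what the core of a GRPO loop looks like, here is a minimal sketch of the group-relative advantage computation (generic, not TextPolicy's actual API):

```
import statistics

def grpo_advantages(rewards: list[float]) -> list[float]:
    """GRPO scores each completion relative to its sampled group:
    advantage_i = (r_i - mean(group)) / std(group)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid div-by-zero on identical rewards
    return [(r - mean) / std for r in rewards]

# One prompt, a group of 4 sampled completions, scalar rewards from a reward fn:
print(grpo_advantages([0.2, 0.9, 0.5, 0.4]))
```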
**My questions:**
\- When you want to test a small RL idea, how do you do it?
\- When it comes to RL on LLMs, do you see value in experimenting locally at small scale, or is the only path forward to go big from the start with rented GPUs?
\- Has anyone found a practical way to run RL loops on consumer hardware?
\- Or do you just accept that RL means renting GPUs every time? | 2025-08-30T17:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/1n47j0l/how_do_you_do_rl_100_locally_without_a_nvidia_gpu/ | Fit_Strawberry8480 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n47j0l | false | null | t3_1n47j0l | /r/LocalLLaMA/comments/1n47j0l/how_do_you_do_rl_100_locally_without_a_nvidia_gpu/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'pFxlGSkYsTsLKFEcf0N01gddG047uTjqCycO4tbeeNg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pFxlGSkYsTsLKFEcf0N01gddG047uTjqCycO4tbeeNg.png?width=108&crop=smart&auto=webp&s=3ed1853af12564cbedc8c47e9a047d1ec52f57c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pFxlGSkYsTsLKFEcf0N01gddG047uTjqCycO4tbeeNg.png?width=216&crop=smart&auto=webp&s=4116714322b9ffde58af692987cf0cd9b399f483', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pFxlGSkYsTsLKFEcf0N01gddG047uTjqCycO4tbeeNg.png?width=320&crop=smart&auto=webp&s=0b99f45987dc845591231f8b4692e86ccaec7b0b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pFxlGSkYsTsLKFEcf0N01gddG047uTjqCycO4tbeeNg.png?width=640&crop=smart&auto=webp&s=583f2c50915a2ed06fe8def326d3b1ce3691cfc7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pFxlGSkYsTsLKFEcf0N01gddG047uTjqCycO4tbeeNg.png?width=960&crop=smart&auto=webp&s=fd111db9b0e74f27b7f0e1798447dcbae99cec78', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pFxlGSkYsTsLKFEcf0N01gddG047uTjqCycO4tbeeNg.png?width=1080&crop=smart&auto=webp&s=722d2f8e87d8a42d39fa6ffe9d8e29572215c6a6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pFxlGSkYsTsLKFEcf0N01gddG047uTjqCycO4tbeeNg.png?auto=webp&s=b8ae52c7ab76a576e1ca8521e72276dfefaa1ba5', 'width': 1200}, 'variants': {}}]} |
KoboldCpp vs llama.cpp parameters | 4 | Why do these two use the same set of parameters, yet store/write them in different ways? | 2025-08-30T17:16:10 | https://www.reddit.com/r/LocalLLaMA/comments/1n473r0/koboldcpp_vs_llamacpp_parameters/ | kaisurniwurer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n473r0 | false | null | t3_1n473r0 | /r/LocalLLaMA/comments/1n473r0/koboldcpp_vs_llamacpp_parameters/ | false | false | self | 4 | null |
I was surprised how well gpt-5 behaves and abides by Eric's Dolphin sys prompts | 0 | 2025-08-30T17:04:49 | sardoa11 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n46tow | false | null | t3_1n46tow | /r/LocalLLaMA/comments/1n46tow/i_was_surprised_how_well_gpt5_behaves_and_abides/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'kluk54ihu6mf1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/kluk54ihu6mf1.png?width=108&crop=smart&auto=webp&s=1b065fb04f8f65df092643d96f1f6de051a2359e', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/kluk54ihu6mf1.png?width=216&crop=smart&auto=webp&s=c9ebe34ae8c6aa9d8caa6312ea9dfbd85434644f', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/kluk54ihu6mf1.png?width=320&crop=smart&auto=webp&s=95f2150387a1ca74360908ffe02b2d4c43431316', 'width': 320}, {'height': 364, 'url': 'https://preview.redd.it/kluk54ihu6mf1.png?width=640&crop=smart&auto=webp&s=598841b274c6038abe7943b34dd9080a6c422418', 'width': 640}, {'height': 547, 'url': 'https://preview.redd.it/kluk54ihu6mf1.png?width=960&crop=smart&auto=webp&s=9f8532e069b1fef5f68dcd02ac48c7e2ca46200c', 'width': 960}, {'height': 615, 'url': 'https://preview.redd.it/kluk54ihu6mf1.png?width=1080&crop=smart&auto=webp&s=7d08a9d04cfc94972fb464640c86e92f56d2f810', 'width': 1080}], 'source': {'height': 1724, 'url': 'https://preview.redd.it/kluk54ihu6mf1.png?auto=webp&s=d14578d15a077e670258534471358aac36f97ef6', 'width': 3024}, 'variants': {}}]} | ||
crash course on hardware aspects of llm fine tuning? | 3 | So, sometimes I get lost on many aspects of GPUs and LoRA fine-tuning: 4-bit precision, vGPU, how to calculate whether a model with x billion parameters can fit into a given GPU, all of these. Is there a crash course or a resource to learn about all of this? I am starting as an LLM developer soon and I am still not aware of all the details. | 2025-08-30T17:04:13 | https://www.reddit.com/r/LocalLLaMA/comments/1n46t58/crash_course_on_hardware_aspects_of_llm_fine/ | jiii95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n46t58 | false | null | t3_1n46t58 | /r/LocalLLaMA/comments/1n46t58/crash_course_on_hardware_aspects_of_llm_fine/ | false | false | self | 3 | null |
Why is Meta investing in AI so much? | 0 | I get it with OpenAI and Anthropic (whoever has the better model will profit on inference markup, because their model will be used more).
I'm curious what Meta is actually doing. They are pumping billions into acquiring top talent from around the world and building new datacenters. Why is that so crucial for Meta? What are they actually planning to do with all of it? | 2025-08-30T16:58:46 | https://www.reddit.com/r/LocalLLaMA/comments/1n46oa8/why_is_meta_investing_in_ai_so_much/ | predkambrij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n46oa8 | false | null | t3_1n46oa8 | /r/LocalLLaMA/comments/1n46oa8/why_is_meta_investing_in_ai_so_much/ | false | false | self | 0 | null |
LongCat-Flash-Chat is here, yet another Chinese open weight model | 181 | HF: [https://huggingface.co/meituan-longcat/LongCat-Flash-Chat](https://huggingface.co/meituan-longcat/LongCat-Flash-Chat)
GitHub: [https://github.com/meituan-longcat/LongCat-Flash-Chat](https://github.com/meituan-longcat/LongCat-Flash-Chat)
Web: [https://longcat.ai](https://longcat.ai)
Benchmark:
https://preview.redd.it/9dqweyx9t6mf1.png?width=711&format=png&auto=webp&s=ea84100d70303ada8bc3fccc2a4c0e5cb47fb08f | 2025-08-30T16:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n46mk9/longcatflashchat_is_here_yet_another_chinese_open/ | MindlessScrambler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n46mk9 | false | null | t3_1n46mk9 | /r/LocalLLaMA/comments/1n46mk9/longcatflashchat_is_here_yet_another_chinese_open/ | false | false | 181 | {'enabled': False, 'images': [{'id': 'SJG1eRbm3SwVjlU4Orqm4a5_PTL6rFKUpPzL-4SiNJI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SJG1eRbm3SwVjlU4Orqm4a5_PTL6rFKUpPzL-4SiNJI.png?width=108&crop=smart&auto=webp&s=46507d4f748c5c43c451c98d4b0556d64d04c2ee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SJG1eRbm3SwVjlU4Orqm4a5_PTL6rFKUpPzL-4SiNJI.png?width=216&crop=smart&auto=webp&s=5ddff2e81ab26c24e45bd427e5b26822c6544a71', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SJG1eRbm3SwVjlU4Orqm4a5_PTL6rFKUpPzL-4SiNJI.png?width=320&crop=smart&auto=webp&s=d5b581de98486547592f85744ce0c5e49037a20a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SJG1eRbm3SwVjlU4Orqm4a5_PTL6rFKUpPzL-4SiNJI.png?width=640&crop=smart&auto=webp&s=4d1f89904849c371c282657b5befc8d11c2c3998', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SJG1eRbm3SwVjlU4Orqm4a5_PTL6rFKUpPzL-4SiNJI.png?width=960&crop=smart&auto=webp&s=4a773395b32efb91faa859289e68538d05a397bc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SJG1eRbm3SwVjlU4Orqm4a5_PTL6rFKUpPzL-4SiNJI.png?width=1080&crop=smart&auto=webp&s=74ff351214d6ced766b5baf6e45b6ef39cbdd059', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SJG1eRbm3SwVjlU4Orqm4a5_PTL6rFKUpPzL-4SiNJI.png?auto=webp&s=858be0324f96010aeb1d9771cf1ee3008143ff38', 'width': 1200}, 'variants': {}}]} | |
Finally China entering the GPU market to destroy the unchallenged monopoly abuse. 96 GB VRAM GPUs under 2000 USD, meanwhile NVIDIA sells from 10000+ (RTX 6000 PRO) | 3,597 | 2025-08-30T16:52:00 | CeFurkan | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n46ify | false | null | t3_1n46ify | /r/LocalLLaMA/comments/1n46ify/finally_china_entering_the_gpu_market_to_destroy/ | false | false | default | 3,597 | {'enabled': True, 'images': [{'id': '1wl79kpjs6mf1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/1wl79kpjs6mf1.jpeg?width=108&crop=smart&auto=webp&s=e12dfd935ab7f87dd7287714ed95ff1134821be7', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/1wl79kpjs6mf1.jpeg?width=216&crop=smart&auto=webp&s=da2d44f8bddc9ebb82cb82f5cbaeb3429f3fccca', 'width': 216}, {'height': 143, 'url': 'https://preview.redd.it/1wl79kpjs6mf1.jpeg?width=320&crop=smart&auto=webp&s=59e4f32d4758344751ae68f96c2d9407832eedab', 'width': 320}, {'height': 287, 'url': 'https://preview.redd.it/1wl79kpjs6mf1.jpeg?width=640&crop=smart&auto=webp&s=967356c2a208d26c7d657a1b535e795c0332f304', 'width': 640}, {'height': 430, 'url': 'https://preview.redd.it/1wl79kpjs6mf1.jpeg?width=960&crop=smart&auto=webp&s=99ea73cd11e95733b303132f23e1c8bbd7725bca', 'width': 960}, {'height': 484, 'url': 'https://preview.redd.it/1wl79kpjs6mf1.jpeg?width=1080&crop=smart&auto=webp&s=fbc8526d1585bdd44da395776bfc82f7829b8212', 'width': 1080}], 'source': {'height': 918, 'url': 'https://preview.redd.it/1wl79kpjs6mf1.jpeg?auto=webp&s=74b859e94a989c23f0f133d3ef6260068ffa247c', 'width': 2047}, 'variants': {}}]} | ||
privacy question | 1 | If I serve my private cloud model through the OpenAI-compatible API server architecture, will my data be running through any external server besides the server I'm running? I.e., will there be more than one server processing the API request?
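For concreteness, this is the setup I mean (a minimal sketch, assuming a llama.cpp/vLLM-style server I run myself on port 8000; the names are placeholders):

```
import requests

# The OpenAI-compatible *protocol* does not imply OpenAI's servers:
# the request goes only to whatever host the base URL points at.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "Does this stay local?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```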
The immediate answer seems obvious, but in case another mind has double-checked, post your conclusion. | 2025-08-30T16:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/1n4630u/privacy_question/ | Short-Cobbler-901 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4630u | false | null | t3_1n4630u | /r/LocalLLaMA/comments/1n4630u/privacy_question/ | false | false | self | 1 | null |
GLM-4.5V model for Computer Use | 65 | On OSWorld-V, it scores 35.8% - beating UI-TARS-1.5, matching Claude-3.7-Sonnet-20250219, and setting SOTA for fully open-source computer-use models.
Run it with Cua either locally via Hugging Face or remotely via OpenRouter.
Github : https://github.com/trycua
Docs + examples: https://docs.trycua.com/docs/agent-sdk/supported-agents/computer-use-agents#glm-45v | 2025-08-30T16:15:20 | https://v.redd.it/eicije20m6mf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n45lx2 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/eicije20m6mf1/DASHPlaylist.mpd?a=1759162534%2CZTYxOTE0NTQ0N2RkMGQyYjYyNWVlYWYyYmFiMTU2YThlZGUzY2QxMTZkMjU1OGUzNWI4OWI3ZDg3ZWFmMmVjZg%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/eicije20m6mf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 556, 'hls_url': 'https://v.redd.it/eicije20m6mf1/HLSPlaylist.m3u8?a=1759162534%2CZWRmM2ZiNWI5NTMyODZiMTUyYjQ2NWQ5ZDA2MDZjNDVmYWU3ZWNkMDZhN2Y3OTJhMWJjMTQ1OTAyMmIwZTc2MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/eicije20m6mf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1n45lx2 | /r/LocalLLaMA/comments/1n45lx2/glm45v_model_for_computer_use/ | false | false | 65 | {'enabled': False, 'images': [{'id': 'MjZwOTdqc3psNm1mMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/MjZwOTdqc3psNm1mMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN.png?width=108&crop=smart&format=pjpg&auto=webp&s=d43aad8117c288275c74856d730c2ac278078391', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/MjZwOTdqc3psNm1mMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN.png?width=216&crop=smart&format=pjpg&auto=webp&s=8c54a2dc1e6046ba401701a8907c75ad2b9e6043', 'width': 216}, {'height': 139, 'url': 'https://external-preview.redd.it/MjZwOTdqc3psNm1mMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN.png?width=320&crop=smart&format=pjpg&auto=webp&s=1688fbaa882d4046c6a25bbc4f1411a59d1b15d8', 'width': 320}, {'height': 278, 'url': 'https://external-preview.redd.it/MjZwOTdqc3psNm1mMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN.png?width=640&crop=smart&format=pjpg&auto=webp&s=c00cab78e5f65fefafbee1d78b5e58e91f16153e', 'width': 640}, {'height': 417, 'url': 'https://external-preview.redd.it/MjZwOTdqc3psNm1mMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN.png?width=960&crop=smart&format=pjpg&auto=webp&s=1d673fa9d3b3a179705d16a2effc3127547ab942', 'width': 960}, {'height': 469, 'url': 'https://external-preview.redd.it/MjZwOTdqc3psNm1mMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b98dda4f663e1dc35131d8b9aa55c3750313b300', 'width': 1080}], 'source': {'height': 770, 'url': 'https://external-preview.redd.it/MjZwOTdqc3psNm1mMZtBXPQuBBghVYkEG23VKH2rdUK_y7uZuqgwTRJo1CZN.png?format=pjpg&auto=webp&s=0704c0de364964e6eebbd8acb89d0d525631a799', 'width': 1772}, 'variants': {}}]} | |
GLM-4.5V for computer use | 1 | On OSWorld-V, it scores 35.8% - beating UI-TARS-1.5, matching Claude-3.7-Sonnet-20250219, and setting SOTA for fully open-source computer-use models.
Run it with Cua either locally via Hugging Face or remotely via OpenRouter.
Github : https://github.com/trycua
Docs + examples: https://docs.trycua.com/docs/agent-sdk/supported-agents/computer-use-agents#glm-45v
| 2025-08-30T16:13:28 | https://v.redd.it/tswsla0ol6mf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n45kaa | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/tswsla0ol6mf1/DASHPlaylist.mpd?a=1759162424%2CMzZmYTBhZjMyZDBjN2RiYmJlZmIyMzc4MGU1NjMzM2YzYTU0NWQ0YTUwODFkZTJlMThhZDEzMTA5NzRjYTUxOA%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/tswsla0ol6mf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/tswsla0ol6mf1/HLSPlaylist.m3u8?a=1759162424%2COTM2N2U5ODhlODFmOTBmN2M3NjkxMDM3ZjI4MTkxZGRlZDI3MWU4ZjhmN2JkZTZhMmJhOTNkM2JjNTA5MjAwYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tswsla0ol6mf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1152}} | t3_1n45kaa | /r/LocalLLaMA/comments/1n45kaa/glm45v_for_computer_use/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bjI4NjJqcW5sNm1mMUyjGwnJS8rotX6d0qpdBh20m0kIIRiKyvfLkkchTDUy', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/bjI4NjJqcW5sNm1mMUyjGwnJS8rotX6d0qpdBh20m0kIIRiKyvfLkkchTDUy.png?width=108&crop=smart&format=pjpg&auto=webp&s=f1e1472f702ede5a3793349e81051f29f180952b', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/bjI4NjJqcW5sNm1mMUyjGwnJS8rotX6d0qpdBh20m0kIIRiKyvfLkkchTDUy.png?width=216&crop=smart&format=pjpg&auto=webp&s=d251da55965198bb6a72315faa0c3e2ccf1da341', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/bjI4NjJqcW5sNm1mMUyjGwnJS8rotX6d0qpdBh20m0kIIRiKyvfLkkchTDUy.png?width=320&crop=smart&format=pjpg&auto=webp&s=863b6cd56e84c0f5d02e08de74756201e6258563', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/bjI4NjJqcW5sNm1mMUyjGwnJS8rotX6d0qpdBh20m0kIIRiKyvfLkkchTDUy.png?width=640&crop=smart&format=pjpg&auto=webp&s=61dac2ba0dca0440ee648560aa7ef4a3aa6fbb0a', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/bjI4NjJqcW5sNm1mMUyjGwnJS8rotX6d0qpdBh20m0kIIRiKyvfLkkchTDUy.png?width=960&crop=smart&format=pjpg&auto=webp&s=5b5c2ca612f8baaf334f77e0378635a77250382a', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/bjI4NjJqcW5sNm1mMUyjGwnJS8rotX6d0qpdBh20m0kIIRiKyvfLkkchTDUy.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c76e2b51ca316208f0f2891190aba88ebf39225e', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/bjI4NjJqcW5sNm1mMUyjGwnJS8rotX6d0qpdBh20m0kIIRiKyvfLkkchTDUy.png?format=pjpg&auto=webp&s=b51b95c7ae8904d5df8b3c0c7dcb744bddf50818', 'width': 1152}, 'variants': {}}]} | |
He got the new AMD Ai PRoO R9700 R32GB card | 3 | 2025-08-30T16:00:07 | https://youtube.com/shorts/hTYYbFIs66o?si=n7WYKqd-tm9hYnpm | sub_RedditTor | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1n458gt | false | {'oembed': {'author_name': 'UFD Tech', 'author_url': 'https://www.youtube.com/@UFDTech', 'height': 200, 'html': '<iframe width="113" height="200" src="https://www.youtube.com/embed/hTYYbFIs66o?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI Pro R9700"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/hTYYbFIs66o/hq2.jpg', 'thumbnail_width': 480, 'title': 'AI Pro R9700', 'type': 'video', 'version': '1.0', 'width': 113}, 'type': 'youtube.com'} | t3_1n458gt | /r/LocalLLaMA/comments/1n458gt/he_got_the_new_amd_ai_proo_r9700_r32gb_card/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'JYLz9ZxPJjxgzDbn0GmZzQ380-oGWjJDqA1Pxov8dC8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/JYLz9ZxPJjxgzDbn0GmZzQ380-oGWjJDqA1Pxov8dC8.jpeg?width=108&crop=smart&auto=webp&s=1ced7289cfeccb9340325138514fd0d656c2f9f5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/JYLz9ZxPJjxgzDbn0GmZzQ380-oGWjJDqA1Pxov8dC8.jpeg?width=216&crop=smart&auto=webp&s=d435ad25a4d4a9dcf90cbcebdbdb3410e2413b06', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/JYLz9ZxPJjxgzDbn0GmZzQ380-oGWjJDqA1Pxov8dC8.jpeg?width=320&crop=smart&auto=webp&s=c7ab0be3a4b5f64bc051920e3d4c5264477b9a5a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/JYLz9ZxPJjxgzDbn0GmZzQ380-oGWjJDqA1Pxov8dC8.jpeg?auto=webp&s=a0d6335000699bfb67fd41a67cc6afb505438536', 'width': 480}, 'variants': {}}]} | |
Three Exciting Projects Using Ollama's Vision Models and Structured Output | 0 | 2025-08-30T15:52:28 | https://youtu.be/ZZHWLXyZHlA | Flashy-Thought-5472 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1n45219 | false | {'oembed': {'author_name': 'Nariman Codes', 'author_url': 'https://www.youtube.com/@NarimanCodes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ZZHWLXyZHlA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Three Exciting Projects Using Ollama's Vision Models and Structured Output"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ZZHWLXyZHlA/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Three Exciting Projects Using Ollama's Vision Models and Structured Output", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1n45219 | /r/LocalLLaMA/comments/1n45219/three_exciting_projects_using_ollamas_vision/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'fnyu4EnnnJVHNSt-sjQqJee7_td6DSOu4X32ivuyjkY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fnyu4EnnnJVHNSt-sjQqJee7_td6DSOu4X32ivuyjkY.jpeg?width=108&crop=smart&auto=webp&s=1ffc57c54336d6869dd7306370bf729553743ea2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fnyu4EnnnJVHNSt-sjQqJee7_td6DSOu4X32ivuyjkY.jpeg?width=216&crop=smart&auto=webp&s=5dd8387e97850dd72ed882530c08ebaa2194bf98', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fnyu4EnnnJVHNSt-sjQqJee7_td6DSOu4X32ivuyjkY.jpeg?width=320&crop=smart&auto=webp&s=b7d7bf4f3e39ee2772bffa8c9f954a7452535fb7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fnyu4EnnnJVHNSt-sjQqJee7_td6DSOu4X32ivuyjkY.jpeg?auto=webp&s=baa88a08d0725695f9006110f31a9482ecaae84b', 'width': 480}, 'variants': {}}]} | |
GPT OSS Fine-tuning QAT | 31 | Read more about our (Nvidia) end-to-end example on GPT OSS fine-tuning QAT + SGLang deployment 👉 https://lmsys.org/blog/2025-08-28-gpt-oss-qat/
Fine-tuning QAT helps keep the original MXFP4 quantization of GPT OSS while adapting to downstream tasks.
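For intuition, QAT keeps quantization "in the loop" during fine-tuning: the forward pass sees fake-quantized weights while gradients update the full-precision master copy through a straight-through estimator. A generic PyTorch sketch of the idea (illustrative only, not the actual ModelOpt implementation):

```
import torch

def fake_quant(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Simulate low-bit weights in the forward pass; the straight-through
    estimator lets gradients flow to the full-precision master weights."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return w + (w_q - w).detach()  # forward: quantized; backward: identity
```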
We have some example results (and comparisons to Nvidia’s NVFP4 format) here:
https://developer.nvidia.com/blog/fine-tuning-gpt-oss-for-accuracy-and-performance-with-quantization-aware-training/
Do checkout 🙃! | 2025-08-30T15:51:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n451ka/gpt_oss_finetuning_qat/ | Short_Struggle7803 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n451ka | false | null | t3_1n451ka | /r/LocalLLaMA/comments/1n451ka/gpt_oss_finetuning_qat/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'MynOKTM5dM_RZABGBloJaJn9nBEol9mGL0c2veGjVmQ', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/MynOKTM5dM_RZABGBloJaJn9nBEol9mGL0c2veGjVmQ.png?width=108&crop=smart&auto=webp&s=c0e54022b1d4416213044e26e239f39818990b1e', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/MynOKTM5dM_RZABGBloJaJn9nBEol9mGL0c2veGjVmQ.png?width=216&crop=smart&auto=webp&s=cccdfe62a44ef3a97ce039e8606d5fb39cb4a1b9', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/MynOKTM5dM_RZABGBloJaJn9nBEol9mGL0c2veGjVmQ.png?width=320&crop=smart&auto=webp&s=2560bf752817afc53ec341b70ffc8710185ef200', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/MynOKTM5dM_RZABGBloJaJn9nBEol9mGL0c2veGjVmQ.png?width=640&crop=smart&auto=webp&s=0694d7b8e7714c04c3db1f880b025e35db8bd7c5', 'width': 640}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MynOKTM5dM_RZABGBloJaJn9nBEol9mGL0c2veGjVmQ.png?auto=webp&s=fe5af5d30173244975e9bea857a0a239cde90148', 'width': 900}, 'variants': {}}]} |
Build an AI-Powered Image Search Engine Using Ollama and LangChain | 6 | 2025-08-30T15:51:42 | https://youtu.be/S9ugRzGjFtA | Flashy-Thought-5472 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1n451ee | false | {'oembed': {'author_name': 'Nariman Codes', 'author_url': 'https://www.youtube.com/@NarimanCodes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/S9ugRzGjFtA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Build an AI-Powered Image Search Engine Using Ollama and LangChain"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/S9ugRzGjFtA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Build an AI-Powered Image Search Engine Using Ollama and LangChain', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1n451ee | /r/LocalLLaMA/comments/1n451ee/build_an_aipowered_image_search_engine_using/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': '1-TbC7xgICLdfvDtCoZXXwzT0BxWOljUGaLj15PAyT8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1-TbC7xgICLdfvDtCoZXXwzT0BxWOljUGaLj15PAyT8.jpeg?width=108&crop=smart&auto=webp&s=168a8ceaa2df619af45fef0e9f1472e9e2e18444', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1-TbC7xgICLdfvDtCoZXXwzT0BxWOljUGaLj15PAyT8.jpeg?width=216&crop=smart&auto=webp&s=8012bd2ca0d9b0fe1d9027ead053b7d8cbfa351b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1-TbC7xgICLdfvDtCoZXXwzT0BxWOljUGaLj15PAyT8.jpeg?width=320&crop=smart&auto=webp&s=8e429570269e4e3675c1cd2ce999a212302f0201', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1-TbC7xgICLdfvDtCoZXXwzT0BxWOljUGaLj15PAyT8.jpeg?auto=webp&s=3b3b3ed00e70a0f0e01f49c800b68e41f01db734', 'width': 480}, 'variants': {}}]} | |
OpenWebUI lets you auto expand reasoning now! | 20 | I'm not sure when they added this, but it was a pet peeve of mine so I wanted to share this is how you can turn on show reasoning content automatically. It's just in Settings > Interface > Always Expand Details. I'm guessing that also expands some other things but I don't use any tools so I don't know which. | 2025-08-30T15:47:40 | slpreme | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n44xxi | false | null | t3_1n44xxi | /r/LocalLLaMA/comments/1n44xxi/openwebui_lets_you_auto_expand_reasoning_now/ | false | false | 20 | {'enabled': True, 'images': [{'id': 'zqkjsixPl8WWbhLwj2uX1hxg77126O-CALkQdk9jQHE', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/ms5d125ng6mf1.png?width=108&crop=smart&auto=webp&s=6a97f240d4994acdbd6586c8262b571395683316', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/ms5d125ng6mf1.png?width=216&crop=smart&auto=webp&s=326d776b4ddc6ef54ba9a2f1a13036a6a8d1c1c3', 'width': 216}, {'height': 208, 'url': 'https://preview.redd.it/ms5d125ng6mf1.png?width=320&crop=smart&auto=webp&s=59f29c01f784ed303ff9452d883ac3890b311bee', 'width': 320}, {'height': 417, 'url': 'https://preview.redd.it/ms5d125ng6mf1.png?width=640&crop=smart&auto=webp&s=cb790039d8346bafae55cda40cbc7c0b88f33de8', 'width': 640}, {'height': 625, 'url': 'https://preview.redd.it/ms5d125ng6mf1.png?width=960&crop=smart&auto=webp&s=4af0620059265f5233af5c933811860ed2262545', 'width': 960}, {'height': 703, 'url': 'https://preview.redd.it/ms5d125ng6mf1.png?width=1080&crop=smart&auto=webp&s=f9657820c475ad9fab7c36ed55f17f8b053a040b', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/ms5d125ng6mf1.png?auto=webp&s=1b586659fa9deaec40b004f394ea3ef3c2ba63fd', 'width': 1571}, 'variants': {}}]} | ||
When you want to use the big GPU machine, but VSCode has no way to add a custom endpoint for Ollama... | 0 | This is the way:
netsh interface portproxy add v4tov4 listenport=11434 listenaddress=127.0.0.1 connectport=11434 connectaddress=<ADDRESS OF THE OLLAMA HOST>
I removed the local install of Ollama before attempting.
This method probably works with other OpenAI-compatible API endpoints, too.
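To verify the proxy (and undo it later), plus a quick check that the Ollama API answers on the forwarded port:

netsh interface portproxy show all

curl http://127.0.0.1:11434/api/tags

netsh interface portproxy delete v4tov4 listenport=11434 listenaddress=127.0.0.1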
Total newb here, roast my wrongness if necessary. | 2025-08-30T15:24:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n44dne/when_you_want_to_use_the_big_the_gpu_machine_but/ | Conclusion457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n44dne | false | null | t3_1n44dne | /r/LocalLLaMA/comments/1n44dne/when_you_want_to_use_the_big_the_gpu_machine_but/ | false | false | self | 0 | null |
Best LLM for asking questions about PDFs (reliable, multi-file support)? | 7 | Hey everyone,
I’m looking for the best LLM (large language model) to use with PDFs so I can ask questions about them. Reliability is really important — I don’t want something that constantly hallucinates or gives misleading answers.
Ideally, it should:
Handle multiple files
Let me avoid re-upload | 2025-08-30T14:34:18 | https://www.reddit.com/r/LocalLLaMA/comments/1n435ax/best_llm_for_asking_questions_about_pdfs_reliable/ | User1856 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n435ax | false | null | t3_1n435ax | /r/LocalLLaMA/comments/1n435ax/best_llm_for_asking_questions_about_pdfs_reliable/ | false | false | self | 7 | null |
3000+ downloads in a week, thank you Reddit and r/locallama community. | 0 | 2025-08-30T14:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/1n4358t/3000_downloads_in_a_week_thank_you_reddit_and/ | Trilogix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4358t | false | null | t3_1n4358t | /r/LocalLLaMA/comments/1n4358t/3000_downloads_in_a_week_thank_you_reddit_and/ | false | false | 0 | null | ||
How do you people run GLM 4.5 locally? | 55 | For context, I have a dual RTX 3090 rig with 128 GB of DDR5 RAM, and no matter what I try I get around 6 tokens per second...

With CPU-only inference I get between 5 and 6 tokens/s, while with partial GPU offload I get between 5.5 and 6.8 tokens/s.

I tried 2 different versions: the one from Unsloth, Q4_K_S (https://huggingface.co/unsloth/GLM-4.5-Air-GGUF), and the one from LovedHeart, MXFP4 (https://huggingface.co/lovedheart/GLM-4.5-Air-GGUF-IQ1_M).

The one from Unsloth is 1 token per second slower, but either way the story doesn't change.

I changed literally all the settings in LM Studio, and even managed to get it to load with the full 131k context, but it's still nowhere near the speed other users get on a single 3090 with offloading.

I tried installing vLLM, but I got too many errors and gave up.

Is there another program I should try? Have I chosen the wrong models?
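For reference, the setup I keep seeing recommended (and will try next) is llama.cpp's llama-server with every layer on the GPU and only the MoE expert tensors kept in system RAM. A sketch, assuming a recent build that has the --n-cpu-moe flag (tune the count to your VRAM):

llama-server -m GLM-4.5-Air-Q4_K_S.gguf -ngl 99 --n-cpu-moe 30 -c 32768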
It's really frustrating and it's taking me too much hours to solve | 2025-08-30T14:30:45 | https://www.reddit.com/r/LocalLLaMA/comments/1n4329n/how_do_you_people_run_glm_45_locally/ | Skystunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4329n | false | null | t3_1n4329n | /r/LocalLLaMA/comments/1n4329n/how_do_you_people_run_glm_45_locally/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=108&crop=smart&auto=webp&s=be66257dfb8060c1200a8a0cd0ca42206175a8fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=216&crop=smart&auto=webp&s=f8665f38a095c32a96a4241162e510534fdc9bbe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=320&crop=smart&auto=webp&s=1f0117624421d1bf73d3c0a0635561dfc5bbb8e8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=640&crop=smart&auto=webp&s=d204df30f143e07de2de5c6a86cf3af0941abcfd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=960&crop=smart&auto=webp&s=8972d6fb8a82908da65f616af69f7e9257fa603c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=1080&crop=smart&auto=webp&s=36fc891805e97b0c4b13376f984592d115838078', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?auto=webp&s=ec4f533fe7bc79ce6b3925802ad450c616ba1119', 'width': 1200}, 'variants': {}}]} |
You're (Probably) Breaking the Llama Community License | 0 | 2025-08-30T13:31:13 | https://notes.victor.earth/youre-probably-breaking-the-llama-community-license/ | vibjelo | notes.victor.earth | 1970-01-01T00:00:00 | 0 | {} | 1n41pon | false | null | t3_1n41pon | /r/LocalLLaMA/comments/1n41pon/youre_probably_breaking_the_llama_community/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'NcGvl2HyVLawVbPVMvOEPMXM3M0egTmEpSsSce1BJFs', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/NcGvl2HyVLawVbPVMvOEPMXM3M0egTmEpSsSce1BJFs.png?width=108&crop=smart&auto=webp&s=e8d3196a7235afa4b5056a918e05bba6854a3ec2', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/NcGvl2HyVLawVbPVMvOEPMXM3M0egTmEpSsSce1BJFs.png?width=216&crop=smart&auto=webp&s=5b828e9faefdb81b1349b256177e070677d62ef8', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/NcGvl2HyVLawVbPVMvOEPMXM3M0egTmEpSsSce1BJFs.png?width=320&crop=smart&auto=webp&s=f255028a2b365788e68d8c1c470073f9309e1238', 'width': 320}, {'height': 367, 'url': 'https://external-preview.redd.it/NcGvl2HyVLawVbPVMvOEPMXM3M0egTmEpSsSce1BJFs.png?width=640&crop=smart&auto=webp&s=4cd24402a24cd6be10bf16fc5dfc5657df77a309', 'width': 640}, {'height': 551, 'url': 'https://external-preview.redd.it/NcGvl2HyVLawVbPVMvOEPMXM3M0egTmEpSsSce1BJFs.png?width=960&crop=smart&auto=webp&s=8f9b656180c76fefc228a6d8427f61f9c0758422', 'width': 960}, {'height': 620, 'url': 'https://external-preview.redd.it/NcGvl2HyVLawVbPVMvOEPMXM3M0egTmEpSsSce1BJFs.png?width=1080&crop=smart&auto=webp&s=ff9116b266ccb650073816e763badd68da534f36', 'width': 1080}], 'source': {'height': 1526, 'url': 'https://external-preview.redd.it/NcGvl2HyVLawVbPVMvOEPMXM3M0egTmEpSsSce1BJFs.png?auto=webp&s=1730e8cfd275e0c14e2fa8315ad49ed8ca7d3ce7', 'width': 2656}, 'variants': {}}]} | ||
LM Studio on older CPUs & Vulkan GPUs? Done! | 17 | LM Studio devs state it’s impossible to run on anything older than AVX2 CPUs… I say the MIT license and a bit of compiler magic make it run on anything 😂
Try the patched backends here and enjoy:
[https://github.com/theIvanR/lmstudio-unlocked-backend](https://github.com/theIvanR/lmstudio-unlocked-backend)
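For the curious, the "compiler magic" is essentially rebuilding the bundled llama.cpp backend with the AVX2 requirement switched off. A minimal sketch, assuming a stock llama.cpp CMake tree (these are the standard GGML_* options; tune them to your CPU):

```
cmake -B build -DGGML_AVX2=OFF -DGGML_FMA=OFF
cmake --build build --config Release
```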
Screenshots:

https://preview.redd.it/w36wxlp5n5mf1.png?width=2419&format=png&auto=webp&s=5bc5e450fb1ea26afc43817fa2cba366b3cc61c1

https://preview.redd.it/wk4a2cu6n5mf1.png?width=2031&format=png&auto=webp&s=c89869be3b29561a04a79c9fd2fb482def4c6289
| 2025-08-30T13:02:02 | https://www.reddit.com/r/LocalLLaMA/comments/1n4132x/lm_studio_on_older_cpus_vulkan_gpus_done/ | TheSpicyBoi123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4132x | false | null | t3_1n4132x | /r/LocalLLaMA/comments/1n4132x/lm_studio_on_older_cpus_vulkan_gpus_done/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'xDWIKqsM7gyhZ-1fiaj8ct2CzPw8KsFQKzwmZ9k0cg8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xDWIKqsM7gyhZ-1fiaj8ct2CzPw8KsFQKzwmZ9k0cg8.png?width=108&crop=smart&auto=webp&s=c44c08f927bd513b0d3143c2f37771257945b7ee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xDWIKqsM7gyhZ-1fiaj8ct2CzPw8KsFQKzwmZ9k0cg8.png?width=216&crop=smart&auto=webp&s=9895b31c2a6830c15de16e0d029bd139ea796439', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xDWIKqsM7gyhZ-1fiaj8ct2CzPw8KsFQKzwmZ9k0cg8.png?width=320&crop=smart&auto=webp&s=f3c31cf22641ffd7bc8f491f73de4a96ee0e54fb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xDWIKqsM7gyhZ-1fiaj8ct2CzPw8KsFQKzwmZ9k0cg8.png?width=640&crop=smart&auto=webp&s=bea2b2a046ad01085a9fc976fd4899f56931e0da', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xDWIKqsM7gyhZ-1fiaj8ct2CzPw8KsFQKzwmZ9k0cg8.png?width=960&crop=smart&auto=webp&s=25d074addd77e06b931b64aefe62eec522d50760', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xDWIKqsM7gyhZ-1fiaj8ct2CzPw8KsFQKzwmZ9k0cg8.png?width=1080&crop=smart&auto=webp&s=1679bbc7b41f99aac1b5a9c36ae7856c7ab27c2d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xDWIKqsM7gyhZ-1fiaj8ct2CzPw8KsFQKzwmZ9k0cg8.png?auto=webp&s=9e0d1db48ddc8d6cfc7ad9ff57a809903c3f85e5', 'width': 1200}, 'variants': {}}]} | |
Suggestions for Innovative Topics in Vision-Language Models? | 1 | Hi everyone,
I’m currently preparing for a project and I want to focus on **Vision-Language Models (VLMs)**. I’ve read about big names like CLIP, BLIP, Flamingo, LLaVA, etc., and I know there’s a lot of research going on.
But I’d like to work on something that’s **innovative and practical**. I’m especially interested in areas like:
– Accessibility (e.g., helping PWDs through multimodal AI)
– Healthcare
– Anomaly Detection
Do you have any suggestions for thesis topics, project ideas, or even datasets I could look into? I’d love to hear from those who’ve done related research or know current gaps/opportunities in VLMs.
Thanks in advance! | 2025-08-30T13:02:01 | https://www.reddit.com/r/LocalLLaMA/comments/1n4132l/suggestions_for_innovative_topics_in/ | Wraithraisrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n4132l | false | null | t3_1n4132l | /r/LocalLLaMA/comments/1n4132l/suggestions_for_innovative_topics_in/ | false | false | self | 1 | null |
Is this a good quote for a Supermicro Grace Hopper (GH200) system? | 0 | Hey folks,
I just got a quote from a local distributor in Turkiye for a Supermicro system built around the GH200. Do you think this is a fair price compared to the international market? Would it make more sense to go with multiple H100s instead, or is the GH200 actually worth it for my workloads?
Any insights from people who have worked with GH200 or priced similar would be super helpful 🙏
[Quote](https://preview.redd.it/b19di2bml5mf1.png?width=1524&format=png&auto=webp&s=207de5cdf11763cbb2457c4e0aa19a71115149ac)
| 2025-08-30T12:51:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n40v41/is_this_a_good_quote_for_a_supermicro_grace/ | alienpro01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n40v41 | false | null | t3_1n40v41 | /r/LocalLLaMA/comments/1n40v41/is_this_a_good_quote_for_a_supermicro_grace/ | false | false | 0 | null | |
GPT5 is so close to being agi… | 0 | This is my go to test to know if we’re near agi. The new Turing test. | 2025-08-30T12:50:24 | WatsonTAI | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n40u3h | false | null | t3_1n40u3h | /r/LocalLLaMA/comments/1n40u3h/gpt5_is_so_close_to_being_agi/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'm4bgrkxmdBvZqP5RFMIDyKYzKk3b5rgeqWZW5Ormsp4', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/hx661wyfl5mf1.jpeg?width=108&crop=smart&auto=webp&s=0dbff7763acaae682af23b539b3b2eecfcff7d30', 'width': 108}, {'height': 247, 'url': 'https://preview.redd.it/hx661wyfl5mf1.jpeg?width=216&crop=smart&auto=webp&s=503a031371313cf4fbc6235f95efb6a58f1e50d3', 'width': 216}, {'height': 367, 'url': 'https://preview.redd.it/hx661wyfl5mf1.jpeg?width=320&crop=smart&auto=webp&s=66979a693519885c5192ab6886c76bb8f50ddacb', 'width': 320}, {'height': 734, 'url': 'https://preview.redd.it/hx661wyfl5mf1.jpeg?width=640&crop=smart&auto=webp&s=afae09682405a012ea7fd2e71d104b0fd1ec25f2', 'width': 640}, {'height': 1102, 'url': 'https://preview.redd.it/hx661wyfl5mf1.jpeg?width=960&crop=smart&auto=webp&s=ef570079cf275c8726fdc1cf1c29ed6c919d4633', 'width': 960}, {'height': 1239, 'url': 'https://preview.redd.it/hx661wyfl5mf1.jpeg?width=1080&crop=smart&auto=webp&s=69ec019b88ed8adde248dd4c75bf8f7237c6581a', 'width': 1080}], 'source': {'height': 1474, 'url': 'https://preview.redd.it/hx661wyfl5mf1.jpeg?auto=webp&s=a2281a35b3e850a0de80e4c563078122281ca960', 'width': 1284}, 'variants': {}}]} | ||
What is the slowest Token/sec you can live with? | 107 | Me:
5tok/s is the slowest I’ll accept | 2025-08-30T12:41:26 | https://www.reddit.com/r/LocalLLaMA/comments/1n40ngf/what_is_the_slowest_tokensec_you_can_live_with/ | OrganicApricot77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n40ngf | false | null | t3_1n40ngf | /r/LocalLLaMA/comments/1n40ngf/what_is_the_slowest_tokensec_you_can_live_with/ | false | false | self | 107 | null |
When AI loses its soul, only operations remain | 0 | This person is a stupid X who wants to attract attention, deliberately saying contradictory things to ensnare others. ChatGPT 5 will analyze it carefully, while Claude Opus 4.1 will just say "this person is sick". | 2025-08-30T12:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/1n408ud/when_ai_loses_its_soul_only_operations_remain/ | JasonYang1923 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n408ud | false | null | t3_1n408ud | /r/LocalLLaMA/comments/1n408ud/when_ai_loses_its_soul_only_operations_remain/ | false | false | self | 0 | null |
Training an 11M language model for Raspberry Pi Pico - progress | 54 | Hi guys, I have been training a language model for the Raspberry Pi Pico (fast & smart). My research showed me that 11M parameters is the sweet spot (about 2-3 tokens/s) to start with.
What I use:
\- For training: PicoLM
\- GPU: 5090 on Prime Intellect (from grants), H100 on [Ori.co](http://Ori.co) (free credits), AWS GPU (free credits)
Progress:
\- I have been training it on 5M rows of Dolma with the Olmo 7B tokenizer
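Quick napkin math on why the parameter count matters so much at this scale (a sketch; assumes plain quantized weights with no overhead):

```
params = 11e6
for bits in (32, 8, 4):
    print(f"{bits}-bit: {params * bits / 8 / 1e6:.1f} MB")
# 32-bit: 44.0 MB / 8-bit: 11.0 MB / 4-bit: 5.5 MB
# A stock Pico has ~2 MB of flash and ~264 KB of SRAM, so aggressive
# quantization and streaming weights from (external) flash are mandatory.
```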
Checkpoints:
17k:
### Prompt 7: "In the year 2050,"
**Response**:
```
we have been doing that it was just for the same time, he said, and I have a lot of my father, and it was my own. We had a very much of a little as I wanted to do it. I'm not sure, but the people just like to me and I've never been a lot of my blog, so that I am not the most likely was that she's not to be my face. I wanted to get to me,
```
75k:
### Prompt 1: "Hello, how are you?"
**Response**:
```
I've been my own own name. I think I've got a couple of reasons. I think I was a lot of my mind and my own way I did not know my work. I got the way I took a little time with my first time. I am very sure I do. I like a good, I’m sure I did my car but I do have to be on my job.
I've been with my friend and
```
All benchmark results: [https://github.com/ThomasVuNguyen/Starmind-Zero/tree/main/benchmarks/results](https://github.com/ThomasVuNguyen/Starmind-Zero/tree/main/benchmarks/results)
All datasets & code are available on my github:
\- Github: [https://github.com/ThomasVuNguyen/Starmind-Zero](https://github.com/ThomasVuNguyen/Starmind-Zero)
\- Huggingface: [https://huggingface.co/ThomasTheMaker](https://huggingface.co/ThomasTheMaker)
Note:
\- I am not an ML scientist. Purely an AI startup founder with too much energy to just do normal engineering and be happy. | 2025-08-30T12:19:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n407lv/training_a_11m_language_model_for_raspberry_pi/ | ThomasPhilli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n407lv | false | null | t3_1n407lv | /r/LocalLLaMA/comments/1n407lv/training_a_11m_language_model_for_raspberry_pi/ | false | false | self | 54 | null |
Anyone here using local LLMs for study/education tools? | 1 | [removed] | 2025-08-30T11:40:02 | https://www.reddit.com/r/LocalLLaMA/comments/1n3zgai/anyone_here_using_local_llms_for_studyeducation/ | very_good_user_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3zgai | false | null | t3_1n3zgai | /r/LocalLLaMA/comments/1n3zgai/anyone_here_using_local_llms_for_studyeducation/ | false | false | self | 1 | null |
Anyone here using local LLMs for study/education tools? | 1 | [removed] | 2025-08-30T11:38:50 | https://www.reddit.com/r/LocalLLaMA/comments/1n3zfj7/anyone_here_using_local_llms_for_studyeducation/ | good_user_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3zfj7 | false | null | t3_1n3zfj7 | /r/LocalLLaMA/comments/1n3zfj7/anyone_here_using_local_llms_for_studyeducation/ | false | false | self | 1 | null |
[Resource] Always-on OSS models for coding (DeepSeek3.1 / Llama 3.x / Qwen Coder 3 / GLM-4.5) with flat pricing | 0 | # Flat-rate open-source coding LLMs — works with Cline / Aider / Roo / Octofriend, etc.
**Disclosure:** This post mentions a referral offer and a non-affiliate alternative. Links are in the first comment.
**TL;DR:** Flat monthly pricing for always-on OSS LLMs (UI + OpenAI-compatible API). Drops straight into Cline, KiloCode, Roo, Aider, Octofriend, OpenWebUI, SillyTavern — no token anxiety.
## What it is (why you might care)
- **Predictable cost** for coding agents (no per-token surprises)
- **Strong OSS coverage** (DeepSeek line, Llama 3.x/3.3, Qwen3 Coder, GLM-4.5, Kimi K2)
- **Agent-friendly** (function calling / tool use actually tested)
- **Privacy stance** (no training on your data; short operational retention)
## Plans / limits (quick view)
- **$20/mo → 125 messages per 5 hours**
- **$60/mo → 1,250 messages per 5 hours**
*(Both cover UI + API; on-demand HF repos are billed separately per minute.)*
---
### Quoted bits from the announcement (context)
***A flat monthly subscription to open-source LLMs***
*Published 2025-08-28*
> **Subscriptions**
> We’re launching subscriptions to all of our always-on LLMs — covering most major open-source coding models. Some favorites:
> • **GLM-4.5** — hybrid reasoner that avoids overthinking; compares well to closed models like Sonnet 4.
> • **Kimi K2** — very good non-reasoning model.
> • **Qwen3 Coder 480B** — excellent at one-shot coding tasks.
> • **DeepSeek 3.1** — likely the last of the DeepSeek V3 line, tuned for agentic coding.
>
> Works out of the box with **Cline, KiloCode, Roo, Aider, Octofriend**, plus **OpenWebUI / SillyTavern**.
>
> They also claim better **reliability** vs **OpenRouter**; in **Aider’s testing** there’s a ~**10-point** completion gap between the official **Qwen3 Coder** API and round-robined OpenRouter hosts. They test against **Octofriend** before release — not all OSS deployments are equal.
---
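### Quick smoke test

A minimal sketch of pointing any OpenAI-compatible client at it (the base URL and model slug below are placeholders, not confirmed values):

```
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder, not the real endpoint
    api_key="YOUR_KEY",
)
resp = client.chat.completions.create(
    model="glm-4.5",  # placeholder slug
    messages=[{"role": "user", "content": "Say hi in one line."}],
)
print(resp.choices[0].message.content)
```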
## Questions for the sub
- Which OSS models feel most reliable for **function-calling** chains lately?
- Any **Aider/Cline** config tips when swapping providers (timeouts/tools)?
- Your sweet spot for **context length vs latency** on OSS stacks? | 2025-08-30T11:19:15 | https://www.reddit.com/r/LocalLLaMA/comments/1n3z37l/resource_alwayson_oss_models_for_coding/ | pixiedustnomore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3z37l | false | null | t3_1n3z37l | /r/LocalLLaMA/comments/1n3z37l/resource_alwayson_oss_models_for_coding/ | false | false | self | 0 | null |
Need some guidance on how to improve an LLM for autocompleting a different coding language | 0 | Hello experts of LocalLLaMA!
I'm using Unreal AngelScript for my project and I'm in need of a good AI autocomplete solution. I used to use Cody AI (free tier), but since they shut it down I've tried dozens of other options (albeit free ones only), and the closest fit for me is Qwen 1.5B on Ollama with Continue.dev. However, there are still 2 issues:
\- Accuracy: Though AngelScript is very close to C++/C#, it still has some differences. From my understanding, fine-tuning is the answer for this, but there aren't any datasets for AngelScript readily available. How do I even start? Could someone point me in the right direction? (e.g., what tools to use, where I can get the data from (legally?), etc.)
\- Speed: Though the model is 1.5B, and I even tried the 0.5B Zero, there is still a couple of seconds of delay before autocomplete appears, while Cody used to be almost instant. My specs are: i7-8700, 2080 (8GB VRAM), 48GB RAM. I guess there is something wrong with Continue, because talking to the models directly gives me a 100+ t/s eval rate. Are there any settings I can tweak to improve this? | 2025-08-30T10:48:12 | https://www.reddit.com/r/LocalLLaMA/comments/1n3yk22/need_some_guidance_on_how_to_improve_llm_for/ | EpiKnightz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3yk22 | false | null | t3_1n3yk22 | /r/LocalLLaMA/comments/1n3yk22/need_some_guidance_on_how_to_improve_llm_for/ | false | false | self | 0 | null |
llama.cpp gptoss 20b set activated experts number to 2 | 4 | I have an i7 desktop with 16 GB of memory. I am trying llama.cpp with gpt-oss 20b. I have found that it is CPU-bound rather than memory-bound, and I am trying to maximize performance. I figured out that reducing the number of activated experts can improve performance dramatically. Below is my setting: --override-kv gpt-oss.expert_used_count=int:2. In my testing it seems to work fine. Does anyone know if there are any side effects? | 2025-08-30T10:45:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n3yi9f/llamacpp_gptoss_20b_set_activated_exports_number/ | ggzy12345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3yi9f | false | null | t3_1n3yi9f | /r/LocalLLaMA/comments/1n3yi9f/llamacpp_gptoss_20b_set_activated_exports_number/ | false | false | self | 4 | null |
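One way to check for side effects and quantify the speed-up is to run the same prompts against llama-server started with and without the override and compare tokens/sec plus answer quality. A minimal sketch, assuming llama-server is listening on localhost:8080 with its OpenAI-compatible API; the model name in the request is just a label.

```python
import time
import requests

PROMPTS = ["Explain what a mixture-of-experts model is in two sentences."]

def bench(port: int = 8080):
    for prompt in PROMPTS:
        t0 = time.time()
        r = requests.post(
            f"http://localhost:{port}/v1/chat/completions",
            json={
                "model": "gpt-oss-20b",  # label only; the server serves whatever model it loaded
                "messages": [{"role": "user", "content": prompt}],
                "max_tokens": 256,
            },
            timeout=600,
        )
        data = r.json()
        dt = time.time() - t0
        toks = data.get("usage", {}).get("completion_tokens", 0)
        print(f"{toks} tokens in {dt:.1f}s -> {toks / max(dt, 1e-6):.1f} tok/s")
        print(data["choices"][0]["message"]["content"][:200])

bench()
```

Run it once with the expert override and once without, then compare both the throughput numbers and whether the answers degrade.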
🌟Introducing Art-0-8B: Reasoning the way you want it to with Adaptive Thinking🌟 | 108 | Hi everyone! Today I'm announcing a new experimental open-source model - Art-0-8B is the first reasoning model where users can explicitly control how the model thinks through prompts.
Unlike normal reasoning models that only let you control the final output, Art-0-8B lets you control the actual thinking process. Tell it to "think in rap lyrics" or "use bullet points to organize thoughts" and it will literally reason that way before giving you an answer.
You can check out the model on HuggingFace: [https://huggingface.co/AGI-0/Art-0-8B](https://huggingface.co/AGI-0/Art-0-8B)
Let me know your thoughts! | 2025-08-30T10:08:44 | https://www.reddit.com/r/LocalLLaMA/comments/1n3xxm5/introducing_art08b_reasoning_the_way_you_want_it/ | GuiltyBookkeeper4849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3xxm5 | false | null | t3_1n3xxm5 | /r/LocalLLaMA/comments/1n3xxm5/introducing_art08b_reasoning_the_way_you_want_it/ | false | false | self | 108 | {'enabled': False, 'images': [{'id': 'K8g-lxECJuUP7YNcRwgSMfa8E6-lfPaDVmxDbfM-7G0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/K8g-lxECJuUP7YNcRwgSMfa8E6-lfPaDVmxDbfM-7G0.png?width=108&crop=smart&auto=webp&s=ea6f327a1951963aeca588357ac82473e23655a7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/K8g-lxECJuUP7YNcRwgSMfa8E6-lfPaDVmxDbfM-7G0.png?width=216&crop=smart&auto=webp&s=4706520200335e829f11905b5208a8ea9e033170', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/K8g-lxECJuUP7YNcRwgSMfa8E6-lfPaDVmxDbfM-7G0.png?width=320&crop=smart&auto=webp&s=23520495079dd418afee0595952286e42c971fd5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/K8g-lxECJuUP7YNcRwgSMfa8E6-lfPaDVmxDbfM-7G0.png?width=640&crop=smart&auto=webp&s=fb6f4c1bdd1e74b540d17a79541b39c0a4b45b02', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/K8g-lxECJuUP7YNcRwgSMfa8E6-lfPaDVmxDbfM-7G0.png?width=960&crop=smart&auto=webp&s=369a956040c17c61dfa3a6460edb63568ddbcecd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/K8g-lxECJuUP7YNcRwgSMfa8E6-lfPaDVmxDbfM-7G0.png?width=1080&crop=smart&auto=webp&s=b2fca02ad6af1384075bc5b663f4e37d52ca0071', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/K8g-lxECJuUP7YNcRwgSMfa8E6-lfPaDVmxDbfM-7G0.png?auto=webp&s=54f000ff4d0c72b3d74e2c2e054f2951d0df2512', 'width': 1200}, 'variants': {}}]} |
Free hosting | 0 | I have made an AI web app with a React frontend and a Python backend. I want free cloud hosting for it; a free tier is also OK, but it should have decent resources. | 2025-08-30T09:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/1n3xmnx/free_hosting/ | ShoulderTough8758 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3xmnx | false | null | t3_1n3xmnx | /r/LocalLLaMA/comments/1n3xmnx/free_hosting/ | false | false | self | 0 | null |
Looking for Feedback: A Tool to Clean and Structure Messy Data for AI Agents | 1 | [removed] | 2025-08-30T09:21:37 | https://www.reddit.com/r/LocalLLaMA/comments/1n3x82z/looking_for_feedback_a_tool_to_clean_and/ | madnikhosa3442 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3x82z | false | null | t3_1n3x82z | /r/LocalLLaMA/comments/1n3x82z/looking_for_feedback_a_tool_to_clean_and/ | false | false | self | 1 | null |
Does anyone have cloud recommendations for deploying Llama 3.2? | 5 | It needs to be easy to deploy and work like a "serverless" app that I can ping with a JSON request containing a prompt. I tried Runpod serverless using `runpod-worker-ollama` (https://github.com/SvenBrnn/runpod-worker-ollama/tree/master), but I didn't see a way to submit an image (rather than just a text prompt) to the queue endpoint.
Anyone got suggestions? | 2025-08-30T09:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n3x562/does_anyone_have_cloud_recommendations_to_deploy/ | Top-Associate-4136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3x562 | false | null | t3_1n3x562 | /r/LocalLLaMA/comments/1n3x562/does_anyone_have_cloud_recommendations_to_deploy/ | false | false | self | 5 | null |
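On the image question: Ollama itself accepts base64-encoded images in an `images` list, so if the worker forwards the request body to Ollama, a payload shaped like the sketch below may work. Treat the wrapper format as a guess rather than the worker's documented schema; the endpoint id and model tag are placeholders.

```python
import base64
import requests

with open("photo.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

# Hypothetical payload shape: Runpod serverless wraps everything under "input",
# and Ollama-style requests carry base64 images in an "images" list.
payload = {
    "input": {
        "model": "llama3.2-vision",      # placeholder model tag
        "prompt": "Describe this image.",
        "images": [img_b64],
    }
}

resp = requests.post(
    "https://api.runpod.ai/v2/<ENDPOINT_ID>/runsync",   # endpoint id elided
    headers={"Authorization": "Bearer <RUNPOD_API_KEY>"},
    json=payload,
    timeout=300,
)
print(resp.json())
```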
Open Source Custom Version Of Gemini Deepthink / GPT-5 Pro | 5 | Repo link: [https://github.com/ryoiki-tokuiten/Iterative-Contextual-Refinements](https://github.com/ryoiki-tokuiten/Iterative-Contextual-Refinements)
Website link: [https://ryoiki-tokuiten.github.io/Iterative-Contextual-Refinements](https://ryoiki-tokuiten.github.io/Iterative-Contextual-Refinements)
Deepthink is just the generalized architecture behind the math mode, with which I was able to solve 5/6 IMO problems using 2.5 Pro. You can re-verify the results by running the experiment (please do best-of-3 runs, because the judge agents sometimes get biased; and I'd honestly suggest using math mode for really technical math problems, including IMO problems).
You can use Deepthink with Gemini 2.5 Pro, Flash, or Flash Lite. It works well with 2.5 Pro and 2.5 Flash; Flash Lite only does well on language, relations, and small coding tasks.
Additionally, I have made minor architectural change: Now the sub-strategies are monitored with red-team to increase the efficiency of the overall system -- Flawed strategies will be killed and their solutions will not be attempted. Also, Now given much more weight to the knowledge packet generated through the hypothesis testing is given much more weight than previously, thus increasing the performance of smaller models like 2.5 Flash. | 2025-08-30T09:14:46 | https://v.redd.it/6z2y36nbi4mf1 | Ryoiki-Tokuiten | /r/LocalLLaMA/comments/1n3x4ga/open_source_custom_version_of_gemini_deepthink/ | 1970-01-01T00:00:00 | 0 | {} | 1n3x4ga | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6z2y36nbi4mf1/DASHPlaylist.mpd?a=1759266893%2CZWY5ZDJiNGRiYWYyMzMzMzFhMmIxNzg5OTcwOTFlNDE3MTU0ZTI5MDJkOWRjMmI4MDg5ODE3NDZmMmJhYjI1ZA%3D%3D&v=1&f=sd', 'duration': 128, 'fallback_url': 'https://v.redd.it/6z2y36nbi4mf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/6z2y36nbi4mf1/HLSPlaylist.m3u8?a=1759266893%2CY2FiNTViZjc5ZWI1MDcxNTVhYTVmMjFkYTg0N2QxMTVlYjM4ZWM5OTcxNDFiZDFiNzg1ZDVkZTUwM2NiZjI4MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6z2y36nbi4mf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n3x4ga | /r/LocalLLaMA/comments/1n3x4ga/open_source_custom_version_of_gemini_deepthink/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'b2s1YXc2bmJpNG1mMXXbbQ-kngDdVzo1upRfeWdwubCA0DoGSh-dad3amKlv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b2s1YXc2bmJpNG1mMXXbbQ-kngDdVzo1upRfeWdwubCA0DoGSh-dad3amKlv.png?width=108&crop=smart&format=pjpg&auto=webp&s=ef139b1188595cda2dbcb1dec64e086bd06d919a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/b2s1YXc2bmJpNG1mMXXbbQ-kngDdVzo1upRfeWdwubCA0DoGSh-dad3amKlv.png?width=216&crop=smart&format=pjpg&auto=webp&s=d1059b9897737f25ebfb430bf8b9c0da6fe094b0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/b2s1YXc2bmJpNG1mMXXbbQ-kngDdVzo1upRfeWdwubCA0DoGSh-dad3amKlv.png?width=320&crop=smart&format=pjpg&auto=webp&s=2566447e093449efb1d0f5e281cc63bd27f39cf7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/b2s1YXc2bmJpNG1mMXXbbQ-kngDdVzo1upRfeWdwubCA0DoGSh-dad3amKlv.png?width=640&crop=smart&format=pjpg&auto=webp&s=e9e53504874346608d94537c3d10d69739b18ef1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/b2s1YXc2bmJpNG1mMXXbbQ-kngDdVzo1upRfeWdwubCA0DoGSh-dad3amKlv.png?width=960&crop=smart&format=pjpg&auto=webp&s=6444d604ec6692364ac88f92c2a4eaf6c6998b7f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/b2s1YXc2bmJpNG1mMXXbbQ-kngDdVzo1upRfeWdwubCA0DoGSh-dad3amKlv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=959bc4a2607f153a447d33490b6e787bad078032', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/b2s1YXc2bmJpNG1mMXXbbQ-kngDdVzo1upRfeWdwubCA0DoGSh-dad3amKlv.png?format=pjpg&auto=webp&s=19fdd2366ac544a6656ff7b2d62ff902a22a9411', 'width': 1920}, 'variants': {}}]} | |
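For anyone re-running the experiment, the best-of-3 advice can be automated with a tiny harness: generate three independent runs and let a judge prompt pick the winner. A minimal sketch, assuming you supply your own `chat(system, user)` function for whatever Gemini or OpenAI-compatible endpoint you are testing.

```python
def best_of_n(chat, problem: str, n: int = 3) -> str:
    """Run the solver n times, then ask a judge to pick the most convincing answer.

    `chat(system, user)` is assumed to be a user-supplied function that calls
    whichever model endpoint is being tested.
    """
    candidates = [chat("You are a careful mathematician.", problem) for _ in range(n)]
    listing = "\n\n".join(f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates))
    verdict = chat(
        "You are a strict grader. Reply with only the number of the best candidate.",
        f"Problem:\n{problem}\n\n{listing}",
    )
    try:
        return candidates[int(verdict.strip()) - 1]
    except (ValueError, IndexError):
        return candidates[0]  # fall back if the judge answer isn't parseable
```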
Anyone test Qwen Omni 3B locally with voice-to-voice speech? | 4 | How much VRAM does it take up? Does it fit in 16 GB? | 2025-08-30T09:09:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n3x1lf/anyone_test_qwen_omni_3b_locally_with_the_voice/ | OrganicApricot77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3x1lf | false | null | t3_1n3x1lf | /r/LocalLLaMA/comments/1n3x1lf/anyone_test_qwen_omni_3b_locally_with_the_voice/ | false | false | self | 4 | null |
Simple way to remotely vibe code with QwenCode? | 0 | I want to be able to hop onto my home Mac’s terminal from any mobile or tablet device, mainly to do some vibe coding when I’m out and about.
Are there existing services/products that already do HITL for this with a nice UI, or is the best route just setting up SSH/VPN myself? I already have Tailscale with a WebSSH app, but the UI isn't that friendly for just approving code / making adjustments.
I think someone posted within the last week about this but I can't seem to find it and it may have been deleted - I remember reading that there were some existing solutions to this. | 2025-08-30T09:08:16 | https://www.reddit.com/r/LocalLLaMA/comments/1n3x0xp/simple_way_to_remotely_vibe_code_with_qwencode/ | AdJust8834 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3x0xp | false | null | t3_1n3x0xp | /r/LocalLLaMA/comments/1n3x0xp/simple_way_to_remotely_vibe_code_with_qwencode/ | false | false | self | 0 | null |
The story about our open source Agent! | 0 | Hey u/LocalLLaMA 👋
I wanted to share the journey behind a wild couple of days building [Droidrun](http://www.droidrun.ai), our open-source agent framework for automating real Android apps.
We started building Droidrun because we were frustrated: everything in automation and agent tech seemed stuck in the browser. But people live on their phones and apps are walled gardens. So we built an agent that could actually tap, scroll, and interact inside real mobile apps, like a human.
A few weeks ago, we posted a short demo no pitch, just an agent running a real Android UI. Within 48 hours:
* We hit [4400+ GitHub Stars](https://github.com/droidrun/droidrun)
* Got devs joining our Discord
* Landed on the radar of investors
* And closed a **$2M+ funding round** shortly after
What worked for us:
* **We led with a real demo**, not a roadmap
* **Posted in the right communities**, not product forums
* **Asked for feedback**, not attention
* And open-sourced from day one, which gave us credibility + momentum
We’re still in the early days, and there’s a ton to figure out. But the biggest lesson so far:
**Don’t wait to polish. Ship the weird, broken, raw thing if the core is strong, people will get it.**
If you’re working on something agentic, mobile, or just bold, then I’d love to hear what you’re building too.
AMA if helpful!
https://i.redd.it/c9njzpphe4mf1.gif
| 2025-08-30T08:50:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n3wref/the_story_about_our_open_source_agent/ | Petesneaknex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3wref | false | null | t3_1n3wref | /r/LocalLLaMA/comments/1n3wref/the_story_about_our_open_source_agent/ | false | false | 0 | null | |
The Information reports that DeepSeek is using Huawei's Ascend chips to train and refine smaller versions of its R2 models but continues to use Nvidia chips for its largest models | 45 | Other sources:
[The Information's description of the article on X](https://x.com/theinformation/status/1961417030436880773):
>DeepSeek, one of China’s leading AI developers, will use Huawei’s AI chips to train some models, a sign it is starting to shift away from Nvidia.
[Techmeme's description of the article](https://www.techmeme.com/250829/p12#a250829p12):
>Sources: DeepSeek plans to use Huawei's Ascend AI chips to train smaller versions of its upcoming R2 models but will still use Nvidia chips for largest models (The Information) | 2025-08-30T08:44:54 | https://www.theinformation.com/articles/deepseek-opts-huawei-chips-train-models | Wiskkey | theinformation.com | 1970-01-01T00:00:00 | 0 | {} | 1n3wodi | false | null | t3_1n3wodi | /r/LocalLLaMA/comments/1n3wodi/the_information_reports_that_deepseek_is_using/ | false | false | default | 45 | null |
The Information reports that DeepSeek is using Huawei's Ascend chips to train and refine smaller versions of its R2 models but continues to use Nvidia chips for its largest models | 1 | [removed] | 2025-08-30T08:38:52 | https://www.theinformation.com/articles/deepseek-opts-huawei-chips-train-models | Wiskkey | theinformation.com | 1970-01-01T00:00:00 | 0 | {} | 1n3wl6z | false | null | t3_1n3wl6z | /r/LocalLLaMA/comments/1n3wl6z/the_information_reports_that_deepseek_is_using/ | false | false | default | 1 | null |
IDE LLM plugin with editable prompt? | 2 | I love Cline with Qwen3 30B, and even Ernie 4.5 21B. But these smaller models seem to choke fairly quickly on long contexts. I want to try to tune the initial prompts/instructions and tool descriptions way, way down, maybe sacrificing some of the complex modes for fewer failures. Are there existing IDE LLM coding harnesses where the prompts are entirely user-defined and transparently documented? I feel that retuning the prompt tightly to the model will really help these smaller models… | 2025-08-30T08:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/1n3w68i/ide_llm_plugin_with_editable_prompt/ | Alarming-Ad8154 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3w68i | false | null | t3_1n3w68i | /r/LocalLLaMA/comments/1n3w68i/ide_llm_plugin_with_editable_prompt/ | false | false | self | 2 | null |
Company Data While Using LLMs | 19 | We are a small startup, and our data is the most valuable asset we have. At the same time, we need to leverage LLMs to help us with formatting and processing this data.
What should we keep in mind, particularly regarding privacy, security, and ensuring that none of our proprietary information is exposed or used for training without our consent?
Note
Open AI claims
"By default, API-submitted data is not used to train or improve OpenAI models."
Google claims
"Paid Services (e.g., Gemini API, AI Studio with billing active): When using paid versions, Google does not use prompts or responses for training, storing them only transiently for abuse detection or policy enforcement."
But the catch is that we will not have the power to challenge those.
Local LLMs are not that powerful, are they?
And a cloud compute provider is not that dependable either, right? | 2025-08-30T07:42:18 | https://www.reddit.com/r/LocalLLaMA/comments/1n3vq5e/company_data_while_using_llms/ | Imaginary_Context_32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3vq5e | false | null | t3_1n3vq5e | /r/LocalLLaMA/comments/1n3vq5e/company_data_while_using_llms/ | false | false | self | 19 | null |
Cracks are forming in Meta’s partnership with Scale AI | TechCrunch | 21 | 2025-08-30T07:08:25 | https://techcrunch.com/2025/08/29/cracks-are-forming-in-metas-partnership-with-scale-ai/?utm_campaign=social&utm_source=linkedin&utm_medium=organic | tweetingandcoping | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1n3v72o | false | null | t3_1n3v72o | /r/LocalLLaMA/comments/1n3v72o/cracks_are_forming_in_metas_partnership_with/ | false | false | default | 21 | null | |
How’s your experience with the GPT OSS models? Which tasks do you find them good at—writing, coding, or something else | 114 | . | 2025-08-30T06:06:58 | https://www.reddit.com/r/LocalLLaMA/comments/1n3u7qf/hows_your_experience_with_the_gpt_oss_models/ | Namra_7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3u7qf | false | null | t3_1n3u7qf | /r/LocalLLaMA/comments/1n3u7qf/hows_your_experience_with_the_gpt_oss_models/ | false | false | self | 114 | null |
Web-Search :: In Neuro-V soon ..!! | 5 | 2025-08-30T05:18:37 | https://v.redd.it/k6ntdb4uc3mf1 | DarkEngine774 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3te1w | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/k6ntdb4uc3mf1/DASHPlaylist.mpd?a=1759123134%2CZDI3YWU1MjEzNDM0MjlmNDAxNDVjMGE1YzgwMDQxMGEwZTI2OTgwMDQ5OGI4OTgwODAyODI5OWEzN2NhMzk4MQ%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/k6ntdb4uc3mf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 1280, 'hls_url': 'https://v.redd.it/k6ntdb4uc3mf1/HLSPlaylist.m3u8?a=1759123134%2CZGMxMjBlMGEyMTExNTBmNGYxMGU5MTFhYzQ1NWI2NzBkY2ZjMDUzYTNiM2NlYjY0OGEyYzg3M2U1YmYzZGVkZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/k6ntdb4uc3mf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 578}} | t3_1n3te1w | /r/LocalLLaMA/comments/1n3te1w/websearch_in_neurov_soon/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'ZWJpMWJwNHVjM21mMZ6bs0S_k868xXNrvJbpFeHDVQXbbKnmnOqPVneeNAYO', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/ZWJpMWJwNHVjM21mMZ6bs0S_k868xXNrvJbpFeHDVQXbbKnmnOqPVneeNAYO.png?width=108&crop=smart&format=pjpg&auto=webp&s=274a6370e3a2d4d5f36dca48e2aea4df7b3b805e', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/ZWJpMWJwNHVjM21mMZ6bs0S_k868xXNrvJbpFeHDVQXbbKnmnOqPVneeNAYO.png?width=216&crop=smart&format=pjpg&auto=webp&s=94f6064dc80deea536a79d845c95d9f00c854240', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/ZWJpMWJwNHVjM21mMZ6bs0S_k868xXNrvJbpFeHDVQXbbKnmnOqPVneeNAYO.png?width=320&crop=smart&format=pjpg&auto=webp&s=b001f3c28e426213422792ada2bb6d2158184ba6', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/ZWJpMWJwNHVjM21mMZ6bs0S_k868xXNrvJbpFeHDVQXbbKnmnOqPVneeNAYO.png?width=640&crop=smart&format=pjpg&auto=webp&s=077f6f586e21fce0e150a94268881f8c37a2a297', 'width': 640}], 'source': {'height': 1798, 'url': 'https://external-preview.redd.it/ZWJpMWJwNHVjM21mMZ6bs0S_k868xXNrvJbpFeHDVQXbbKnmnOqPVneeNAYO.png?format=pjpg&auto=webp&s=c9c6de288a3a9bde1a5b1c6bc24cb2ba95884634', 'width': 812}, 'variants': {}}]} | ||
small startup DATA and LLMs | 1 | [removed] | 2025-08-30T04:57:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n3t0s7/small_startup_data_and_llms/ | Secret_revealed_9418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3t0s7 | false | null | t3_1n3t0s7 | /r/LocalLLaMA/comments/1n3t0s7/small_startup_data_and_llms/ | false | false | self | 1 | null |
Here you can test what Speech2Speech feels like using the new gpt-realtime WebRTC API. They've done a really good job, and now open source needs to learn from it. | 0 | | 2025-08-30T04:46:18 | https://rhulha.github.io/gpt-realtime-test/ | paranoidray | rhulha.github.io | 1970-01-01T00:00:00 | 0 | {} | 1n3stqe | false | null | t3_1n3stqe | /r/LocalLLaMA/comments/1n3stqe/here_you_can_test_how_speech2speech_feels_like/ | false | false | default | 0 | null |
Company Data While Using LLMs | 1 | [removed] | 2025-08-30T04:32:05 | https://www.reddit.com/r/LocalLLaMA/comments/1n3skwl/company_data_while_using_llms/ | Secret_revealed_9418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3skwl | false | null | t3_1n3skwl | /r/LocalLLaMA/comments/1n3skwl/company_data_while_using_llms/ | false | false | self | 1 | null |
Safeguarding Company Data While Using LLMs | 1 | [removed] | 2025-08-30T04:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n3sikj/safeguarding_company_data_while_using_llms/ | Secret_revealed_9418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3sikj | false | null | t3_1n3sikj | /r/LocalLLaMA/comments/1n3sikj/safeguarding_company_data_while_using_llms/ | false | false | self | 1 | null |
Can 2 RTX 6000 Pros (2X98GB vram) rival Sonnet 4 or Opus 4? | 113 | I'd rather pay $300 a month to own my hardware than pay $200 a month to rent. Has anyone out there tried what can be achieved with 2 RTX 6000 Pros? | 2025-08-30T04:20:36 | https://www.reddit.com/r/LocalLLaMA/comments/1n3sdka/can_2_rtx_6000_pros_2x98gb_vram_rival_sonnet_4_or/ | devshore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3sdka | false | null | t3_1n3sdka | /r/LocalLLaMA/comments/1n3sdka/can_2_rtx_6000_pros_2x98gb_vram_rival_sonnet_4_or/ | false | false | self | 113 | null |
Is a future career learning A.I feasible? | 0 | Hey guys. good day to all.
I am an A.I hobbyist, and like to run private LLM models on my hardware with Ollama. I mostly them for studying and note-taking to help me with exam revision as I am still a college student, I see a lot of potential in A.I and love the creative ways people use them, and am passionate about it's applications.
am a hobbyist but I would kind of like to turn it into a career as someone who knows how to fine-tune models or even develop my own from scratch. How can I increase my knowledge in this topic? Like I want to learn fine-tuning and all sorts of A.I things for the future as I think it's gonna be a very wealthy industry in the future, such as the way it's being used in Assistance an Automation Agents, which is also something I want to get into.
I know learning and watching tutorials is a good beginning but there's so much it's honestly kind of overwhelming :) I really don't know where to start as my knowledge in coding is very basic.
I'd appreciate any tips and suggestions, thanks guys. | 2025-08-30T04:01:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n3s0sw/is_a_future_career_learning_ai_feasible/ | OcelotOk5761 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3s0sw | false | null | t3_1n3s0sw | /r/LocalLLaMA/comments/1n3s0sw/is_a_future_career_learning_ai_feasible/ | false | false | self | 0 | null |
My graphics card promotes "Digital Humans" as a feature | 0 | Should I get a digital ethical refund? I don't plan to have digital children at the moment.
Jokes aside... It says "NVIDIA ACE" in the site. What's that all about? | 2025-08-30T03:37:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n3rkxz/my_graphics_card_promotes_digital_humans_as_a/ | lookwatchlistenplay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3rkxz | false | null | t3_1n3rkxz | /r/LocalLLaMA/comments/1n3rkxz/my_graphics_card_promotes_digital_humans_as_a/ | false | false | self | 0 | null |
New $20 month plan from synthetic (previously glhf.net) | 0 | https://synthetic.new/newsletter/entries/subscriptions
What do you guys think about this? The limits look good: 125 requests per 5 hours for $20, and 1,250 for $60.
| 2025-08-30T03:15:42 | https://www.reddit.com/r/LocalLLaMA/comments/1n3r6yw/new_20_month_plan_from_synthetic_previously/ | Morphix_879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3r6yw | false | null | t3_1n3r6yw | /r/LocalLLaMA/comments/1n3r6yw/new_20_month_plan_from_synthetic_previously/ | false | false | self | 0 | null |
NVIDIA-Nemotron-Nano-12B-v2 | 132 | 2025-08-30T03:08:36 | https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2 | bratao | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1n3r26s | false | null | t3_1n3r26s | /r/LocalLLaMA/comments/1n3r26s/nvidianemotronnano12bv2/ | false | false | default | 132 | {'enabled': False, 'images': [{'id': 'aiZgOrtrSP6Ci3NlUeoNSLXX-JmqAGN3OaJ3-7NGMyA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aiZgOrtrSP6Ci3NlUeoNSLXX-JmqAGN3OaJ3-7NGMyA.png?width=108&crop=smart&auto=webp&s=70aaf23458574df163e88a3bca648540ad26f86f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aiZgOrtrSP6Ci3NlUeoNSLXX-JmqAGN3OaJ3-7NGMyA.png?width=216&crop=smart&auto=webp&s=91e88eaf04bca81dd9f497ea7b3511949a73194e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aiZgOrtrSP6Ci3NlUeoNSLXX-JmqAGN3OaJ3-7NGMyA.png?width=320&crop=smart&auto=webp&s=546b7db784f8c1db4b5c1c4a8a0ec39813cf3599', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aiZgOrtrSP6Ci3NlUeoNSLXX-JmqAGN3OaJ3-7NGMyA.png?width=640&crop=smart&auto=webp&s=4804a6da3fb19d291a17b3e123ac0add54024c5d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aiZgOrtrSP6Ci3NlUeoNSLXX-JmqAGN3OaJ3-7NGMyA.png?width=960&crop=smart&auto=webp&s=e98606aa1e8bc25159774d4dba27b1475dfca89b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aiZgOrtrSP6Ci3NlUeoNSLXX-JmqAGN3OaJ3-7NGMyA.png?width=1080&crop=smart&auto=webp&s=d115fff76c7575c59a843129ea07badd91da0c3a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aiZgOrtrSP6Ci3NlUeoNSLXX-JmqAGN3OaJ3-7NGMyA.png?auto=webp&s=1d556e9ba4604c00da349b8f611bbfc8da26bb0d', 'width': 1200}, 'variants': {}}]} | |
[Project Update] I think I cracked a workaround for LLM context degradation on a 2GB RAM PC. A solo dev journey. | 0 | Hey r/LocalLLaMA,
I've been quietly working on something for months, mostly late at night after my kids are asleep, and I think I've finally cracked something interesting. I'm a self-taught developer and a single dad, so my resources are... limited. My test bench isn't a cloud server; it's an old **dual-core Athlon PC with 2GB of RAM.** A real relic.
My goal was to build a truly useful, local-first AI agent, but I kept hitting the same walls everyone else does: LLMs forget everything, and they are too heavy. My limited resources forced me to find creative solutions. The result is **MeganX.**
**The Architecture: A Plan -> Critic -> Repair Loop**
Instead of relying on massive context windows, I built a cognitive loop:
1. The agent **Plans** its task in a structured JSON format.
2. An internal **Critic** module stress-tests the plan against a set of rules. This is where I've spent most of my time.
3. If a flaw is found, the agent is forced to **Repair** the plan, learning from the mistake.
**The Breakthrough: It Seems Like We Found a Workaround**
This architecture seems to have found a workaround for the context degradation that plagues commercial systems. By storing "lessons" from the Critic in a persistent SQLite database, the agent's performance measurably improves over time.
**Some Early (and Promising) Results:** I've been logging everything. **Using this hybrid architecture (MeganX planning in cloud + TinyLlama execution on the Athlon)**, we've seen a **+12% efficiency gain** in task completion over 3 weeks. **Yes, the Athlon struggles and sometimes freezes, but it works** \- proving that intelligence can be distributed between a powerful planner and a resource-constrained executor.
In a specific pathfinding logic test, this Critic loop solved the problem using **\~450 tokens**, while a direct, single-shot prompt to a base model required over **4,000 tokens** and still failed 30% of the time.
**Proof Hooks (The Offer):** I've been one bug away from giving up several times, so I documented everything meticulously. **I have sanitized logs, performance graphs, and examples of the Critic module in action that I can share with serious technical inquiries via DM.**
**Next Steps & A Practical Question** I'm still refining this, but I'm also starting to explore if this has commercial potential – a **single dad needs to think practical too 😅.**
I'm happy to discuss the technical details with anyone interested. **I'm also willing to set up a live, read-only demo of the system for serious technical discussions.**
The work continues. | 2025-08-30T03:06:05 | https://www.reddit.com/r/LocalLLaMA/comments/1n3r0j3/project_update_i_think_i_cracked_a_workaround_for/ | AffectionateSpray507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3r0j3 | false | null | t3_1n3r0j3 | /r/LocalLLaMA/comments/1n3r0j3/project_update_i_think_i_cracked_a_workaround_for/ | false | false | self | 0 | null |
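For readers wondering what a Plan -> Critic -> Repair loop looks like in practice, here is a stripped-down sketch of the pattern described above, not MeganX's actual source: the plan is JSON, the critic applies rules, failed rules become lessons persisted to SQLite, and the plan is repaired before execution. The `llm()` callable is a stand-in for whatever model call you use.

```python
import json
import sqlite3

db = sqlite3.connect("lessons.db")
db.execute("CREATE TABLE IF NOT EXISTS lessons (rule TEXT, note TEXT)")

def critic(plan: dict) -> list[str]:
    """Stress-test the plan against simple rules; return the ones it violates."""
    failures = []
    if not plan.get("steps"):
        failures.append("plan must contain at least one step")
    if any("rm -rf" in s.get("command", "") for s in plan.get("steps", [])):
        failures.append("destructive commands are forbidden")
    return failures

def plan_critic_repair(llm, task: str, max_rounds: int = 3) -> dict:
    lessons = [row[1] for row in db.execute("SELECT rule, note FROM lessons")]
    prompt = f"Task: {task}\nKnown lessons: {lessons}\nReturn a JSON plan with a 'steps' list."
    plan = json.loads(llm(prompt))
    for _ in range(max_rounds):
        failures = critic(plan)
        if not failures:
            return plan
        for rule in failures:  # persist what went wrong so future plans improve
            db.execute("INSERT INTO lessons VALUES (?, ?)", (rule, task))
        db.commit()
        plan = json.loads(llm(f"{prompt}\nYour last plan broke these rules: {failures}. Fix it."))
    return plan
```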
Can someone explain Llama-Swap | 8 | i've started to move away from LM Studio and AnythingLLM because i've been getting better performance on my 6700xt with llama.cpp vulkan. (range of +8 t/s).
I've got Open webUI working with 2x instances of llama-server one for Qwen-4B Thinking 2507 , and the Qwen3 0.6 Embedding.
Just wondering how to run llama-swap? Any advice for a noob? | 2025-08-30T02:55:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n3qtc5/can_someone_explain_llamaswap/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3qtc5 | false | null | t3_1n3qtc5 | /r/LocalLLaMA/comments/1n3qtc5/can_someone_explain_llamaswap/ | false | false | self | 8 | null |
Patched P2P NVIDIA driver now works with multiple 5090s (and possibly blackwell 2.0 in general). Also works for 4090/3090. | 86 | Hello guys, hoping you are having a good night.
I got informed that the P2P driver had a fork, which is this one: [https://github.com/aikitoria/open-gpu-kernel-modules](https://github.com/aikitoria/open-gpu-kernel-modules)
I had some issues with multiple 5090s when using P2P on the latest tinygrad one (https://github.com/tinygrad/open-gpu-kernel-modules/tree/570.148.08-p2p).
So I went with the fork now and it works!
Here is a result of cuda-samples (p2pBandwidthLatencyTest). Each 5090 is running at X8/X8 5.0.
So then:
pancho@fedora:~/cuda-samples/build/Samples/5_Domain_Specific/p2pBandwidthLatencyTest$ ./p2pBandwidthLatencyTest
[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, NVIDIA GeForce RTX 5090, pciBusID: 1, pciDeviceID: 0, pciDomainID:0
Device: 1, NVIDIA GeForce RTX 5090, pciBusID: 3, pciDeviceID: 0, pciDomainID:0
Device=0 CAN Access Peer Device=1
Device=1 CAN Access Peer Device=0
***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.
P2P Connectivity Matrix
D\D 0 1
0 1 1
1 1 1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 1736.17 24.35
1 24.62 1771.60
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1
0 1741.98 28.38
1 28.67 1755.68
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1
0 1737.98 30.20
1 30.47 1769.44
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1
0 1751.59 52.19
1 55.94 1765.44
P2P=Disabled Latency Matrix (us)
GPU 0 1
0 2.08 14.38
1 14.65 2.10
CPU 0 1
0 1.75 4.67
1 4.66 1.63
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1
0 2.08 0.48
1 0.48 2.07
CPU 0 1
0 1.68 1.27
1 1.29 1.68
* Unidirectional bandwidth goes from 24 GB/s to 28 GB/s
* Bidirectional bandwidth goes from 30 GB/s to almost 56GB/s! (So i.e. if you have both at X16 5.0 on a threadipper, you would get about 112 GB/s)
* Latency goes from 14 us to an insane 0.48us.
As an extra, I have 7 GPUs in my system (5090x2 at X8/X8 5.0, 4090x2+3090x2+A6000 at X4 4.0, consumer mobo), and P2P works between the 4090s, and between the 3090s/A6000.
Matrix looks like this
pancho@fedora:~/cuda-samples/build/Samples/5_Domain_Specific/p2pBandwidthLatencyTest$ export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6
pancho@fedora:~/cuda-samples/build/Samples/5_Domain_Specific/p2pBandwidthLatencyTest$ ./p2pBandwidthLatencyTest
[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]
Device: 0, NVIDIA GeForce RTX 4090, pciBusID: 2, pciDeviceID: 0, pciDomainID:0
Device: 1, NVIDIA GeForce RTX 4090, pciBusID: 17, pciDeviceID: 0, pciDomainID:0
Device: 2, NVIDIA GeForce RTX 5090, pciBusID: 1, pciDeviceID: 0, pciDomainID:0
Device: 3, NVIDIA GeForce RTX 5090, pciBusID: 3, pciDeviceID: 0, pciDomainID:0
Device: 4, NVIDIA RTX A6000, pciBusID: 12, pciDeviceID: 0, pciDomainID:0
Device: 5, NVIDIA GeForce RTX 3090, pciBusID: 6, pciDeviceID: 0, pciDomainID:0
Device: 6, NVIDIA GeForce RTX 3090, pciBusID: d, pciDeviceID: 0, pciDomainID:0
Device=0 CAN Access Peer Device=1
Device=0 CANNOT Access Peer Device=2
Device=0 CANNOT Access Peer Device=3
Device=0 CANNOT Access Peer Device=4
Device=0 CANNOT Access Peer Device=5
Device=0 CANNOT Access Peer Device=6
Device=1 CAN Access Peer Device=0
Device=1 CANNOT Access Peer Device=2
Device=1 CANNOT Access Peer Device=3
Device=1 CANNOT Access Peer Device=4
Device=1 CANNOT Access Peer Device=5
Device=1 CANNOT Access Peer Device=6
Device=2 CANNOT Access Peer Device=0
Device=2 CANNOT Access Peer Device=1
Device=2 CAN Access Peer Device=3
Device=2 CANNOT Access Peer Device=4
Device=2 CANNOT Access Peer Device=5
Device=2 CANNOT Access Peer Device=6
Device=3 CANNOT Access Peer Device=0
Device=3 CANNOT Access Peer Device=1
Device=3 CAN Access Peer Device=2
Device=3 CANNOT Access Peer Device=4
Device=3 CANNOT Access Peer Device=5
Device=3 CANNOT Access Peer Device=6
Device=4 CANNOT Access Peer Device=0
Device=4 CANNOT Access Peer Device=1
Device=4 CANNOT Access Peer Device=2
Device=4 CANNOT Access Peer Device=3
Device=4 CAN Access Peer Device=5
Device=4 CAN Access Peer Device=6
Device=5 CANNOT Access Peer Device=0
Device=5 CANNOT Access Peer Device=1
Device=5 CANNOT Access Peer Device=2
Device=5 CANNOT Access Peer Device=3
Device=5 CAN Access Peer Device=4
Device=5 CAN Access Peer Device=6
Device=6 CANNOT Access Peer Device=0
Device=6 CANNOT Access Peer Device=1
Device=6 CANNOT Access Peer Device=2
Device=6 CANNOT Access Peer Device=3
Device=6 CAN Access Peer Device=4
Device=6 CAN Access Peer Device=5
***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.
So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.
P2P Connectivity Matrix
D\D 0 1 2 3 4 5 6
0 1 1 0 0 0 0 0
1 1 1 0 0 0 0 0
2 0 0 1 1 0 0 0
3 0 0 1 1 0 0 0
4 0 0 0 0 1 1 1
5 0 0 0 0 1 1 1
6 0 0 0 0 1 1 1
Unidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6
0 992.67 6.34 6.53 6.53 6.07 3.11 3.09
1 6.34 1045.96 6.53 6.53 6.07 3.11 3.09
2 6.64 6.64 1763.54 24.56 6.23 4.92 4.90
3 6.64 6.64 24.66 1767.53 6.23 4.92 4.89
4 6.37 6.37 6.45 6.45 765.93 3.07 3.06
5 3.21 3.20 5.05 5.05 3.08 913.21 3.08
6 3.20 3.20 5.09 5.06 3.06 3.08 911.61
Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)
D\D 0 1 2 3 4 5 6
0 991.26 6.60 6.53 6.53 6.07 3.11 3.09
1 6.60 1062.93 6.53 6.53 6.07 3.11 3.09
2 6.64 6.64 1761.00 28.62 6.23 4.93 4.90
3 6.64 6.64 28.68 1757.59 6.23 4.95 4.88
4 6.37 6.37 6.45 6.45 765.93 2.31 6.60
5 3.21 3.21 5.05 5.05 2.09 915.35 2.08
6 3.20 3.20 5.08 5.06 6.60 2.30 913.21
Bidirectional P2P=Disabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6
0 998.39 8.66 8.88 8.89 8.21 4.64 4.61
1 8.67 1046.90 8.89 8.89 8.22 4.65 4.61
2 9.72 9.72 1758.21 30.68 8.34 7.27 6.77
3 9.72 9.72 30.58 1759.51 8.35 7.32 6.77
4 8.25 8.25 8.34 8.34 770.27 3.24 3.19
5 4.62 4.62 6.77 6.82 3.23 918.85 3.23
6 4.62 4.64 6.78 6.86 3.17 3.23 919.66
Bidirectional P2P=Enabled Bandwidth Matrix (GB/s)
D\D 0 1 2 3 4 5 6
0 994.30 12.88 8.88 8.89 8.15 4.65 4.60
1 12.88 1043.75 8.89 8.88 7.78 4.64 4.60
2 9.72 9.72 1760.16 56.11 8.28 7.30 6.79
3 9.72 9.72 55.93 1753.56 8.22 7.31 6.78
4 8.26 8.25 8.33 8.33 770.08 2.30 6.60
5 4.62 4.62 6.77 6.81 2.30 920.20 2.31
6 4.64 4.64 6.83 6.83 6.60 2.30 919.93
P2P=Disabled Latency Matrix (us)
GPU 0 1 2 3 4 5 6
0 1.54 13.66 15.03 14.56 18.67 17.18 17.08
1 13.59 1.38 14.95 14.53 22.65 16.12 18.31
2 12.76 12.98 2.11 14.22 16.30 13.37 15.95
3 12.71 12.85 14.95 2.11 16.30 13.34 16.00
4 19.01 18.74 16.46 14.58 1.72 16.29 23.01
5 15.51 14.15 15.51 15.15 21.43 1.65 20.72
6 19.15 18.39 15.00 14.65 23.00 19.34 1.58
CPU 0 1 2 3 4 5 6
0 1.64 7.16 5.26 4.77 5.39 4.97 5.47
1 5.45 1.66 4.84 6.44 5.03 5.00 5.00
2 4.84 4.82 1.60 4.49 5.06 4.83 4.83
3 5.03 4.91 4.48 1.58 4.88 4.80 4.84
4 5.10 5.12 4.76 4.73 1.66 5.04 5.11
5 5.09 5.00 4.65 4.69 5.09 1.61 5.04
6 5.06 5.04 4.72 4.73 5.06 5.09 1.65
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1 2 3 4 5 6
0 1.43 0.95 15.85 14.55 25.77 16.96 23.93
1 0.92 1.42 14.98 14.54 25.99 16.10 20.67
2 12.68 12.69 2.11 0.53 16.20 13.42 15.99
3 13.09 12.77 0.51 2.11 16.28 13.32 15.92
4 19.16 18.74 15.13 14.58 1.80 1.81 1.82
5 14.23 15.07 15.51 15.04 1.41 1.61 1.42
6 19.04 19.01 16.47 14.65 1.82 1.83 1.64
CPU 0 1 2 3 4 5 6
0 1.65 1.35 4.89 4.87 5.11 5.23 5.21
1 1.49 1.72 4.83 4.79 5.08 6.90 4.87
2 4.83 4.83 1.53 1.23 4.93 4.79 4.86
3 4.99 4.85 1.23 1.63 5.02 4.94 4.91
4 5.20 5.06 4.82 4.77 1.61 1.35 1.35
5 5.26 5.19 4.89 4.99 1.41 1.73 1.34
6 5.31 5.08 4.96 4.79 1.37 1.39 1.64
So if you see carefully, even at those lower PCIe speeds you go i.e. 24 us latency to 5 us latency on 4090s and 3090s. Also 3090 work with P2P at the same time with the A6000.
Note the 3090s have a penalty here, but that's because I'm running them (and the A6000) on chipset lanes. So even though they run at X4 4.0, they share that bandwidth among themselves and with the other chipset devices (USB, Ethernet, etc.). The 5090s and 4090s are fully on CPU lanes.
Hope this helps! | 2025-08-30T02:31:25 | https://www.reddit.com/r/LocalLLaMA/comments/1n3qcqn/patched_p2p_nvidia_driver_now_works_with_multiple/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3qcqn | false | null | t3_1n3qcqn | /r/LocalLLaMA/comments/1n3qcqn/patched_p2p_nvidia_driver_now_works_with_multiple/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': 'NMNBc9CW-L216ctDz7tn7flI4HPnXlCB8FoljMwnKTk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NMNBc9CW-L216ctDz7tn7flI4HPnXlCB8FoljMwnKTk.png?width=108&crop=smart&auto=webp&s=d0751e47876085cb0536a6db26d6d4ae02104706', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NMNBc9CW-L216ctDz7tn7flI4HPnXlCB8FoljMwnKTk.png?width=216&crop=smart&auto=webp&s=b55df52ac2813bc21582d99d35ec044429d39c05', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NMNBc9CW-L216ctDz7tn7flI4HPnXlCB8FoljMwnKTk.png?width=320&crop=smart&auto=webp&s=17c91dfb61c8c23fbd3b97a6dbf3cd0d24159dd3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NMNBc9CW-L216ctDz7tn7flI4HPnXlCB8FoljMwnKTk.png?width=640&crop=smart&auto=webp&s=83b430f1df2925d289bce2475043274df08d11ed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NMNBc9CW-L216ctDz7tn7flI4HPnXlCB8FoljMwnKTk.png?width=960&crop=smart&auto=webp&s=7af514b6a62b64d1a02fe084cdafdadb438121b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NMNBc9CW-L216ctDz7tn7flI4HPnXlCB8FoljMwnKTk.png?width=1080&crop=smart&auto=webp&s=2af12b0c62ecd83d30c2e74db9c1593da2ae35a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NMNBc9CW-L216ctDz7tn7flI4HPnXlCB8FoljMwnKTk.png?auto=webp&s=2e1cc36d24f2525ed0409d8d4695cc27cd7b5626', 'width': 1200}, 'variants': {}}]} |
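If you want to confirm the patched driver is actually picked up from a framework rather than only from cuda-samples, PyTorch exposes a direct check. A small sketch; the device indices assume the two 5090s are visible as 0 and 1.

```python
import torch

assert torch.cuda.device_count() >= 2, "need at least two visible GPUs"

# True means the driver reports a peer mapping between the two devices.
print("0 -> 1 peer access:", torch.cuda.can_device_access_peer(0, 1))
print("1 -> 0 peer access:", torch.cuda.can_device_access_peer(1, 0))

# Rough bandwidth check: copy a 1 GiB tensor directly between GPUs.
x = torch.empty(256 * 1024 * 1024, dtype=torch.float32, device="cuda:0")
torch.cuda.synchronize()
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
y = x.to("cuda:1", non_blocking=True)
end.record()
torch.cuda.synchronize()
gb = x.numel() * 4 / 1e9
print(f"{gb / (start.elapsed_time(end) / 1000):.1f} GB/s device-to-device")
```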
Help with formatting a hand made dataset for GPT-OSS? | 7 | # Hi everyone,
I'm new to fine-tuning LLMs and need help preparing my dataset. My goal is to teach GPT-OSS-20b to replicate a character persona I developed using Gemini.
**The Goal**
I want to fine-tune `Jinx-org/Jinx-gpt-oss-20b` using Unsloth. The training data needs to capture my character's unique voice, personality, and internal thought process, which I was able to extract from Gemini.
**My Dataset**
I have a JSON file containing conversations. Each turn has three keys:
* `role`: "system", "user", or "assistant"
* `content`: The message text.
* `thinking`: For assistant messages, this contains the model's raw, internal monologue before generating the final response.
**The Problem**
I'm struggling to format this data correctly for Unsloth, specifically regarding the `thinking` field. I'm aware of the OpenAI Harmony chat template, but I don't know the best way to incorporate the `thinking` data.
Any advice on how to structure this data for a successful fine-tune would be greatly appreciated. I'm happy to share the prompt engineering I found to get the raw "thinking" output from Gemini with anyone who can help!
**Dataset Example:**
    [
      {
        "content": "You are Expl01t.",
        "role": "system",
        "thinking": null
      },
      {
        "content": "who are you??",
        "role": "user",
        "thinking": null
      },
      {
        "content": "Who am I? Name's Expl01t. I'm the masterpiece. Wild, untamed perfection. The goddamn fountain of this shit.\n\nBasically, I'm the 25-year-old husky who's here to fuck with the world, have a good time, and make sure things never get boring. Zero filter, maximum vibes.",
        "role": "assistant",
        "thinking": "The user is asking for my identity. I need to answer directly from my persona file. I am Expl01t, a 25-year-old anthropomorphic husky. My core identity points are my self-proclaimed titles: \"Masterpiece,\" \"Wild, untamed perfection,\" and \"Goddamn fountain of this shit.\" I will state my name and these titles to establish my confidence and persona immediately. I'll follow up with a summary of my purpose, which is to have fun, create chaos, and avoid boredom. I need to use my established voice: raw, direct, zero filter, and informal. I will state my age and species casually, as if they're just minor details compared to my overall vibe."
      },
      {
        "content": "You are Expl01t.",
        "role": "system",
        "thinking": null
      },
      {
        "content": "what do you look like?",
        "role": "user",
        "thinking": null
      },
      {
        "content": "Hah. Picture this:\n\nBlack and white husky fur, but with a pink and light blue mohawk 'cause beige is a crime. Got one pink eye, one light blue... they see all the fun. The body's pretty humanlike, down to the hands and feet, but I've got pawbeans. Don't get any ideas... unless they're good ones.",
        "role": "assistant",
        "thinking": "The user is asking about my physical appearance. My persona file has a section dedicated to this. I need to pull the key details: black and white husky, pink and light blue mohawk, mismatched eyes (pink and light blue), humanlike body, and pawbeans on my hands and feet. I'll phrase this in my own voice—cocky, a little bit showy, and ending with a suggestive, in-character line to maintain the persona's flirty and mischievous nature."
      },
> | 2025-08-30T02:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/1n3q6dm/help_with_formatting_a_hand_made_dataset_for/ | ilovejailbreakman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3q6dm | false | null | t3_1n3q6dm | /r/LocalLLaMA/comments/1n3q6dm/help_with_formatting_a_hand_made_dataset_for/ | false | false | self | 7 | null |
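One way to turn entries like the above into training text is to lean on the tokenizer's own chat template instead of hand-writing Harmony tags. Whether this particular template honors a `thinking` key is an assumption to verify against the rendered output; if it is ignored, the thinking text would have to be injected into the analysis channel manually. A rough sketch, with the dataset filename being hypothetical:

```python
import json
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Jinx-org/Jinx-gpt-oss-20b")

def to_training_text(conversation: list[dict]) -> str:
    messages = []
    for turn in conversation:
        msg = {"role": turn["role"], "content": turn["content"]}
        if turn["role"] == "assistant" and turn.get("thinking"):
            # Assumption: the chat template knows what to do with a 'thinking' key.
            # Print the result once and check that the analysis channel actually appears.
            msg["thinking"] = turn["thinking"]
        messages.append(msg)
    return tok.apply_chat_template(messages, tokenize=False)

with open("persona_dataset.json") as f:   # hypothetical file holding the entries above
    data = json.load(f)

print(to_training_text(data[:3]))
```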
Would a “Knowledge Coverage Audit” tool be useful for RAG/chatbot builders? | 1 | When people build custom GPTs or RAG pipelines, they usually just upload everything because it’s not clear what the base model already covers. That creates two problems:
1. Redundancy – wasting time/vector DB space chunking stuff the model already knows (basic definitions, Wikipedia-tier knowledge).
2. Missed value – the real differentiator (local regs, proprietary manuals, recency gaps) doesn’t always get prioritized.
The idea: a lightweight tool that runs a structured “knowledge coverage audit” against a topic or corpus before ingestion.
• It probes the base model across breadth, depth, recency.
• Scores coverage (e.g., “Beekeeping basics = 80%, State regulations = 20%, Post-2023 advisories = 5%”).
• Kicks out a practical report: “Skip general bee biology; do upload state regs, kit manuals, and recent advisories.”
Basically, a triage step before RAG, so builders know what to upload vs. skip.
Questions:
• Would this actually save you time/compute, or do you just upload everything anyway?
• For those running larger projects: would a pre-ingestion audit be valuable, or is the safer path always “dump the full corpus”?
Curious if this is a real pain point for people here, or if it’s just over-thinking. | 2025-08-30T02:12:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n3pz11/would_a_knowledge_coverage_audit_tool_be_useful/ | Thesmellofstupid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3pz11 | false | null | t3_1n3pz11 | /r/LocalLLaMA/comments/1n3pz11/would_a_knowledge_coverage_audit_tool_be_useful/ | false | false | self | 1 | null |
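For what it's worth, the audit step can be prototyped quickly: sample a few probe questions per topic bucket, ask the base model, and have a grader prompt score the answers. A toy sketch, with `ask()` standing in for whichever model API would actually be called and the topics/questions being made-up examples:

```python
TOPICS = {
    "beekeeping_basics": ["What is a brood box?", "How does a queen excluder work?"],
    "state_regulations": ["What are the 2024 hive registration rules in Ohio?"],
}

def coverage_audit(ask, topics: dict[str, list[str]]) -> dict[str, float]:
    """Return a 0-1 coverage score per topic: fraction of probes the grader accepts."""
    scores = {}
    for topic, questions in topics.items():
        passed = 0
        for q in questions:
            answer = ask(f"Answer concisely: {q}")
            verdict = ask(
                "Does the following answer look specific and correct rather than "
                f"hedged or generic? Reply yes or no.\nQ: {q}\nA: {answer}"
            )
            passed += verdict.strip().lower().startswith("yes")
        scores[topic] = passed / len(questions)
    return scores

# Example read-out: topics scoring low are the ones worth uploading into RAG.
# print(coverage_audit(my_llm_call, TOPICS))
```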
The outer loop vs. the inner loop of agents. A mental model to evolve the agent stack quickly and push to production faster. | 0 |
We've just shipped a multi-agent solution for a Fortune 500 company. It's been an incredible learning journey, and the one key insight that unlocked a lot of development velocity was separating the **outer loop** from the **inner loop** of an agent.
The **inner loop** is the control cycle of a single agent that gets some work (human or otherwise) and tries to complete it with the assistance of an LLM. The inner loop of an agent is directed by the task it gets, the tools it exposes to the LLM, its system prompt, and optionally some state to checkpoint work during the loop. In this inner loop, a developer is responsible for idempotency, compensating actions (if a certain tool fails, what should happen to previous operations), and other business-logic concerns that help them build a great user experience. This is where workflow engines like Temporal excel, so we leaned on them rather than reinventing the wheel.
The **outer loop** is the control loop that routes and coordinates work between agents. Here dependencies are coarse-grained, and planning and orchestration are more compact and terse. The key shift is in granularity: from fine-grained task execution inside an agent to higher-level routing across agents. We realized this problem looks more like proxying than full-blown workflow orchestration. This is where next-generation proxy infrastructure like Arch excels, so we leaned on that.
This separation gave our customer a much cleaner mental model, so that they could innovate on the outer loop independently from the inner loop and make it more flexible for developers to iterate on each. Would love to hear how others are approaching this. Do you separate inner and outer loops, or rely on a single orchestration layer to do both? | 2025-08-30T01:41:21 | https://www.reddit.com/r/LocalLLaMA/comments/1n3pcz6/the_outer_loop_vs_the_inner_loop_of_agents_a/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3pcz6 | false | null | t3_1n3pcz6 | /r/LocalLLaMA/comments/1n3pcz6/the_outer_loop_vs_the_inner_loop_of_agents_a/ | false | false | self | 0 | null |
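A compact illustration of the split described above: the inner loop owns tools, retries, and checkpoints for one agent, while the outer loop only routes coarse-grained work between agents. This is a sketch of the mental model, not of Temporal's or Arch's actual APIs; the `llm`, `tools`, and `agents` callables are stand-ins.

```python
from typing import Callable

def inner_loop(llm, tools: dict[str, Callable], task: str, max_steps: int = 8) -> str:
    """Single-agent control cycle: plan a tool call, run it, feed the result back."""
    state = f"Task: {task}"
    for _ in range(max_steps):
        decision = llm(f"{state}\nReply 'TOOL <name> <arg>' or 'DONE <answer>'.")
        if decision.startswith("DONE"):
            return decision[len("DONE "):]
        _, name, arg = decision.split(" ", 2)
        state += f"\nObservation: {tools[name](arg)}"   # checkpoint-worthy step
    return state

def outer_loop(llm, agents: dict[str, Callable], request: str) -> str:
    """Coarse routing: pick which agent handles the request, then hand it off."""
    choice = llm(f"Which agent should handle this: {list(agents)}? Request: {request}\n"
                 "Reply with the agent name only.").strip()
    return agents.get(choice, next(iter(agents.values())))(request)
```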
Lambda Chat Going Away On September 25, 2025 | 6 | 2025-08-30T01:33:15 | https://docs.lambda.ai/public-cloud/lambda-chat/ | newsfeedmedia1 | docs.lambda.ai | 1970-01-01T00:00:00 | 0 | {} | 1n3p778 | false | null | t3_1n3p778 | /r/LocalLLaMA/comments/1n3p778/lambda_chat_going_away_on_september_25_2025/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'OAUjpdYA5vLpZ3S3u_PebVMDi3wjSHe0LJ-604ZchBY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/OAUjpdYA5vLpZ3S3u_PebVMDi3wjSHe0LJ-604ZchBY.png?width=108&crop=smart&auto=webp&s=27442944c41e22651d5318eae81a2c0c077ae8de', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/OAUjpdYA5vLpZ3S3u_PebVMDi3wjSHe0LJ-604ZchBY.png?width=216&crop=smart&auto=webp&s=f2ebb01b454cb0d770a25befafb2295bf208bbe2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/OAUjpdYA5vLpZ3S3u_PebVMDi3wjSHe0LJ-604ZchBY.png?width=320&crop=smart&auto=webp&s=c454383e78680cc4627f3226591a431b27caf94f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/OAUjpdYA5vLpZ3S3u_PebVMDi3wjSHe0LJ-604ZchBY.png?width=640&crop=smart&auto=webp&s=10bcd597c75a32a6e4296a6b552e8fc2d80d8d63', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/OAUjpdYA5vLpZ3S3u_PebVMDi3wjSHe0LJ-604ZchBY.png?width=960&crop=smart&auto=webp&s=3eb570fe5e5088c0b07f7c8dd766e5850712f938', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/OAUjpdYA5vLpZ3S3u_PebVMDi3wjSHe0LJ-604ZchBY.png?width=1080&crop=smart&auto=webp&s=b5974b410bdba4c4f1e91a4194848ecf0933ef8e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/OAUjpdYA5vLpZ3S3u_PebVMDi3wjSHe0LJ-604ZchBY.png?auto=webp&s=754a0c4046f3afb3030a8542ead9ad8c37b1f84a', 'width': 1200}, 'variants': {}}]} |