---
license: gpl-3.0
tags:
- lora
- merge
- comfyui
- stable-diffusion
configs:
- config_name: config
  data_files: config/*.json
features:
- name: algo_version
  dtype: string
- name: arch_preset
  dtype: string
- name: lora_content_hashes
  sequence: string
- name: score
  dtype: float64
- name: config
  dtype:
    struct:
    - name: merge_mode
      dtype: string
    - name: sparsification
      dtype: string
    - name: sparsification_density
      dtype: float64
    - name: dare_dampening
      dtype: float64
    - name: merge_refinement
      dtype: string
    - name: auto_strength
      dtype: string
    - name: optimization_mode
      dtype: string
    - name: strategy_set
      dtype: string
- name: candidates
  sequence:
    struct:
    - name: rank
      dtype: int64
    - name: config
      dtype:
        struct:
        - name: merge_mode
          dtype: string
        - name: sparsification
          dtype: string
        - name: sparsification_density
          dtype: float64
        - name: dare_dampening
          dtype: float64
        - name: merge_refinement
          dtype: string
        - name: auto_strength
          dtype: string
        - name: optimization_mode
          dtype: string
        - name: strategy_set
          dtype: string
    - name: score_heuristic
      dtype: float64
    - name: score_measured
      dtype: float64
    - name: score_final
      dtype: float64
---
# LoRA Optimizer — Community Cache

Shared analysis results for the LoRA Optimizer ComfyUI node.
LoRA merge analysis is hardware-agnostic — the same LoRA files always produce the same conflict metrics and optimal merge config regardless of GPU tier. This dataset lets users share and reuse those results so nobody has to run the AutoTuner from scratch.
## How It Works
The AutoTuner computes pairwise conflict metrics (cosine similarity, sign conflicts, subspace overlap) and tests merge parameter combinations to find the best config for a set of LoRAs. These results are keyed by content hash (SHA256[:16] of file contents) — not by filename — so they're portable across systems and private by design.
When `community_cache=upload_and_download` is set in the AutoTuner node:
- Download: Before running analysis, the node checks this dataset for existing results. A config hit skips the entire sweep (~30–120s saved). LoRA/pair cache hits speed up the analysis phase even without a full config hit.
- Upload: After a successful sweep (or when replaying from local memory), results are uploaded if the local score beats the current community score for that LoRA set.
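The download path above amounts to a lookup-then-fallback flow. Here is a minimal sketch under assumed names: `resolve_config`, `fetch_remote`, and `run_sweep` are hypothetical stand-ins for the node's internals, and the key layout mirrors the `config/` filename scheme documented below.

```python
def resolve_config(hashes, arch, fetch_remote, run_sweep):
    # Cache key: sorted content hashes joined with "_", plus the arch preset
    # (assumption: mirrors the config/ filename scheme of this dataset).
    key = "_".join(sorted(hashes)) + f"_{arch}"
    cached = fetch_remote(f"config/{key}.config.json")
    if cached is not None:
        return cached          # full config hit: skip the ~30-120s sweep
    return run_sweep(hashes)   # cache miss: run the local AutoTuner sweep
```

Sorting the hashes before joining makes the key order-independent, so the same LoRA set always resolves to the same cache entry.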
## Privacy
LoRA filenames are never stored here. Only SHA256[:16] content hashes are used as keys. The uploaded data contains:
- Per-prefix conflict metrics (cosine similarity, sign conflict ratios, subspace overlap)
- Winning merge configuration (sparsification method, merge strategy, refinement level, etc.)
- A composite quality score
No file paths, no usernames, no LoRA names.
## File Structure
```
lora/
  {content_hash}.lora.json                   # Per-LoRA per-prefix conflict stats
pair/
  {hash_a}_{hash_b}.pair.json                # Pairwise conflict metrics (hashes sorted)
config/
  {hash_a}_{hash_b}_..._{arch}.config.json   # Best merge config + score for a LoRA set
```
All files include an `algo_version` field. Results from incompatible algorithm versions are ignored automatically.
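That version gate can be sketched as a simple filter at load time. A minimal sketch, assuming a hypothetical `ALGO_VERSION` constant and helper name; only the behavior (incompatible records are treated as cache misses) comes from the text above.

```python
import json

ALGO_VERSION = "3"  # assumption: whatever version the installed node uses

def load_compatible(raw: bytes):
    """Parse a cached JSON record; return None for incompatible versions."""
    record = json.loads(raw)
    if record.get("algo_version") != ALGO_VERSION:
        return None  # treated as a cache miss, per the rule above
    return record
```

Returning `None` rather than raising keeps stale community entries harmless: the node simply recomputes locally.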
## Usage
In the LoRA AutoTuner node, set `community_cache` to `upload_and_download`. That is the only cache-enabled option; there is no passive download-only mode. If you benefit from the cache, you contribute back.
| Value | Behavior |
|---|---|
| `disabled` (default) | No network interaction |
| `upload_and_download` | Download precomputed results and contribute yours back |
Network errors are silently ignored — the node always falls back to local computation.
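The fail-open behavior described above can be expressed as a small wrapper. A sketch under assumed names (`cached_or_local`, and `OSError` as the caught error class); the source only states that network failures never block local computation.

```python
def cached_or_local(fetch, compute):
    """Try the community cache first; on any network error, fall back silently."""
    try:
        result = fetch()
        if result is not None:
            return result      # cache hit
    except OSError:
        pass                   # offline / flaky network: ignore, never surface
    return compute()           # always able to produce a result locally
```

The node therefore behaves identically to `disabled` mode whenever the network is unavailable, just slower.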
## Setup

One time:

```shell
pip install huggingface_hub
huggingface-cli login
```

The node picks up your stored token automatically; no environment variables are needed for most users. For headless/server use, set `HF_TOKEN` as an environment variable instead.
Then: set `community_cache=upload_and_download` in the AutoTuner node and run as normal. Everything else is automatic.
## Score-Based Replacement
Configs are only uploaded when your local score beats the community score. Users who run more thorough sweeps (e.g. `top_n=10`), or whose hardware makes such sweeps practical, naturally contribute higher-quality results over time.
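The replacement rule reduces to one comparison. A minimal sketch with an assumed helper name; the "beats" condition and the treat-missing-as-beatable behavior follow from the description above.

```python
def should_upload(local_score: float, community_score) -> bool:
    """Upload only when the local result strictly beats the community one."""
    if community_score is None:
        return True   # no community entry yet for this LoRA set
    return local_score > community_score
```

Requiring a strict improvement means the community entry is monotonically non-decreasing in quality: uploads can only raise the best known score for a given LoRA set.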