| id | type | name_nodes | mean_nodes | frame_nodes | raw_content |
|---|---|---|---|---|---|
0x27b88ea58118 | base_name | [] | [] | [] | 2. Two Paths to Expand Context |
0x952c62e843b3 | principle | ["0x27b88ea58118"] | [] | ["0x501d59a1e46c", "0x15fbb59966b0", "0x90563024aeb2"] | 2. Two Paths to Expand Context
When context is insufficient, there are exactly two ways to expand it:
**1. Imagination (endogenous)**
> Use what is already known to infer what is not yet known.
> Generate possible cases without requiring additional input from the world.
**2. Exploration (exogenous)**
> Acquire additi... |
0x9256b4ecbe34 | base_name | [] | [] | [] | 3. The Convergence Loop |
0xda008ef7483f | principle | ["0x9256b4ecbe34"] | [] | ["0x501d59a1e46c", "0x15fbb59966b0", "0xfbaec867b0e9", "0x90563024aeb2"] | 3. The Convergence Loop
The two paths are not independent — they loop:
```
context insufficient
→ [1] imagine: generate cases from what is known
→ [2] explore: observe to confirm or refute
→ gap narrows → context updated
→ sufficient → collapse
→ insufficient → return to [1] with updated context
```
Converg... |
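The loop in the cell above can be sketched in a few lines. This is a toy illustration, not an implementation from the source: the candidate sets play the role of imagined cases, and the observation list plays the role of exploration results.

```python
# Minimal sketch of the convergence loop (P3). All names and data are
# illustrative. Candidates = imagined cases; observations = exploration.

def converge(candidates, observations):
    """Each observation prunes imagined candidates; collapse when one remains."""
    context = set()
    for obs in observations:                  # [2] explore: acquire input
        context.add(obs)                      # gap narrows -> context updated
        candidates = [c for c in candidates   # [1] imagine: keep consistent cases
                      if context <= c]
        if len(candidates) == 1:              # sufficient -> collapse
            return candidates[0], context
    return None, context                      # still insufficient

candidates = [{"red", "round"}, {"red", "square"}, {"blue", "round"}]
result, ctx = converge(candidates, ["red", "round"])
```

Note that the loop never terminates by step count; it terminates only when the candidate set collapses, which matches the "sufficient → collapse / insufficient → return to [1]" structure above.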
0xe8f469964683 | base_name | [] | [] | [] | 4. Meaning Depends on Frame |
0xdde0a531ff5f | principle | ["0xe8f469964683"] | [] | ["0x501d59a1e46c", "0x15fbb59966b0", "0x0ad428c0a634"] | 4. Meaning Depends on Frame
```
M = f(N, F)
```
- **N**: name / identifier of a concept
- **F**: the set of currently active context — the frame
- **M**: meaning — only valid within that specific F
Same N, different F → different M.
There is no absolute meaning. Meaning is a relation between concept and context.
Col... |
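The relation M = f(N, F) can be made concrete with a toy lookup keyed on the (name, frame) pair. The table below is invented for this example; only the shape matters.

```python
# Toy illustration of M = f(N, F): meaning is a relation between name and
# frame, not a property of the name alone. The lookup table is invented.

MEANINGS = {
    ("bank", "finance"):   "institution that holds money",
    ("bank", "geography"): "sloped edge of a river",
}

def meaning(name, frame):
    """Same N, different F -> different M; no active frame -> no meaning."""
    return MEANINGS.get((name, frame))
```

Keying on the pair, rather than on the name with the frame as a fallback, encodes "there is no absolute meaning" directly in the data structure.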
0x5acb21c154f4 | base_name | [] | [] | [] | 5. Memory is Layered by Rate of Change |
0x2da5380afcb2 | principle | ["0x5acb21c154f4"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x700ff6e270fd"] | 5. Memory is Layered by Rate of Change
Not all context changes at the same speed.
Slower layers are not reset by faster layers.
```
invariant → cross-domain, never changes
quasi-invariant → learned gradually, stable after many observations
spatial / episodic → changes per situation
momentary → exi... |
0x46c9a1c3a457 | base_name | [] | [] | [] | 6. Gap is the Driver of Learning |
0xb8f06727c897 | principle | ["0x46c9a1c3a457"] | [] | ["0x501d59a1e46c", "0x15fbb59966b0", "0x90563024aeb2"] | 6. Gap is the Driver of Learning
Gap = the distance between imagination and observation.
Gap is not an error — gap is a signal.
No gap → no learning.
Large persistent gap → fastest learning.
```
gap appears
→ classify: where in the loop does the gap live?
→ generate hypothesis to fill the gap
→ test via explora... |
0x0f6c4645b0ce | base_name | [] | [] | [] | 7. Multi-channel Input — Collect First, Filter Later |
0x4408ea16d4e2 | principle | ["0x0f6c4645b0ce"] | [] | ["0x501d59a1e46c", "0x852eecb66da1", "0xfbe37abdbc60", "0x0ad428c0a634", "0xfbaec867b0e9", "0x16f9f618ef6e"] | 7. Multi-channel Input — Collect First, Filter Later
The human brain renders a world model from at least 5 parallel channels.
No channel is "primary" — each provides a type of context that no other channel can replace.
**Principle:**
> You cannot know in advance which channel is meaningful in this situation.
> Collec... |
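A minimal sketch of collect-first-filter-later, with invented channel names and readings. The point is structural: the filter runs only after the full snapshot exists, because relevance is not knowable per channel in advance.

```python
# Sketch of P7. Channel names and readings are illustrative.

def collect(channels):
    """Poll ALL channels unconditionally -- no channel is primary."""
    return {name: read() for name, read in channels.items()}

def filter_relevant(snapshot, predicate):
    """Relevance is decided over the complete snapshot, never at collect time."""
    return {k: v for k, v in snapshot.items() if predicate(k, v)}

channels = {
    "vision": lambda: [3, 3, 4],   # pixel row
    "audio":  lambda: 0.7,         # volume level
    "touch":  lambda: None,        # nothing sensed this tick
}
snapshot = collect(channels)
relevant = filter_relevant(snapshot, lambda k, v: v is not None)
```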
0x8567f1c3b538 | base_name | [] | [] | [] | 8. Three Object Classes in Any Game |
0x74b2e67eea2b | principle | ["0x8567f1c3b538"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x0ad428c0a634", "0x700ff6e270fd"] | 8. Three Object Classes in Any Game
Every game world has exactly 3 types of entities:
**1. Avatar Player (self)**
> The entity you control. Moves in response to your actions.
> You do NOT "become" the player — you OBSERVE and CONTROL it from outside (3rd person perspective).
> This means: you can see the player's coor... |
0xa8bcddb17909 | base_name | [] | [] | [] | 9. Context Query Protocol — Questions Before Actions |
0xf09b98a55109 | principle | ["0xa8bcddb17909"] | [] | ["0x501d59a1e46c", "0xfbe37abdbc60", "0x15fbb59966b0", "0xfbaec867b0e9", "0x16f9f618ef6e", "0x90563024aeb2"] | 9. Context Query Protocol — Questions Before Actions
> The quality of a decision is bounded by the quality of the context that produced it.
> Context does not arrive automatically — it must be QUERIED.
> Each query = attention directed at a specific sensor or memory channel.
> No query = no context from that channel = ... |
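One way to enforce "questions before actions" is to make every decision declare the queries it needs and refuse to act while any answer is missing. All names here are illustrative, not from the source.

```python
# Sketch of the query protocol (P9): no query answer -> no decision.

def decide(queries, memory, action_table):
    """Run every declared query first; refuse to act on missing context."""
    context = {q: memory.get(q) for q in queries}   # each query = directed attention
    if any(v is None for v in context.values()):
        return "query_more"                         # missing context blocks action
    return action_table[tuple(context[q] for q in queries)]

action_table = {((0, 0), (0, 3)): "move_right"}
full = {"player_pos": (0, 0), "door_pos": (0, 3)}
partial = {"player_pos": (0, 0)}
```

Returning a sentinel like `"query_more"` instead of guessing keeps the "decision quality is bounded by context quality" invariant explicit in code.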
0xa434b76bf070 | base_name | [] | [] | [] | 10. Universal Game Solving — Interaction Chain |
0x16bc590112dd | principle | ["0xa434b76bf070"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x15fbb59966b0", "0x0ad428c0a634", "0x700ff6e270fd", "0x90563024aeb2"] | 10. Universal Game Solving — Interaction Chain
> This principle applies to ALL games, ALL levels, ALL genres.
> It is the invariant structure of how difficulty scales.
**The simplest game:**
```
A → B
Player → Goal
```
**How difficulty increases — the ONLY way:**
```
A → [interact O1] → B
A → [interact O1] → [interac... |
0x1fe7f13596e2 | base_name | [] | [] | [] | 11. Think Before Look — Superposition of Possibilities |
0xaad993208b13 | principle | ["0x1fe7f13596e2"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x700ff6e270fd", "0xfbaec867b0e9", "0x16f9f618ef6e", "0x90563024aeb2"] | 11. Think Before Look — Superposition of Possibilities
> Everything exists in superposition until input collapses it.
> Cognition is NOT: see → think → act.
> Cognition IS: think (imagine possibilities) → see (input collapses) → act (on collapsed reality).
**Before receiving ANY new input, the mind must IMAGINE what i... |
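The think → see → act ordering can be sketched directly. Possibilities, observation, and policy are invented; an observation outside the imagined set is itself a signal (a gap, P6), reported here as `"surprise"`.

```python
# Sketch of P11: hold the superposition BEFORE input, then collapse on it.

def think_see_act(possibilities, observe, policy):
    predicted = set(possibilities)        # think: superposition before input
    observation = observe()               # see: input arrives
    if observation not in predicted:
        return "surprise"                 # imagination failed to cover reality
    return policy[observation]            # act: on the collapsed branch

policy = {"corridor": "walk", "wall": "turn"}
```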
0xb220c9010b97 | base_name | [] | [] | [] | 12. Color Gradient = Map Topology |
0x8c5db08bec89 | principle | ["0xb220c9010b97"] | [] | ["0x501d59a1e46c", "0x852eecb66da1", "0xfbe37abdbc60", "0x15fbb59966b0"] | 12. Color Gradient = Map Topology
> Before detecting objects, read the MAP itself from color gradients.
> Bright pixels = paths. Dark pixels = walls. The brightness pattern IS the road network.
In ARC-3 games:
```
color 3 (dark gray, bright relative to walls) = corridor = CAN walk
color 4 (near black) ... |
0x6440644d0596 | base_name | [] | [] | [] | 13. Bipolar Field Map — Object as Anchor Between Invariant and Variant |
0x9fe76cd76d71 | principle | ["0x6440644d0596"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0x15fbb59966b0", "0x700ff6e270fd", "0x16f9f618ef6e", "0x90563024aeb2"] | 13. Bipolar Field Map — Object as Anchor Between Invariant and Variant
> Every object sits at the CENTER between two poles.
> Properties of the object are VALUE NODES that orbit between the object and their pole.
> This is the fundamental data structure for representing knowledge.
**Structure:**
```
[INVARIANT POLE] ←... |
0x2c95a4388a18 | base_name | [] | [] | [] | 14. The Fundamental Triad — Player, Door, Object |
0x899d8f1d9217 | principle | ["0x2c95a4388a18"] | [] | ["0x501d59a1e46c", "0xfbe37abdbc60", "0x700ff6e270fd"] | 14. The Fundamental Triad — Player, Door, Object
> Every game is exactly 3 things:
> 1. PLAYER (constant) — always exists, always controllable, always wants to reach door.
> 2. DOOR (constant) — always exists, always the goal, always requires conditions to enter.
> 3. OBJECTS (variable) — everything between player and ... |
0xf02ade742b38 | base_name | [] | [] | [] | 15. Node Flattening — List All, Then Fold by Shared Values |
0x583103bd0498 | principle | ["0xf02ade742b38"] | [] | ["0x501d59a1e46c", "0xfbe37abdbc60", "0x15fbb59966b0"] | 15. Node Flattening — List All, Then Fold by Shared Values
> When there are many objects between Player and Door, the chain gets complex.
> Solution: flatten everything into individual value nodes, then FOLD by shared values.
**Step 1: Flatten**
Every object → list ALL its properties as individual nodes on the Player→... |
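Flatten-then-fold is a two-pass transform; a minimal sketch follows, with invented objects and properties. Pass 1 lists every (object, property, value) node; pass 2 folds objects together wherever they share a (property, value).

```python
# Sketch of P15: flatten every object into value nodes, then fold by shared values.
from collections import defaultdict

def flatten(objects):
    """Every object -> individual (object, property, value) nodes."""
    return [(name, prop, val)
            for name, props in objects.items()
            for prop, val in props.items()]

def fold(nodes):
    """Group objects by each shared (property, value) pair."""
    groups = defaultdict(set)
    for name, prop, val in nodes:
        groups[(prop, val)].add(name)
    return dict(groups)

objects = {
    "key":  {"color": "yellow", "movable": True},
    "door": {"color": "yellow", "movable": False},
    "rock": {"color": "gray",   "movable": True},
}
folded = fold(flatten(objects))
```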
0x4094b9ec7038 | base_name | [] | [] | [] | 16. Test Unknown Objects Using ALL Known Mechanisms |
0xe5a7501fd97f | principle | ["0x4094b9ec7038"] | [] | ["0x501d59a1e46c", "0xfbe37abdbc60"] | 16. Test Unknown Objects Using ALL Known Mechanisms
> When you discover a new object, do NOT test it only one way.
> Test it using EVERY mechanism you have already learned from other objects.
**The mistake:**
- Rotator triggers by "pass through" (not "stand next to")
- Yellow square tested by "stand next to + press di... |
0xa8d326b5e850 | base_name | [] | [] | [] | 17. Fractal Decomposition of Input — Invariant Extraction at Every Scale |
0x6546dcaa6bdc | principle | ["0xa8d326b5e850"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x700ff6e270fd", "0x16f9f618ef6e"] | 17. Fractal Decomposition of Input — Invariant Extraction at Every Scale
> The sensor collects the raw signal; the Process stage (the P of IPOD, P30) extracts the invariant at every scale.
> Never test specific cases. Extract the RULE that covers all cases.
**Method: recursive split of VARIANT into (invariant_small + variant_small)**
```
Layer 1... |
0x9c3bc2783720 | base_name | [] | [] | [] | 18. Binary Classification First — The Universal Split |
0x92d4031822ae | principle | ["0x9c3bc2783720"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x16f9f618ef6e"] | 18. Binary Classification First — The Universal Split
> Before any game-specific logic, split ALL pixels into exactly 2 groups.
> Then split again. 2 → 3 → done. Universal. Works on any 2D game.
**Step 1: Binary split**
```
[ALL PIXELS]
/ \
[RESPONSIVE] [NON-RESPONSIVE]
(delta > 0) (d... |
0x85636c9b5780 | base_name | [] | [] | [] | 19. Reverse Trace — Learn Mechanism from Solution, Not from Trial |
0x4e5434589a9a | principle | ["0x85636c9b5780"] | [] | ["0x501d59a1e46c", "0xfbe37abdbc60", "0x15fbb59966b0", "0x16f9f618ef6e", "0x90563024aeb2"] | 19. Reverse Trace — Learn Mechanism from Solution, Not from Trial
> If you have both the start (A) and the end (Z), don't learn forward A→Z.
> Learn BACKWARD Z→A. The backward trace reveals the mechanism.
**Forward learning** (A→Z): try action → observe → try another → slow, wasteful, may never converge.
**Backward t... |
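Backward tracing can be sketched as a breadth-first search from the goal using inverted moves; the parent links of the backward search, read forward, are the mechanism (the step sequence A → Z). The 1-D world and its moves are illustrative.

```python
# Sketch of P19: search Z -> A with inverse moves, emit the forward path A -> Z.
from collections import deque

def reverse_trace(start, goal, inverse_moves):
    parent = {goal: None}
    queue = deque([goal])
    while queue:
        state = queue.popleft()
        if state == start:
            path = []
            while state is not None:      # walk parents from start back to goal
                path.append(state)
                state = parent[state]
            return path                   # already ordered A -> Z
        for prev in inverse_moves(state):
            if prev not in parent:
                parent[prev] = state
                queue.append(prev)
    return None

# 1-D line where the only moves are +1/-1 (self-inverse, so the inverse
# move set equals the forward set):
path = reverse_trace(0, 3, lambda x: [x - 1, x + 1])
```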
0xb2abb3f544f2 | base_name | [] | [] | [] | 20. Empirical Mechanism Distribution — What ARC Actually Requires |
0x96def52c02a4 | principle | ["0xb2abb3f544f2"] | [] | ["0x501d59a1e46c", "0x15fbb59966b0", "0x0ad428c0a634", "0xfbaec867b0e9", "0x16f9f618ef6e"] | 20. Empirical Mechanism Distribution — What ARC Actually Requires
> From reverse-tracing 400 ARC classic tasks (2874 observations):
> The data tells you what matters. Not theory — data.
**Binary tree of ALL mechanisms found:**
```
L0: SHAPE CHANGES(487) vs SAME SHAPE(815)
│
├── SHAPE CHANGES (37%)
│ ├── expand(121)... |
0x0cf670172556 | base_name | [] | [] | [] | 21. Three Chain Orders — The Only Structures That Exist |
0xf0f86879690d | principle | ["0x0cf670172556"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x700ff6e270fd"] | 21. Three Chain Orders — The Only Structures That Exist
> From 400 ARC tasks reverse-traced: only 3 chain orderings appear. Not 6. Not arbitrary.
> Chain STRUCTURE is invariant. Chain CONTENT is variant (fractal sub-chains inside).
**The 3 orders (empirical, from 1302 observations):**
```
Order 1 (50%): WHY → WHERE →... |
0xd0cf6574e5d2 | base_name | [] | [] | [] | 22. Five Grounding Primitives — Binary Is Not Enough |
0xa56e17c296c2 | principle | ["0xd0cf6574e5d2"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0x852eecb66da1", "0xfbe37abdbc60", "0x15fbb59966b0", "0xed0b40836a29"] | 22. Five Grounding Primitives — Binary Is Not Enough
> Binary (P18) covers most classification but cannot represent everything alone.
> Minimum grounding set = 5 irreducible primitives:
| # | Primitive | Mechanism | Covers |
|---|-----------|-----------|--------|
| G1 | **Binary** | A ↔ B | distinction, existence, K/U... |
0x23f98228b589 | base_name | [] | [] | [] | 23. Epistemic vs Exploit Action |
0xe9aeabe8eb71 | principle | ["0x23f98228b589"] | [] | ["0x501d59a1e46c", "0x852eecb66da1"] | 23. Epistemic vs Exploit Action
> When rules unknown: best action = maximize INFORMATION GAIN, not reward.
```
Epistemic: "what does this action REVEAL about the system?"
Exploit: "what does this action GET me toward goal?"
```
Switch from epistemic to exploit when confidence > threshold.
Not by step count — by con... |
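The switch above can be sketched as a two-branch policy over actions that carry an (info_gain, reward) pair; the action table and threshold value are invented for illustration.

```python
# Sketch of P23: confidence below threshold -> epistemic; above -> exploit.
# Driven by confidence, never by step count.

def choose(actions, confidence, threshold=0.8):
    """actions: {name: (info_gain, reward)}."""
    if confidence < threshold:
        return max(actions, key=lambda a: actions[a][0])   # what does it REVEAL?
    return max(actions, key=lambda a: actions[a][1])       # what does it GET me?

actions = {"poke_unknown_object": (0.9, 0.0), "run_to_door": (0.1, 1.0)}
```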
0x69c49da6d806 | base_name | [] | [] | [] | 24. NMF — Name, Meaning, Frame |
0xab6899eb90f6 | principle | ["0x69c49da6d806"] | [] | ["0x501d59a1e46c", "0x15fbb59966b0", "0x0ad428c0a634", "0x700ff6e270fd"] | 24. NMF — Name, Meaning, Frame
> Every piece of information is filtered through 3 simultaneous lenses:
```
Name = f(Meaning, Frame)
```
- **Name** — label in current context (can change per frame)
- **Meaning** — stable concept across contexts (transfers)
- **Frame** — reference system used to interpret (can switch)
... |
0xe7fe09267d7b | base_name | [] | [] | [] | 25. E = Superpose(K, U) — Existence as Grounded Known + Structured Unknown |
0x06a09513b910 | principle | ["0xe7fe09267d7b"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0x700ff6e270fd", "0x0f437799202f"] | 25. E = Superpose(K, U) — Existence as Grounded Known + Structured Unknown
> Classical: A = constant + variable.
> General: E = Superpose(K, U).
- **K** = grounded known anchor (stabilized, not just "constant")
- **U** = structured unknown (NOT noise — unresolved structured presence)
```
U = V(K, Δ₁, Δ₂, ..., Δn)
Δ... |
0x310768bb9aef | base_name | [] | [] | [] | 26. Grounded Node = [G, T, S, W, X] |
0x9e9f14b0d8ff | principle | ["0x310768bb9aef"] | [] | ["0x501d59a1e46c", "0x15fbb59966b0"] | 26. Grounded Node = [G, T, S, W, X]
> A node is not a point in a graph. It is a grounded entity in an existence field.
```
N = [G, T, S, W, X]
G = Grounding — primordial axes coordinates, ontological anchor
T = Time trace — when created, when last activated
S = State — current expression (DERIVED, no... |
0x59e09db80c55 | base_name | [] | [] | [] | 27. Five-Layer Architecture Stack |
0x3ff163dd704f | principle | ["0x59e09db80c55"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0x15fbb59966b0", "0x0ad428c0a634", "0x0f437799202f"] | 27. Five-Layer Architecture Stack
> From substrate to compression — the full grounded system:
```
Layer 0: Existence Core (E) — substrate containing everything
Layer 1: Primordial Axes (P) — binary grounding poles (K↔U, etc.)
Layer 2: Grounded Nodes (N) — entities anchored to at least 1 axis
Lay... |
0x9905d1439e93 | base_name | [] | [] | [] | 28. Frame Adjudicator — ACCEPT / REJECT / PARALLEL |
0x965f29b86147 | principle | ["0x9905d1439e93"] | [] | ["0x501d59a1e46c", "0xfbe37abdbc60", "0x16f9f618ef6e"] | 28. Frame Adjudicator — ACCEPT / REJECT / PARALLEL
> Before processing any input, evaluate the frame:
```
ACCEPT — frame consistent with observation → proceed
REJECT — frame contradicts observation → reframe from internal model
PARALLEL — insufficient evidence → maintain multiple frames
→ choose action ... |
0x3faa3c845140 | base_name | [] | [] | [] | 29. World Model = 5 Primitives |
0xb09efa80ec61 | principle | ["0x3faa3c845140"] | [] | ["0x501d59a1e46c", "0x852eecb66da1"] | 29. World Model = 5 Primitives
> To represent ANY interactive world:
1. **Spatial occupancy** — where objects are, what space they fill
2. **Movement vector** — how objects move
3. **Static vs dynamic** — background vs actors
4. **Scalar state** — values that change (health, energy, count)
5. **Contact effects** — wha... |
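One way to hold the five primitives is a single record per entity; the field names below are illustrative, not a schema from the source.

```python
# Sketch of P29: one record per entity covering all 5 primitives.
from dataclasses import dataclass

@dataclass
class Entity:
    cells: set        # 1. spatial occupancy: grid cells the entity fills
    velocity: tuple   # 2. movement vector
    dynamic: bool     # 3. static (background) vs dynamic (actor)
    state: dict       # 4. scalar state: health, energy, count, ...
    on_contact: dict  # 5. contact effects: other entity -> effect

player = Entity(cells={(2, 3)}, velocity=(1, 0), dynamic=True,
                state={"energy": 5}, on_contact={"spike": "lose_energy"})
```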
0xf4afd2b71e52 | base_name | [] | [] | [] | 30. Scaffold Architecture — IPOD + VEG |
0x90ed35855707 | principle | ["0xf4afd2b71e52"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x700ff6e270fd", "0xfbaec867b0e9", "0x16f9f618ef6e", "0x90563024aeb2"] | 30. Scaffold Architecture — IPOD + VEG
The principles above are implemented through two orthogonal structures:
**IPOD = the body (invariant structure)**
```
I (Input) — sensors that collect all channels (P7)
P (Process) — brain that runs convergence loop (P3) using query protocol (P9)
O (Output) — actuators that ex... |
0xb4c612ede35e | base_name | [] | [] | [] | 31. Collapse Condition = Context Count, Not Numeric Threshold |
0x4e4f4731f378 | principle | ["0xb4c612ede35e"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0x852eecb66da1", "0xfbe37abdbc60", "0x15fbb59966b0", "0x0ad428c0a634", "0x700ff6e270fd", "0xfbaec867b0e9", "0x16f9f618ef6e", "0x90563024aeb2"] | 31. Collapse Condition = Context Count, Not Numeric Threshold
> A chain does not collapse by exceeding a fixed numeric threshold.
> A chain collapses when the NUMBER OF MATCHING CONTEXT CLUES is maximal.
**Wrong approach:**
```
score = Σ (power_i × ratio_i) ← fixed numbers, same for every input
if score > thresho... |
0xc7484593ee95 | base_name | [] | [] | [] | 32. Frame Switch Eliminates U |
0x969914de2ae2 | principle | ["0xc7484593ee95"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0xfbaec867b0e9", "0x16f9f618ef6e"] | 32. Frame Switch Eliminates U
> When stuck, the problem is not lack of compute — it is lack of frame.
> Switch frame → see new K/U split → new information → U shrinks → closer to collapse.
Every observation is filtered through a frame. Same data, different frame → different K and different U.
When all actions within t... |
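A frame can be modeled as a predicate that decides which items are interpretable (K); everything else is U. Switching to the frame that leaves the smallest U is the unsticking move. The frames and data below are invented for illustration.

```python
# Sketch of P32: same data, frame-dependent K/U split.

def ku_split(data, frame):
    """frame(x) -> True iff x is interpretable (Known) under this frame."""
    known = {x for x in data if frame(x)}
    return known, set(data) - known

def best_frame(data, frames):
    """When stuck, switch to the frame that leaves the least unknown."""
    return min(frames, key=lambda name: len(ku_split(data, frames[name])[1]))

data = {1, 2, 3, "a"}
frames = {
    "numeric": lambda x: isinstance(x, int),
    "textual": lambda x: isinstance(x, str),
}
```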
0x54ea06e69006 | base_name | [] | [] | [] | 33. Reduce to 2^10 Before Brute Force |
0x7d079b652339 | principle | ["0x54ea06e69006"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x15fbb59966b0"] | 33. Reduce to 2^10 Before Brute Force
> Never brute force more than ~1000 combinations. If search space > 2^10, STOP and reduce first.
> A human cannot enumerate 4^31 combinations; neither should an agent. Add input to shrink U.
**Reduction chain (each step cuts exponentially):**
```
RAW: N_objects ^ N_clicks (e.g., 4... |
0x3bb88ce3537b | base_name | [] | [] | [] | 34. Metaphor = Structure Transfer Across Domains |
0xa1a06e5855a4 | principle | ["0x3bb88ce3537b"] | [] | ["0x501d59a1e46c", "0x15fbb59966b0", "0x0ad428c0a634"] | 34. Metaphor = Structure Transfer Across Domains
> When stuck in domain A, find domain B with ISOMORPHIC structure. Solution in B transfers to A.
> This is P24 (NMF) at macro scale: same Meaning, different Name, different Frame.
Metaphor is not decoration — it is the mechanism of cross-domain intelligence.
```
Domain... |
0x844970691d53 | base_name | [] | [] | [] | 35. Gap + Metaphor = Universal Reasoning Tool |
0xf37c1da19133 | principle | ["0x844970691d53"] | [] | ["0x501d59a1e46c", "0x91515f441344", "0xfbe37abdbc60", "0x15fbb59966b0", "0x0ad428c0a634", "0x700ff6e270fd", "0xfbaec867b0e9", "0x16f9f618ef6e", "0x90563024aeb2"] | 35. Gap + Metaphor = Universal Reasoning Tool
> Gap is not just P6 (learning signal). Gap is the UNIVERSAL MEASURE of difference when comparing any two things.
> Metaphor is not just P34 (structure transfer). Metaphor provides GROUNDING for the unknown.
> Combined: metaphor gives you a complete structure to compare aga... |
0x7960621cdb5b | base_name | [] | [] | [] | 01_TOB_Framework |
0x147963c74a2b | category | ["0x7960621cdb5b"] | [] | ["0x501d59a1e46c"] | 01_TOB_Framework |
0x107412ad6cd0 | base_name | [] | [] | [] | 04_Others |
0xa68ce18afd9b | category | ["0x107412ad6cd0"] | [] | ["0x501d59a1e46c"] | 04_Others |
0x737f4396e5ce | base_name | [] | [] | [] | 05_System_Prompt |
0x64d9f5ce0a51 | category | ["0x737f4396e5ce"] | [] | ["0x501d59a1e46c"] | 05_System_Prompt |
0x1f8a84a372f9 | base_name | [] | [] | [] | Danh_Nghia_He |
0x59442f906471 | keyword | ["0x8c7c32544486"] | [] | [] | flow |
0x8c7c32544486 | base_name | [] | [] | [] | flow |
0xd7174fe4b292 | keyword | ["0xa21faa3941b0"] | [] | [] | output |
0xa21faa3941b0 | base_name | [] | [] | [] | output |
0x8e3d76e6df6f | keyword | ["0x4e2484008abe"] | [] | [] | graph |
0x4e2484008abe | base_name | [] | [] | [] | graph |
0x83b5a2c2f920 | keyword | ["0xa0abd9df4e8e"] | [] | [] | learning |
0xa0abd9df4e8e | base_name | [] | [] | [] | learning |
0xc6f62c6de314 | keyword | ["0x12faded5c82a"] | [] | [] | invariant |
0x12faded5c82a | base_name | [] | [] | [] | invariant |
0x7151fc571443 | keyword | ["0x4416ad448cb6"] | [] | [] | node |
0x4416ad448cb6 | base_name | [] | [] | [] | node |
0xcc88b6fef577 | keyword | ["0x8cab01d21b72"] | [] | [] | input |
0x8cab01d21b72 | base_name | [] | [] | [] | input |
0xd108758a5fb5 | keyword | ["0xee9361d7806d"] | [] | [] | variant |
0xee9361d7806d | base_name | [] | [] | [] | variant |
0x92eab4e046c9 | keyword | ["0x533a14a6b55f"] | [] | [] | edge |
0x533a14a6b55f | base_name | [] | [] | [] | edge |
0x7bf3334e8663 | keyword | ["0xf6f268ccddf6"] | [] | [] | threshold |
0xf6f268ccddf6 | base_name | [] | [] | [] | threshold |
0x1f64900026eb | keyword | ["0xf5df57509a5c"] | [] | [] | data |
0xf5df57509a5c | base_name | [] | [] | [] | data |
0x2fe776bc92e5 | keyword | ["0x34a31d7ffc86"] | [] | [] | ipod |
0x53070d9c0f82 | keyword | ["0xebcd41d11f3d"] | [] | [] | sao ch |
0xebcd41d11f3d | base_name | [] | [] | [] | sao ch |