---
title: Coherent Compute Engine
emoji: 🌖
colorFrom: pink
colorTo: red
sdk: gradio
sdk_version: 6.2.0
app_file: app.py
pinned: false
license: other
short_description: Live coherence + throughput benchmark (no precomputed results)
thumbnail: >-
https://cdn-uploads.huggingface.co/production/uploads/685edcb04796127b024b4805/
---
# Coherent Compute Engine
**Coherent Compute Engine** is a live benchmark Space that measures **real** compute throughput and stability for a coherent state update rule — with **no precomputed results** and **no estimates**.
It’s designed to be understandable, verifiable, and brutally honest:
- Everything is computed **now**, on the Space machine.
- Baselines (Python loop + vectorised NumPy) are measured **live** on the **same machine**.
- Results can be downloaded as a **receipt** (JSON) with a SHA-256 hash.
## What an “item” is
**One item = one coherent update of `[Ψ, E, L]` per oscillator per step.**
So:
`items/sec = (N oscillators × steps) / elapsed_seconds`
We report throughput in **billions of items/sec** (“B/s”).
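The definition above maps directly to a one-line calculation. A minimal sketch (the function name is illustrative, not from `app.py`):

```python
# Sketch: throughput in billions of items/sec, where
# one item = one coherent update of [Psi, E, L] per oscillator per step.
def throughput_b_per_s(n_oscillators: int, steps: int, elapsed_seconds: float) -> float:
    items = n_oscillators * steps          # total coherent updates performed
    return items / elapsed_seconds / 1e9   # report in B/s
```

For example, 1,000,000 oscillators over 1,000 steps in 0.5 s is 2.0 B/s.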
## What this Space measures
For the chosen oscillator count and step count, it reports:
- **Throughput (B/s)**: billions of coherent updates per second
- **Coherence (|C|)**: a stability proxy computed from a normalised dot product of sampled `Ψ` before/after
- **Mean Energy**: bounded mean proxy from `E` in `[0, 1.5]`
- **Elapsed Time (s)**
- **Engine**: `numba` when available; otherwise `numpy`
- **Verification baselines (optional)**:
- **Baseline (Vectorised NumPy)**
- **Baseline (Python loop, capped)** — safety-capped and subset-based to keep the Space responsive
- **Speedup factors** vs those baselines
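The coherence proxy described above can be sketched as a normalised dot product of sampled `Ψ` vectors. This is an assumption about the formula, not the Space's exact implementation:

```python
import numpy as np

# Sketch of the |C| stability proxy: normalised dot product of sampled
# psi before and after the run (illustrative; app.py may differ in detail).
def coherence(psi_before: np.ndarray, psi_after: np.ndarray) -> float:
    num = abs(float(np.dot(psi_before, psi_after)))
    den = float(np.linalg.norm(psi_before) * np.linalg.norm(psi_after))
    return num / den if den > 0.0 else 0.0
```

An unchanged state gives |C| = 1; orthogonal before/after samples give 0.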
## Receipts: verification you can download
Each run emits a small JSON “receipt” containing:
- input settings (N, steps)
- engine name
- measured metrics
- runtime info
- **SHA-256 hash** of the canonical JSON
This supports the “don’t trust it, verify it” approach.
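A receipt of this shape can be built in a few lines. The field names below are assumptions for illustration; only the pattern (canonical JSON, then SHA-256) reflects the description above:

```python
import hashlib
import json

# Sketch: hash the canonical (sorted-key, compact) JSON of the receipt body,
# then attach the digest so anyone can recompute and verify it.
def make_receipt(settings: dict, engine: str, metrics: dict, runtime: dict) -> dict:
    body = {"settings": settings, "engine": engine,
            "metrics": metrics, "runtime": runtime}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    body["sha256"] = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return body
```

To verify, strip the `sha256` field, re-serialise canonically, and compare digests.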
## Why baselines exist (and why they’re not a contest)
Baselines are **verification anchors**:
- They show what “normal” Python looks like (slow floor)
- They show what vectorised NumPy looks like (standard reference)
- They show what the engine path achieved under the same rules
No claims about beating GPUs or other systems. Just measured, reproducible data.
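The two baselines can be sketched as follows. The tanh-based update rule here is a stand-in assumption, not the Space's actual rule; only the measurement pattern (same machine, same rules, Python loop time-capped) matches the description:

```python
import math
import time
import numpy as np

def numpy_baseline(psi: np.ndarray, steps: int) -> float:
    """Vectorised NumPy reference: whole-array update per step."""
    start = time.perf_counter()
    for _ in range(steps):
        psi = np.tanh(psi) * 0.99  # stand-in coherent update
    return time.perf_counter() - start

def python_loop_baseline(psi: list, steps: int, time_cap: float = 0.25):
    """Pure-Python floor, time-capped so the Space stays responsive."""
    start = time.perf_counter()
    completed = 0
    for _ in range(steps):
        psi = [math.tanh(x) * 0.99 for x in psi]
        completed += 1
        if time.perf_counter() - start > time_cap:
            break  # safety cap: report partial steps, not a hung Space
    return time.perf_counter() - start, completed
```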
## Running locally
```bash
pip install -r requirements.txt
python app.py
```

## Safety rails
To keep the Space stable:
- oscillator count is clamped to a safe max
- steps are clamped
- the Python loop baseline is time-capped and subset-based
That ensures the Space stays responsive while still measuring real throughput.
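The clamping rails amount to bounding user inputs before the run. The limits below are placeholder assumptions; the real values live in `app.py`:

```python
# Sketch of the input rails (limits are illustrative assumptions).
MAX_OSCILLATORS = 50_000_000
MAX_STEPS = 10_000

def clamp_inputs(n_oscillators: int, steps: int) -> tuple[int, int]:
    """Clamp both knobs into [1, max] before the benchmark runs."""
    return (max(1, min(n_oscillators, MAX_OSCILLATORS)),
            max(1, min(steps, MAX_STEPS)))
```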
---
Built by RFTSystems.
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference