# V8 cipher-agnostic byte-amplification detector
## What it is

V8 is a reference detector trained against `corpus_v1.0`–`corpus_v1.5` under NullRabbit's pre-registration discipline. The model is one demonstrable outcome of that methodology; the methodology is the contribution.

This is the work of the substrate paper (in preparation): an iterative leak-surface peeling pattern applied across multiple training cycles, with each cycle pre-registered, audited on close, and retracted in writing when a leak fires. V8 is the cycle that landed `cipher-agnostic-v2`: a manifest of seven byte-amplification features that detect the attack mechanism without relying on any chain-protocol-specific signal. Cross-chain transfer follows from that property: V8 trained on Sui detects Solana byte-amplification attacks at the wire because the wire shape is the same.

The model itself is a calibrated histogram gradient-boosting classifier (`CalibratedClassifierCV(HistGradientBoostingClassifier, method='isotonic', cv=5)`) over seven features, calibrated for operating-point selection. It scores one bundle at a time; it is not a packet-level streaming detector.
V8 is published as the data-layer artefact of NullRabbit Labs' research on autonomous defence for decentralised networks. The governance layer is published separately (see Related).
## Architecture

- Estimator: `CalibratedClassifierCV(HistGradientBoostingClassifier, method='isotonic', cv=5)`
- Manifest: `cipher-agnostic-v2` (7 features). See `feature_names` in the joblib payload.
- Training corpus: 1,972 bundles drawn from `corpus_v1.0`–`corpus_v1.5` (897 attack + 1,075 benign)
- Fidelity filter: `lab` + `lab-tls-fronted`
- Features version: `v1.1`
- Seed: 42
## Features

The `cipher-agnostic-v2` manifest names seven features computed from two bundle modalities:

| Feature | Source modality | Semantics |
|---|---|---|
| `resp.req_bytes_max` | `responses.parquet` | Maximum observed request size in the response time-series |
| `resp.resp_bytes_max` | `responses.parquet` | Maximum observed response size |
| `resp.amp_ratio_max` | `responses.parquet` | Maximum per-request response:request byte ratio |
| `resp.amp_ratio_mean` | `responses.parquet` | Mean response:request byte ratio |
| `resp.amp_ratio_median` | `responses.parquet` | Median response:request byte ratio |
| `pcap.unique_dst_ports` | `packets.pcap` | Distinct destination TCP ports observed (capped at 5) |
| `pcap.unique_src_ports` | `packets.pcap` | Distinct source TCP ports observed (capped at 5) |
Cipher-agnostic means the features are computable on encrypted wire traffic from packet sizes, timing, and cardinality; no cleartext payload bytes are required. This is what lets the pcap-derived cardinality features and the parquet-derived response features work together: at training time on cleartext lab captures, and at inference time on TLS-fronted production traffic.
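The five `resp.*` features can be sketched from a `responses.parquet`-style time-series like so. The column names `req_bytes`/`resp_bytes` and the toy values are assumptions for illustration; the actual schema lives in nr-bundle-spec.

```python
import pandas as pd

# Toy stand-in for a responses.parquet time-series; column names are
# assumed for illustration, not taken from the bundle spec.
df = pd.DataFrame({
    "req_bytes":  [120, 130, 125],
    "resp_bytes": [480, 52_000, 900],
})
ratio = df["resp_bytes"] / df["req_bytes"]  # per-request amplification

features = {
    "resp.req_bytes_max":    float(df["req_bytes"].max()),
    "resp.resp_bytes_max":   float(df["resp_bytes"].max()),
    "resp.amp_ratio_max":    float(ratio.max()),     # 52_000 / 130 = 400.0
    "resp.amp_ratio_mean":   float(ratio.mean()),
    "resp.amp_ratio_median": float(ratio.median()),
}
```

A single 400x-amplified response dominates `amp_ratio_max` while barely moving the median, which is why the manifest carries max, mean, and median together.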
## Training data

The training corpus is proprietary. The training surface is NullRabbit's archived `corpus_v1.0`–`corpus_v1.10` (and beyond); the model was trained on the subset of v1.0–v1.5 at `fidelity_class` ∈ {`lab`, `lab-tls-fronted`}.

A curated, public sample of the corpus is available on Hugging Face as NullRabbit/nr-bundles-public: 31 bundles spanning seven vulnerability families across Sui and Solana, CC-BY-4.0. The bundle format is open and specified at nr-bundle-spec (MIT). External researchers building their own corpus against the spec can reproduce the methodology, retrain V8-class detectors on their own data, and compare against this reference model.
## Intended use

- Reference detector for byte-amplification attacks on validator-infrastructure JSON-RPC endpoints. Trained on Sui `sui_F10_multi_get_objects_amp` and adjacent primitives; transfers cross-chain to Solana `SOL_F10_multi_get_accounts_amp` at 100% recall in the published cross-chain leave-one-primitive-out evaluation.
- Methodology demonstration: V8 is the worked example of how a pre-registered, audit-disciplined training cycle produces a detector whose limitations are characterised honestly. The card's Load-bearing limitations section is the methodology demonstration; the model is the artefact that supports it.
- Reproducibility anchor: train a parallel detector against your own bundle corpus and compare. The seven-feature manifest is the contract.
## Load-bearing limitations

This section is the most important part of the card. Each limitation is anchored in pre-registered evidence and surfaced because it would otherwise become a deployment-time surprise.

### Phase 1 close-gate scope

V8 is Phase-1-close-gate-cleared on `sui_F10_multi_get_objects_amp` at `lab-tls-fronted` fidelity: extractor numerical equivalence between the production extractor (IBSR collect-payload mode at post-term loopback vantage) and the offline reference extractor on all seven features, within `PHASE_1_TOLERANCE`. The model-side close-gate (`PHASE_1_SCORE_CLASS_MATCH` per Decision D-025), which verifies that prediction-class equivalence holds across configuration shifts that move features into and out of the model's training distribution, is still in flight as of this card's date. The numerical-equivalence layer is unblocked; the model-side gate that the deployment claim load-bears on is not.
### Cardinality envelope

V8's `pcap.unique_*_ports` features are extracted with a cap-at-5 ceiling that aligns the IBSR and offline extractors above five distinct source/destination TCP ports per direction. Below five distinct ports, the two extractors diverge by +1 due to IBSR's broader observation coverage (TC-layer control-packet observation plus warmup-window timing). Score interpretation below the envelope is regime-conditional; the close-gate clearance is band-bounded at ≥5-port cardinality.
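A minimal sketch of the cap-at-5 semantics (the function name and the port-list representation are ours for illustration; this is not the IBSR extractor):

```python
def unique_ports_capped(ports, cap=5):
    """Distinct-port cardinality with the cap-at-5 ceiling described above."""
    return min(len(set(ports)), cap)

unique_ports_capped([443, 443, 8080, 9000])     # below the envelope: returns 3
unique_ports_capped([1, 2, 3, 4, 5, 6, 7, 8])   # clamps at the cap: returns 5
```

Above the cap, both extractors agree by construction; below it, the +1 divergence described above means the same bundle can yield different values depending on vantage.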
### Saturation envelope
The IBSR extractor's BPF ringbuf saturates at ~80 MB/sec sustained payload (~3,400 RPCs/sec for F10-class amplification, default 16 MiB ringbuf). Above this rate, feature values under-count along axes the model is most sensitive to (the response-byte-distribution features). Production deployment beyond this saturation envelope will produce regressions in score that look like detection failure but are extraction failure.
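As a back-of-envelope check (the arithmetic and its interpretation are ours, not figures from the card): dividing the sustained payload rate by the RPC rate gives the implied mean per-RPC response payload at the saturation point.

```python
RINGBUF_RATE_BYTES_PER_S = 80e6   # ~80 MB/s sustained payload (from the card)
RPC_RATE_PER_S = 3_400            # ~3,400 RPCs/s for F10-class amplification

mean_payload_bytes = RINGBUF_RATE_BYTES_PER_S / RPC_RATE_PER_S
# roughly 23-24 KB of response payload per RPC at saturation
```

Traffic whose per-RPC responses are larger than this will saturate the ringbuf at a correspondingly lower RPC rate.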
### Out-of-training-distribution attack-shape mis-scoring

V8's training distribution covered F10 reproducer configurations at `--ids-per-request 5/10/25 --workers 1/2/8 --delay-ms 0`. The model has been observed to score "benign" on attack-shape configurations outside that distribution, specifically on the `--ids-per-request 1 --workers 1 --delay-ms 500` low-volume regime captured for the Phase 1 close-gate paired bundles. This is the gap that Decision D-025's `PHASE_1_SCORE_CLASS_MATCH` gate exists to close. V8 is not a universal F10 detector; it is an F10 detector inside its training distribution.
### Cross-chain transfer is class-specific

V8 transfers cleanly cross-chain to `SOL_F10_multi_get_accounts_amp` at 100% recall in the published cross-chain leave-one-primitive-out evaluation. It does not transfer to other Solana classes: the parallel V14 (compute_amp family) and V11 (rate_limiter_bypass family) binary detectors achieve 0% recall on SOL_F14 and SOL_P07 respectively when trained Sui-only and evaluated on Solana. Joint training (the multi-class softmax architecture detailed in companion research) is the architecturally correct fix for those classes; no feature surgery on V8 will produce a model that detects SOL_F14 or SOL_P07.
### Binary detector: family-specific, not universal

V8 is a binary detector trained on the byte-amplification family only (Sui F10, Solana F10). Attacks from other vulnerability families (reconnaissance `nmap_slow`; service_misconfig `ssh_pwauth`, `grafana_anon`; auth_bypass `admin_rpc_probe`; rate_limiter_bypass `simulate_compute_flood`) produce wire shapes V8 does not recognise as attack-shape, and V8 will score them "benign". This is correct behaviour for a family-specific detector, not a failure mode. Production deployment must compose V8 with parallel family detectors (V9 recon, V10 auth, V11 app-DoS, V13 misconfig, V14 compute-amp) or use the multi-class softmax model published separately at NullRabbit/multiclass-folded.
### Empty-bundle mis-scoring

V8 was trained on bundles that observed at least some RPC traffic during the capture window. When `responses.parquet` is missing or has zero rows (typical for passive-workload bundles like `sui_BENIGN_passive_fullnode` and `solana_BENIGN_validator_passive`), the five `resp.*` features collapse to zero. The model's trees have no rules covering that part of feature space and may produce a high attack score on the all-zero vector. The `predict.py` helper shipped with this model (see How to use) applies a scoreability gate that refuses to predict on zero-rows-or-missing-responses bundles; the gate is the recommended mitigation.
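A minimal sketch of such a gate, modelled on the `verdict: "unscoreable"` behaviour described under How to use. The function name, the dict-of-features input, and the return shape are ours; this is not the shipped `predict.py`.

```python
def scoreability_gate(resp_features):
    """Refuse to score when the resp.* features are absent or all zero."""
    if not resp_features or all(v == 0 for v in resp_features.values()):
        return {"verdict": "unscoreable",
                "reason": "responses.parquet missing or zero-rows"}
    return None  # scoreable: proceed to the model

scoreability_gate({})                                  # refused: empty
scoreability_gate({"resp.amp_ratio_max": 0.0,
                   "resp.resp_bytes_max": 0})          # refused: all-zero
scoreability_gate({"resp.amp_ratio_max": 400.0,
                   "resp.resp_bytes_max": 52_000})     # None: scoreable
```

The point is that the refusal happens before `predict_proba` is ever called, so the all-zero vector never reaches the part of feature space the model was not trained on.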
## Disclosure context

The training corpus includes bundles for primitives at varying disclosure states. `SOL_F10_multi_get_accounts_amp` is publicly disclosed per NR-2026-001. Other primitives represent methodology-class findings or are referenced in coordinated-disclosure channels with the respective ecosystems. Disclosure-status information travels with the bundles in NullRabbit/nr-bundles-public; this model card is the inference-layer cross-reference.
## Evaluation

- Training-set decision agreement: 100% (all 1,972 bundles).
- Phase 1 close-gate clearance: 7/7 features pass numerical equivalence between the production extractor and the offline extractor on the held-out + multi-window + low-cardinality + paired bundle sub-experiments (band-bounded as documented above).
- Cross-chain leave-one-primitive-out: 100% recall on `SOL_F10_multi_get_accounts_amp`, zero-shot from Sui training.
Full evaluation evidence and the audit trail live in the substrate paper and in the nr-substrate working repo's `docs/PHASE-1-CLOSE-GATE-CLEARED-2026-05-06.md` and companion artefacts. The substrate paper is in preparation.
## How to use

### Recommended path: `predict.py` (scoreability-gated)

The repository ships with `predict.py`, a thin scoreability-gated inference helper that wraps the raw estimator with two production-side gates:

- Scoreability gate: refuses to score bundles where `responses.parquet` is missing or zero-rows. V8's training distribution doesn't cover all-zero feature vectors (see "Empty-bundle mis-scoring" in Load-bearing limitations above), so the gate returns an explicit `verdict: "unscoreable"` instead of a spurious attack score on passive-workload bundles.
- Feature-coverage gate: emits a `feature_coverage` flag (`"full"` when raw `packets.pcap` is present; `"resp_only"` when it isn't) so callers can downweight or ignore predictions where the two cardinality features defaulted to 0.
```python
from huggingface_hub import hf_hub_download
from predict import load_v8, score_bundle

model_path = hf_hub_download(
    repo_id="NullRabbit/v8-cipher-agnostic", filename="model.joblib"
)

payload = load_v8(model_path)
record = score_bundle("/path/to/some/bundle_dir", payload)

if record["verdict"] == "unscoreable":
    print(f"refused: {record['reason']}")
else:
    print(f"V8 score: {record['v8_score']:.4f} ({record['verdict']}, "
          f"coverage={record['feature_coverage']})")
```
`predict.py` depends on the bundle-spec reference parser:

```shell
pip install git+https://github.com/NullRabbitLabs/nr-bundle-spec.git
```
For a full worked example that loads a bundle from NullRabbit/nr-bundles-public via the spec parser, applies the scoreability gate, and renders verdicts on attack, benign, and passive-benign samples, see `inference_example.py`.
### Bypassing the gate
Callers with their own pre-filtering pipeline (or who explicitly want the raw model output) can load the estimator directly:
```python
import joblib
import numpy as np

payload = joblib.load(model_path)
model = payload["model"]             # CalibratedClassifierCV
features = payload["feature_names"]  # 7-feature contract

X = np.array([[...]])                # shape (n_samples, 7)
score = model.predict_proba(X)[:, 1]
```
This path is the responsibility of the caller. If you feed an all-zero feature vector to `model.predict_proba`, V8 will return ~0.9977, which is spurious. The scoreability gate exists for exactly that case. See the Load-bearing limitations section.
## Methodology
NullRabbit's training cycles follow pre-registration discipline. Each cycle has a design document committed before the trainer runs. Audits run on close against sanity floors, per-feature ablation trails, and falsification holdouts. Where an audit fires, training halts, the design is re-registered, and the prior version is retracted in writing.
The iterative leak-surface peeling pattern is the methodology contribution: detection of a training-time leak (a feature whose discriminative signal turns out to come from a labelling artefact or capture-pipeline asymmetry rather than from the attack mechanism) triggers a corpus delta + re-train + re-audit, with each cycle narrowing the leak surface. V8 is the cycle that landed when the methodology's leak-surface was small enough that the manifest generalised across chains; the cycles before it (V1βV7) closed specific leaks named in the substrate paper's leak-surface appendix.
The corpus format and family taxonomy are open at nr-bundle-spec. The methodology is open (in preparation as the substrate paper). The specific corpus contents beyond nr-bundles-public are proprietary.
## Related

- Bundle format spec: nr-bundle-spec (MIT)
- Reference public bundles: NullRabbit/nr-bundles-public (CC-BY-4.0)
- Earned-autonomy paper (governance layer for autonomous defence for decentralised networks): Zenodo DOI 10.5281/zenodo.18406828
- Substrate paper (data-layer methodology, in preparation)
- NullRabbit Labs: huggingface.co/NullRabbit
- Website: nullrabbit.ai
## Citation

```bibtex
@misc{nullrabbit_v8_cipher_agnostic_2026,
  author    = {NullRabbit},
  title     = {V8 cipher-agnostic byte-amplification detector},
  year      = {2026},
  month     = may,
  version   = {1},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/NullRabbit/v8-cipher-agnostic},
  note      = {Reference binary detector for byte-amplification attacks on
               validator-infrastructure JSON-RPC endpoints. Trained on the
               bundle v1 corpus specified at nr-bundle-spec v0.1.0; curated
               public sample at NullRabbit/nr-bundles-public.},
}
```
## Contact

Research enquiries: simon@nullrabbit.ai

Spec compliance or format questions: open an issue at nr-bundle-spec.