---
title: LTMarX
emoji: šŸŽ¬
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: false
---

# LTMarX — Video Watermarking

Imperceptible 32-bit watermarking for video. Embeds a payload into the luminance channel using DWT/DCT transform-domain quantization (DM-QIM) with BCH error correction. Survives re-encoding, rescaling, brightness/contrast/saturation adjustments, and cropping up to ~20%. All processing runs in the browser — no server round-trips needed.

## Quick Start

```bash
npm install
npm run dev   # Web UI at localhost:5173
npm test      # Run test suite
```

## CLI

```bash
npx tsx server/cli.ts embed -i input.mp4 -o output.mp4 --key SECRET --preset moderate --payload DEADBEEF
npx tsx server/cli.ts detect -i output.mp4 --key SECRET
npx tsx server/cli.ts presets
```

## Docker

```bash
docker build -t ltmarx .
docker run -p 7860:7860 ltmarx
```

## Architecture

```
core/       Pure TypeScript watermark engine (isomorphic, zero platform deps)
ā”œā”€ā”€ dwt.ts        Haar DWT (forward/inverse, multi-level)
ā”œā”€ā”€ dct.ts        8Ɨ8 DCT with zigzag scan
ā”œā”€ā”€ dmqim.ts      Dither-Modulated QIM (embed/extract with soft decisions)
ā”œā”€ā”€ bch.ts        BCH(63,36,5) over GF(2^6), Berlekamp-Massey decoding
ā”œā”€ā”€ crc.ts        CRC-4 integrity check
ā”œā”€ā”€ tiling.ts     Periodic tile layout + autocorrelation-based grid recovery
ā”œā”€ā”€ masking.ts    Perceptual masking (variance-adaptive quantization step)
ā”œā”€ā”€ keygen.ts     Seeded PRNG for dithers and permutations
ā”œā”€ā”€ embedder.ts   Y-plane → watermarked Y-plane
ā”œā”€ā”€ detector.ts   Y-plane(s) → payload + confidence
ā”œā”€ā”€ presets.ts    Named configurations (light → fortress)
└── types.ts      Shared types

web/        Frontend (Vite + React + Tailwind)
ā”œā”€ā”€ src/
│   ā”œā”€ā”€ App.tsx
│   ā”œā”€ā”€ components/
│   │   ā”œā”€ā”€ EmbedPanel.tsx       Upload, configure, embed, download
│   │   ā”œā”€ā”€ DetectPanel.tsx      Upload, detect, display results
│   │   ā”œā”€ā”€ ComparisonView.tsx   Side-by-side / difference viewer
│   │   ā”œā”€ā”€ RobustnessTest.tsx   Automated attack battery (re-encode, crop, etc.)
│   │   ā”œā”€ā”€ HowItWorks.tsx       Interactive explainer with D3 visualizations
│   │   ā”œā”€ā”€ StrengthSlider.tsx   Preset selector with snap points
│   │   ā”œā”€ā”€ ResultCard.tsx       Detection result display
│   │   └── ApiDocs.tsx          Inline API reference
│   ā”œā”€ā”€ lib/
│   │   └── video-io.ts          Frame extraction, encoding, attack simulations
│   └── workers/
│       └── watermark.worker.ts
└── index.html

server/     Node.js CLI + HTTP API
ā”œā”€ā”€ cli.ts        CLI for embed/detect
ā”œā”€ā”€ api.ts        HTTP server (serves web UI + REST endpoints)
└── ffmpeg-io.ts  FFmpeg subprocess for YUV420p I/O

tests/      Vitest test suite
```

**Design principle:** `core/` has zero platform dependencies — it operates on raw `Uint8Array` Y-plane buffers. The same code runs in the browser (via Canvas + ffmpeg.wasm) and on the server (via Node.js + FFmpeg).

## Watermarking Pipeline

### Embedding

```
Y plane → 2-level Haar DWT → HL subband → periodic tile grid →
per tile: 8Ɨ8 DCT blocks → select mid-freq zigzag coefficients →
DM-QIM embed coded bits (with per-block dithering and perceptual masking) →
inverse DCT → inverse DWT → modified Y plane
```

### Payload Encoding

```
32-bit payload → CRC-4 append → BCH(63,36,5) encode → keyed interleave →
map to DCT coefficients across tiles (with wraparound redundancy)
```

### Detection

```
Y plane(s) → DWT → HL subband → tile grid → per tile: DCT →
DM-QIM soft extract → soft-combine across tiles and frames →
keyed de-interleave → BCH soft decode → CRC verify → payload
```

### Crop-Resilient Detection

When the frame has been cropped, the detector doesn't know the original tile grid alignment. It searches over three alignment parameters:

1. **DWT padding** (0–3 per axis) — the crop may break DWT pixel pairing
2. **DCT block shift** (0–7 per axis) — the crop may misalign 8Ɨ8 block boundaries within the subband
3. **Tile dither offset** (0–N per axis) — the crop shifts which tile-phase position each block maps to

The total search space is 16 Ɨ 64 Ɨ N² candidates (~37K for the strong preset). To make this fast:

- DCT coefficients are precomputed once per (pad, shift) combination using only tile 0
- Dither offsets are swept cheaply using just DM-QIM re-extraction on cached coefficients
- Candidates are ranked by signal magnitude (sum of squared averaged soft bits)
- Only the top 50 candidates are fully decoded with all frames

This runs in ~1 second for 32 frames on a 512Ɨ512 video.

## Presets

| Preset | Delta | Tile Period | Zigzag Positions | Masking | Use Case |
|--------|-------|-------------|------------------|---------|----------|
| **Light** | 50 | 256px | 3–14 (mid-freq) | No | Near-invisible, mild compression |
| **Moderate** | 62 | 240px | 3–14 (mid-freq) | Yes | Balanced with perceptual masking |
| **Strong** | 110 | 208px | 1–20 (low+mid) | Yes | Heavy re-encoding, rescaling, cropping |
| **Fortress** | 150 | 192px | 1–20 (low+mid) | Yes | Maximum robustness |

All presets use BCH(63,36,5) with CRC-4 and 2-level DWT. Higher delta = stronger embedding = more visible artifacts but better survival under attacks. The "strong" and "fortress" presets use more DCT coefficients (zigzag positions 1–20 vs 3–14) for additional redundancy.

## Robustness

The web UI includes an automated robustness test battery.
Each test applies an attack to the watermarked video and attempts detection:

| Attack | Variants Tested |
|--------|-----------------|
| **Re-encode** | CRF 23, 28, 33, 38, 43 |
| **Downscale** | 25%, 50%, 75%, 90% |
| **Brightness** | -0.2, +0.2, +0.4 |
| **Contrast** | 0.5Ɨ, 1.5Ɨ, 2.0Ɨ |
| **Saturation** | 0Ɨ, 0.5Ɨ, 2.0Ɨ |
| **Crop** | 5%, 10%, 15%, 20% (per side) |

## API

### Embedding

```typescript
import { embedWatermark } from './core/embedder';
import { getPreset } from './core/presets';

const config = getPreset('moderate');
const result = embedWatermark(yPlane, width, height, payload, key, config);
// result.yPlane: watermarked Y plane (Uint8Array)
// result.psnr: quality metric (dB)
```

### Detection

```typescript
import { detectWatermarkMultiFrame } from './core/detector';
import { getPreset } from './core/presets';

const result = detectWatermarkMultiFrame(yPlanes, width, height, key, config);
// result.detected: boolean
// result.payload: Uint8Array | null
// result.confidence: 0–1
```

### Crop-Resilient Detection

```typescript
const result = detectWatermarkMultiFrame(
  yPlanes, width, height, key, config,
  { cropResilient: true }
);
```

### Auto-Detection (tries all presets)

```typescript
import { autoDetectMultiFrame } from './core/detector';

const result = autoDetectMultiFrame(yPlanes, width, height, key);
// result.presetUsed: which preset matched
```

## HTTP API

```
POST /api/embed    { videoBase64, key, preset, payload }
POST /api/detect   { videoBase64, key, preset?, frames? }
GET  /api/health   → { status: "ok" }
```

## Testing

```bash
npm test            # Run all tests
npm run test:watch  # Watch mode
```

25 tests across 6 files covering: DWT round-trip, DCT round-trip, DM-QIM embed/extract, BCH encode/decode with error correction, CRC append/verify, full embed-detect pipeline across presets, false positive rejection (wrong key, unwatermarked frame), crop-resilient detection (arbitrary offset and ~20% crop).
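To make the DM-QIM step in the pipeline above concrete, here is a self-contained toy version of dither-modulated QIM on a single coefficient. This is an illustrative sketch, not the `core/dmqim.ts` API: the function names, dither convention, and soft-decision scaling are assumptions.

```typescript
// Toy dither-modulated QIM: two quantizer lattices of step `delta`,
// offset from each other by delta/2, select which lattice encodes the bit.

function qimEmbed(x: number, bit: 0 | 1, delta: number, dither: number): number {
  const d = dither + (bit * delta) / 2;          // lattice offset for this bit
  return Math.round((x - d) / delta) * delta + d; // snap x to the bit's lattice
}

function qimExtract(y: number, delta: number, dither: number): { bit: 0 | 1; soft: number } {
  // Distance from y to the nearest point of each lattice; the closer one wins.
  const dist = (b: 0 | 1) => Math.abs(y - qimEmbed(y, b, delta, dither));
  const d0 = dist(0);
  const d1 = dist(1);
  // Soft decision: positive favors bit 1, magnitude reflects confidence.
  return { bit: d0 <= d1 ? 0 : 1, soft: (d0 - d1) / delta };
}

// The lattices are delta/2 apart, so the hard decision survives any
// additive perturbation smaller than delta/4 — which is why the higher-delta
// presets trade visibility for robustness.
const marked = qimEmbed(123.4, 1, 62, 10);
const { bit } = qimExtract(marked + 12, 62, 10); // noise 12 < 62/4 → bit stays 1
```

In the real pipeline these soft values are what get averaged across tiles and frames before the BCH soft decode.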
## Browser Encoding

The web UI encodes watermarked video using ffmpeg.wasm (x264 in WebAssembly). To avoid memory pressure, frames are encoded in chunks of 100 and concatenated at the end. Peak memory stays proportional to chunk size rather than scaling with video length.
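The chunking strategy can be sketched as a small driver loop. This is a hypothetical helper for illustration — the actual logic lives in `web/src/lib/video-io.ts` and feeds each chunk to ffmpeg.wasm:

```typescript
// Illustrative chunk-wise encoding driver (not the real video-io.ts code).
// Only one chunk of frames is in flight per iteration, so peak memory is
// O(CHUNK_SIZE) rather than O(total frames).
const CHUNK_SIZE = 100;

async function encodeInChunks<Frame, Segment>(
  frames: Frame[],
  encodeChunk: (chunk: Frame[]) => Promise<Segment>,
): Promise<Segment[]> {
  const segments: Segment[] = [];
  for (let i = 0; i < frames.length; i += CHUNK_SIZE) {
    segments.push(await encodeChunk(frames.slice(i, i + CHUNK_SIZE)));
  }
  return segments; // concatenated into the final file by the caller
}
```

Awaiting each chunk before starting the next is the point of the design: it serializes the encoder work so memory never holds more than one chunk of raw frames plus its encoded segment.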