google/gemma-4-31B-it
LiteRT-LM Optimized

Deterministic Projection Memory (DPM) Artifact for Security Telemetry


🟢 Overview

This repository contains a specialized LiteRT-LM conversion of google/gemma-4-31B-it. It is engineered for local-first DPM//BENCH experiments, specifically targeting long-horizon incident narratives and red-team traces.

Objective: Package the base instruction model into a runtime format for deterministic projection memory experiments, ensuring that append-only event logs map to a consistent structured memory surface.
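The core DPM contract above can be sketched in a few lines of plain Python. This is an illustrative model of the idea only, not part of LiteRT-LM or this repository: `project` and `surface_bytes` are hypothetical names, and the point is that folding the same append-only log always yields byte-identical surface serializations.

```python
# Illustrative sketch of deterministic projection memory (DPM):
# an append-only event log folds into a structured memory surface,
# and identical logs serialize to identical bytes.
# `project` / `surface_bytes` are hypothetical names, not LiteRT-LM APIs.
import hashlib
import json

def project(events: list[dict]) -> dict:
    """Fold an append-only event log into a structured memory surface."""
    surface: dict = {"incidents": {}, "event_count": 0}
    for ev in events:
        # Events are grouped per incident; the log itself is never mutated.
        surface["incidents"].setdefault(ev["incident_id"], []).append(ev["summary"])
        surface["event_count"] += 1
    return surface

def surface_bytes(surface: dict) -> bytes:
    # Canonical serialization: sorted keys, fixed separators,
    # so equal surfaces always produce equal bytes.
    return json.dumps(surface, sort_keys=True, separators=(",", ":")).encode()

log = [
    {"incident_id": "INC-7", "summary": "ssh brute force from 10.0.0.4"},
    {"incident_id": "INC-7", "summary": "lateral movement to db host"},
]
a = hashlib.sha256(surface_bytes(project(log))).hexdigest()
b = hashlib.sha256(surface_bytes(project(log))).hexdigest()
assert a == b  # same log, same memory-surface bytes
```

The same hash comparison generalizes to the model artifact: if two projection runs over the same log disagree at the byte level, the artifact fails the DPM contract.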


🛠️ Conversion Architecture

The conversion uses the current LiteRT-LM stack and requires specific flags to handle the Gemma 4 per-layer embedding structure.

Conversion script:

```shell
python -m litert_torch.generative.export_hf \
    --model /path/to/google/gemma-4-31B-it \
    --output_dir /path/to/out/gemma-4-31B-it-litert-lm \
    --externalize_embedder True \
    --single_token_embedder True \
    --experimental_lightweight_conversion True \
    --bundle_litert_lm True \
    --task text_generation
```

Critical Flags for Compatibility:

  • --externalize_embedder True: Essential for per-layer embedding paths.
  • --experimental_lightweight_conversion True: Prevents runtime artifact corruption.
  • --bundle_litert_lm True: Packages tokenizer and templates into the .litertlm artifact.

💻 Infrastructure Requirements

| Requirement | Specification | Context |
|---|---|---|
| RAM | 128 GB+ | Minimum for 31B conversion overhead |
| Disk space | 500 GB | Workspace for intermediate FlatBuffer assets |
| Storage type | NVMe SSD | Crucial for large-model serialization |
| Inference | Apple Silicon / GPU | 31B is unsuitable for fast CPU-only DPM |

🔍 Validation Protocol

For a successful DPM//BENCH run, the artifact must maintain byte-stability. Ensure the following conditions are met:

  1. Integrity: LiteRT-LM binary successfully parses the .litertlm bundle.
  2. Determinism: At temperature 0 with a fixed seed, repeated projection calls must yield identical memory-surface bytes.
  3. Format: JSON-only prompts must satisfy schema constraints under high-compression DPM tests.
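Conditions 2 and 3 can be checked with a small harness. The sketch below assumes a callable wrapping the LiteRT-LM binary; since that invocation is environment-specific, `run_projection` here is a deterministic stand-in (its name and shape are assumptions, not a real LiteRT-LM API), while the digest-set and JSON checks are the actual validation logic.

```python
# Sketch of the determinism (step 2) and JSON-only format (step 3) checks.
# `run_projection` is a hypothetical stand-in for invoking the model at
# temperature 0 with a fixed seed (e.g. via subprocess); replace it with
# your real wrapper.
import hashlib
import json

def run_projection(prompt: str, seed: int = 0) -> bytes:
    # Stand-in for the real model call; deterministic by construction.
    payload = {"prompt": prompt, "seed": seed, "surface": ["evt-a", "evt-b"]}
    return json.dumps(payload, sort_keys=True).encode()

def check_determinism(prompt: str, runs: int = 3) -> bool:
    # Step 2: all runs must hash to a single digest (byte-stability).
    digests = {hashlib.sha256(run_projection(prompt)).hexdigest() for _ in range(runs)}
    return len(digests) == 1

def check_json_only(output: bytes) -> bool:
    # Step 3: the output must be one well-formed JSON document.
    try:
        json.loads(output)
        return True
    except ValueError:
        return False

assert check_determinism("summarize incident INC-7")
assert check_json_only(run_projection("summarize incident INC-7"))
```

A real harness would additionally validate the parsed JSON against the DPM schema; the parse check above is only the minimal format gate.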

⚠️ Implementation Boundaries

  • Intended Use: Security incident summarization, telemetry trace compression, and blue-team event reasoning.
  • Out-of-Scope Use: This is not a standalone decision-making system; it is a projection tool. All outputs require human review and replay validation in high-stakes environments.

Base Model: google/gemma-4-31B-it
