# texformer-576m-int8_dynamic

This repository contains a TeXformer 576M checkpoint in `int8_dynamic` precision for OCR-to-LaTeX generation. It is a custom TeXformer architecture checkpoint (`model.pt`) plus tokenizer assets, not a standard `transformers` `AutoModel` checkpoint. The export is derived from a checkpoint trained in bf16.
## Files

- `model.pt`: TeXformer checkpoint
- `tokenizer/pdf_tokenizer.json`: PDF-side tokenizer
- `tokenizer/latex_tokenizer.json`: LaTeX-side tokenizer
- `tokenizer/pdf_tags.json`: frequent PDF tag metadata
- `tokenizer/latex_commands.json`: frequent LaTeX command metadata
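To verify this layout locally, a minimal sketch (requires only `huggingface_hub`; prints each file relative to the download root):

```python
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the repository snapshot and list the files it ships with.
local_dir = Path(snapshot_download(repo_id="aamingem/texformer-576m-int8_dynamic"))
for path in sorted(local_dir.rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```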
## Architecture

- Parameters (deduplicated): 576,538,624
- Parameters (state_dict entries): 625,690,624
- Encoder layers: 16
- Decoder layers: 16
- Hidden size (`d_model`): 1024
- Attention heads: 16
- Feed-forward size (`d_ff`): 4224
- Max encoder length: 2560
- Max decoder length: 2560
- Stored precision: `int8_dynamic`
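The two parameter figures differ because some tensors appear under more than one `state_dict` key (shared weights count once in the deduplicated figure). A rough way to reproduce both numbers from the float skeleton, assuming the `checkpoint["config"]` payload shown under Usage below; buffers, if any, are included, so treat the totals as approximate:

```python
import torch

from texformer.models.model import TeXFormer, TeXFormerConfig

# Rebuild the float skeleton from the checkpoint's config payload.
checkpoint = torch.load("model.pt", map_location="cpu", weights_only=False)
model = TeXFormer(TeXFormerConfig(**checkpoint["config"]))
state = model.state_dict()

# Every state_dict entry as stored: shared/tied tensors are counted twice.
total = sum(t.numel() for t in state.values())

# Deduplicate by underlying storage so tied weights count only once.
unique = sum(
    t.numel()
    for t in {t.untyped_storage().data_ptr(): t for t in state.values()}.values()
)

print(f"state_dict entries: {total:,}  deduplicated: {unique:,}")
```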
## Quantization

- Quantization method: `int8_dynamic`
- Checkpoint payload key: `model_state_dict`
- Runtime support: CPU only for quantized execution (`torch.ao.quantization.quantize_dynamic`).
- CUDA/MPS: not supported for running this model in quantized INT8 form.
- Original model training precision: bf16
- Sample tensor dtype: `torch.float32`
- Notes: load by applying `quantize_dynamic` to the TeXFormer skeleton before calling `load_state_dict`, as shown under Usage.
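To make the CPU-only constraint concrete, a minimal self-contained sketch using a toy stack of `nn.Linear` layers as a stand-in for the TeXFormer skeleton (not the real model):

```python
import torch
import torch.nn as nn

# Toy stand-in, sized like one feed-forward block (d_model=1024, d_ff=4224).
toy = nn.Sequential(nn.Linear(1024, 4224), nn.ReLU(), nn.Linear(4224, 1024))
toy_int8 = torch.ao.quantization.quantize_dynamic(toy, {nn.Linear}, dtype=torch.qint8)

# quantize_dynamic swaps each nn.Linear for a dynamically quantized CPU module.
print(toy_int8[0])  # DynamicQuantizedLinear(in_features=1024, out_features=4224, ...)

# Only the Linear weights are stored as int8; activations (and everything
# outside the Linear layers) stay float32, consistent with the sampled
# tensor dtype above, and the int8 kernels run on CPU only.
out = toy_int8(torch.randn(1, 1024))
print(out.dtype)  # torch.float32
```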
## Usage

```python
from pathlib import Path

import torch
import torch.nn as nn
from huggingface_hub import snapshot_download

from texformer.models.model import TeXFormer, TeXFormerConfig
from texformer.tokenization.tokenizer import TeXFormerTokenizer

repo_id = "aamingem/texformer-576m-int8_dynamic"
local_dir = Path(snapshot_download(repo_id=repo_id))
tokenizer_dir = local_dir / "tokenizer"

checkpoint = torch.load(local_dir / "model.pt", map_location="cpu", weights_only=False)

# Device detection is informational only: dynamic int8 execution stays on CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

config = TeXFormerConfig(**checkpoint["config"])
model = TeXFormer(config)

# Dynamic int8 quantization is CPU-oriented in PyTorch: quantize the float
# skeleton first so the state_dict keys match, then load the quantized weights.
model_int8 = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
model_int8.load_state_dict(checkpoint["model_state_dict"], strict=False)

if device.type != "cpu":
    print(f"Using {device.type} tokenizer path; model runs on CPU for dynamic int8.")

tokenizer = TeXFormerTokenizer(tokenizer_dir)
print("Loaded dynamic int8 model with tokenizer:", tokenizer.pdf_vocab_size, tokenizer.latex_vocab_size)
```
## Intended Use
- OCR-to-LaTeX / PDF-text-to-LaTeX sequence generation
- Research and experimentation on scientific document conversion
## Limitations
- May produce incorrect or non-compiling LaTeX.
- Performance depends on input extraction quality.
- Not intended for high-stakes use without human verification.