NVIDIA Nemotron Parse v1.1 (GGUF, Q4_K_M)

Production-ready GGUF quantization of nvidia/nemotron-parse-v1.1 for distributed document parsing and extraction, powered by the Aether edge inference runtime.

Highlights

  • ~2B parameters: NVIDIA document parsing model that extracts structured data from documents, PDFs, and web pages
  • ~2 GB at Q4_K_M quantization, optimized for distributed edge inference
  • LLaMA architecture: proven, stable, well tested
  • Aether runtime compatible: layer-sharded across distributed nodes via Edgework.ai

Model Details

| Property      | Value                      |
|---------------|----------------------------|
| Base model    | nvidia/nemotron-parse-v1.1 |
| Parameters    | ~2B                        |
| Architecture  | LLaMA                      |
| Quantization  | Q4_K_M                     |
| Format        | GGUF                       |
| Size          | ~2 GB                      |
| License       | other                      |
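
To pull the quantized weights locally, the huggingface_hub Python client can download a single GGUF file. This is a minimal sketch; the repo_id below is a placeholder, so substitute the actual repository path for this release.

```python
# Sketch: download the Q4_K_M GGUF with huggingface_hub (pip install huggingface_hub).
# The repo_id is a placeholder -- replace it with this model's actual repository path.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="your-org/nvidia-nemotron-parse-v1.1-gguf",   # hypothetical repo id
    filename="nvidia-nemotron-parse-v1.1-q4_k_m.gguf",
)
print(model_path)  # local cache path to the ~2 GB GGUF file
```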

Usage

With llama.cpp

./llama-cli -m nvidia-nemotron-parse-v1.1-q4_k_m.gguf -p "Your prompt here" -n 256
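
If you would rather drive the same GGUF from Python, the llama-cpp-python bindings wrap the llama.cpp runtime. This is a minimal sketch, assuming the file sits in the current directory and that a plain text prompt suits your extraction task.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="nvidia-nemotron-parse-v1.1-q4_k_m.gguf",  # path to the downloaded GGUF
    n_ctx=4096,  # context window; adjust for long documents
)

# Mirrors the llama-cli invocation above: one prompt, up to 256 new tokens.
out = llm("Your prompt here", max_tokens=256)
print(out["choices"][0]["text"])
```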

With Aether (Distributed Inference)

This model is deployed across the Aether distributed inference network. Weights are layer-sharded and distributed across multiple edge nodes for parallel inference.
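
Client access to the network is over HTTP to a coordinator node. The endpoint URL, JSON fields, and auth header below are purely illustrative placeholders, not a published Aether API; consult Edgework.ai for the actual interface.

```python
# Hypothetical request to an Aether coordinator node.
# The URL, JSON fields, and auth header are assumptions, not a documented API.
import requests

resp = requests.post(
    "https://aether.example.com/v1/generate",            # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},     # placeholder auth
    json={
        "model": "nvidia-nemotron-parse-v1.1-q4_k_m",
        "prompt": "Extract the line items from this invoice:\n...",
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```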

Deployment Architecture

This model runs on the Aether distributed inference runtime, our custom engine that shards model layers across multiple nodes for parallel execution (a toy sketch of the flow follows the list):

  1. Coordinator receives requests and manages token generation
  2. Layer nodes each hold a subset of model layers
  3. Hidden states flow between nodes via gRPC
  4. Zero cold start via warm pool scheduling
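
To make the layer-sharding idea concrete, here is a toy, framework-free sketch of the flow described above: a coordinator pushes hidden states through a chain of nodes, each holding a contiguous slice of layers. It is purely illustrative; real Aether nodes exchange hidden states over gRPC rather than in-process calls.

```python
# Toy illustration of coordinator / layer-node sharding (not Aether's actual code).
# Each "node" holds a contiguous slice of layers; hidden states hop node to node.
from typing import Callable, List

Layer = Callable[[List[float]], List[float]]

class LayerNode:
    """Holds a subset of the model's layers and applies them in order."""
    def __init__(self, layers: List[Layer]):
        self.layers = layers

    def forward(self, hidden: List[float]) -> List[float]:
        for layer in self.layers:
            hidden = layer(hidden)
        return hidden  # in Aether this result would be sent to the next node via gRPC

class Coordinator:
    """Receives a request and pipelines hidden states across the nodes."""
    def __init__(self, nodes: List[LayerNode]):
        self.nodes = nodes

    def generate_step(self, hidden: List[float]) -> List[float]:
        for node in self.nodes:
            hidden = node.forward(hidden)
        return hidden

# Example: 8 dummy layers sharded evenly across 2 nodes.
dummy_layers: List[Layer] = [lambda h, i=i: [x + i for x in h] for i in range(8)]
nodes = [LayerNode(dummy_layers[:4]), LayerNode(dummy_layers[4:])]
print(Coordinator(nodes).generate_step([0.0, 1.0]))
```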

Deployed via Edgework.ai, bringing fast, cheap, and private inference as close to the user as possible.

About

Published by AFFECTIVELY · Managed by @buley

We quantize and publish production-ready models for distributed edge inference via the Aether runtime. Every release is tested for correctness and stability before publication.
