Instructions for using amazingvince/Nanonets-OCR-s-Fast-Preprocessor with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use amazingvince/Nanonets-OCR-s-Fast-Preprocessor with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="amazingvince/Nanonets-OCR-s-Fast-Preprocessor")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("amazingvince/Nanonets-OCR-s-Fast-Preprocessor")
model = AutoModelForImageTextToText.from_pretrained("amazingvince/Nanonets-OCR-s-Fast-Preprocessor")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
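On a GPU machine you will typically want to control dtype and device placement when loading the model directly. A minimal sketch using standard Transformers loading options (the dtype and device choices below are illustrative defaults, not taken from the model card):

```python
import torch
from transformers import AutoModelForImageTextToText

# Standard Transformers options for GPU inference; adjust to your hardware.
model = AutoModelForImageTextToText.from_pretrained(
    "amazingvince/Nanonets-OCR-s-Fast-Preprocessor",
    torch_dtype=torch.bfloat16,  # use torch.float16 on GPUs without bf16 support
    device_map="auto",           # place weights on the available GPU(s)
)
```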
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use amazingvince/Nanonets-OCR-s-Fast-Preprocessor with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "amazingvince/Nanonets-OCR-s-Fast-Preprocessor"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amazingvince/Nanonets-OCR-s-Fast-Preprocessor",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker:
```bash
docker model run hf.co/amazingvince/Nanonets-OCR-s-Fast-Preprocessor
```
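Since the server exposes an OpenAI-compatible API, any OpenAI client works in place of curl. A minimal sketch with the official `openai` Python package (the base URL assumes the default `vllm serve` port used above; the API key is a placeholder, since a local vLLM server does not check it by default):

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="amazingvince/Nanonets-OCR-s-Fast-Preprocessor",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```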
- SGLang
How to use amazingvince/Nanonets-OCR-s-Fast-Preprocessor with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "amazingvince/Nanonets-OCR-s-Fast-Preprocessor" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amazingvince/Nanonets-OCR-s-Fast-Preprocessor",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images:
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "amazingvince/Nanonets-OCR-s-Fast-Preprocessor" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amazingvince/Nanonets-OCR-s-Fast-Preprocessor",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
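The SGLang endpoint is OpenAI-compatible as well, so a plain HTTP call from Python also works. A minimal sketch with `requests`, assuming the server above is running locally on port 30000:

```python
import requests

# Same payload as the curl example above, sent to the local SGLang server.
payload = {
    "model": "amazingvince/Nanonets-OCR-s-Fast-Preprocessor",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
}
response = requests.post("http://localhost:30000/v1/chat/completions", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```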
- Docker Model Runner
How to use amazingvince/Nanonets-OCR-s-Fast-Preprocessor with Docker Model Runner:
```bash
docker model run hf.co/amazingvince/Nanonets-OCR-s-Fast-Preprocessor
```
What makes this different from the base model?
What exactly does Fast-Preprocessor mean?
By default, the base model's preprocessor is set to "slow". This was necessary when Qwen 2.5 VL was released, because the fast and slow preprocessors produced significantly different results. That issue has since been partially fixed (PR).
In vLLM I was seeing strange behavior and couldn't fully saturate the GPU, so I wanted to try the fast preprocessor. Since vLLM doesn't let you override this setting at runtime, I uploaded a copy of the model with the preprocessor set to "fast".
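In Transformers terms, the difference comes down to which image-processor class the checkpoint's preprocessor config resolves to. A minimal sketch for checking this yourself (the base repo id `nanonets/Nanonets-OCR-s` is an assumption; it is not named above):

```python
from transformers import AutoImageProcessor

# Assumed base repo id vs. the fast copy documented above.
base = AutoImageProcessor.from_pretrained("nanonets/Nanonets-OCR-s")
fast = AutoImageProcessor.from_pretrained("amazingvince/Nanonets-OCR-s-Fast-Preprocessor")

print(type(base).__name__)  # expected: the slow, PIL-based image processor
print(type(fast).__name__)  # expected: the torchvision-backed "fast" variant

# In Transformers (unlike vLLM at serve time) you can also opt in per load:
opt_in = AutoImageProcessor.from_pretrained("nanonets/Nanonets-OCR-s", use_fast=True)
```

Baking the choice into the repo's config is what lets runtimes like vLLM, which read the preprocessor config as-is, pick up the fast path automatically.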