diff --git "a/patterns/structured-generation/vlm-structured-generation.html" "b/patterns/structured-generation/vlm-structured-generation.html"
--- "a/patterns/structured-generation/vlm-structured-generation.html"
+++ "b/patterns/structured-generation/vlm-structured-generation.html"
@@ -257,7 +248,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

3.1 Introduction

In this chapter we’ll start to look at how we can use Vision Language Models (VLMs) to extract structured information from images of documents.

-

We already saw what this looked like at a conceptual level in the previous chapter. In this chapter we’ll get hands on with some code examples to illustrate how this can be done in practice. To start we’ll focus on some relativey simple documents and tasks. This allows us to focus on the core concepts without getting bogged down in too many complexities. It also means we can use open source models that can be run locally.

+

We already saw what this looked like at a conceptual level in the previous chapter. In this chapter we’ll get hands-on with some code examples to illustrate how this can be done in practice. To start, we’ll focus on some relatively simple documents and tasks. This allows us to focus on the core concepts without getting bogged down in too many complexities. We’ll use open source models accessed via the Hugging Face Inference API (you can also run them locally; see the appendix).

3.2 The Sloane Index Cards Dataset

@@ -267,7 +258,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

The dataset is available in Parquet format on Hugging Face so it can be easily loaded using the datasets library.

Let’s load the dataset and take a look at one row.

-
+
from datasets import load_dataset
 
 ds = load_dataset("biglam/sloane-catalogues", split="train")
@@ -283,7 +274,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin
 

We can see that we have a dictionary that contains an image as well as some additional metadata fields.

Let’s take a look at an actual example image from the dataset.

-
+
ds[0]["image"]
@@ -294,7 +285,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

Let’s look at a couple more examples to get a sense of the variety in the dataset.

-
+
ds[2]["image"]
@@ -305,7 +296,7 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

Here’s one more from later in the dataset:

-
+
ds[50]["image"]
@@ -320,478 +311,124 @@ pre > code.sourceCode > span > a:first-child::before { text-decoration: underlin

3.3 Setup

-
-

3.3.1 Start LM Studio

- -

We’ll use LM Studio for this notebook. Since we’ll be using the OpenAI Python client to interact with models run by LM Studio it will be fairly easy to switch to a different model/tool for running the models since many tools have an OpenAI compatible endpoint.

-
-
-
- -
-
-

If you haven’t already, make sure to install LM Studio by following the instructions on the LM Studio website.

-
-
-
-

While LM Studio is primarily known as a GUI tool for interacting with local LLMs, it also includes a built-in API server that is compatible with the OpenAI API. This allows us to use the same code we would use for OpenAI hosted models to interact with local models running in LM Studio.

-

LM Studio has a command line interface (CLI) that we can use to start the server. We can check that the lms command is available here:

-
-
!lms
-
-
-
   __   __  ___  ______          ___        _______   ____
-
-  / /  /  |/  / / __/ /___ _____/ (_)__    / ___/ /  /  _/
-
- / /__/ /|_/ / _\ \/ __/ // / _  / / _ \  / /__/ /___/ /  
-
-/____/_/  /_/ /___/\__/\_,_/\_,_/_/\___/  \___/____/___/  
-
-
-
-lms - LM Studio CLI - v0.0.47
-
-GitHub: https://github.com/lmstudio-ai/lms
-
-
-
-Usage
-
-Usage: lms [options] [command]
-
-
-
-LM Studio CLI
-
-
-
-Options:
-
-      -h, --help  display help for command
-
-
-
-Manage Models:
-
-      get         Searching and downloading a model from online.
-
-      import      Import a model file into LM Studio
-
-      ls          List all downloaded models
-
-
-
-Use Models:
-
-      chat        Open an interactive chat with the currently loaded model.
-
-      load        Load a model
-
-      ps          List all loaded models
-
-      server      Commands for managing the local server
-
-      unload      Unload a model
-
-
-
-Develop & Publish Artifacts:
-
-      clone       Clone an artifact from LM Studio Hub to a local folder.
-
-      create      Create a new project with scaffolding
-
-      dev         Starts the development server for the plugin in the current folder.
-
-      login       Authenticate with LM Studio
-
-      push        Uploads the plugin in the current folder to LM Studio Hub.
-
-
-
-System Management:
-
-      bootstrap   Bootstrap the CLI
-
-      flags       Set or get experiment flags
-
-      log         Log operations. Currently only supports streaming logs from LM Studio via `lms log
-
-                  stream`
-
-      runtime     Manage runtime engines
-
-      status      Prints the status of LM Studio
-
-      version     Prints the version of the CLI
-
-
-
-Commands:
-
-      help        display help for command
-
-
-
-
-

We can check that LM Studio server is running by using the lms server start command.

-
-
!lms server start
-
-
Success! Server is now running on port 1234
-
-
-
-
-

3.3.2 Connect to LM Studio

-

We can use the OpenAI Python client (TODO add link), to connect to LM studio. By default LM studio is running on port 1234 on localhost so we can connect to it here. The default api_key is lm-studio.

-
-
from openai import OpenAI
-
-client = OpenAI(
-    base_url="http://localhost:1234/v1",
-    api_key="lm-studio"
-)
-
-

We can use various different methods with the client, for example we can access the models available:

-
-
from rich import print as rprint
-models = client.models.list()
-rprint(f"Connected. Models: {[m.id for m in models.data]}")
-
-
Connected. Models: ['qwen3-vl-2b-instruct-mlx', 'qwen/qwen2.5-vl-7b', 'qwen/qwen3-vl-8b', 'qwen/qwen3-vl-4b', 
-'text-embedding-nomic-embed-text-v1.5', 'qwen3-vl-30b-a3b-instruct', 'qwen3-vl-30b-a3b-thinking@4bit', 
-'qwen3-vl-30b-a3b-thinking@3bit', 'qwen/qwen3-4b-thinking-2507', 'google/gemma-3-12b', 'google/gemma-3-4b', 
-'qwen2-0.5b-instruct-fingreylit', 'google/gemma-3n-e4b', 'granite-vision-3.3-2b', 'ibm/granite-4-h-tiny', 
-'iconclass-vlm', 'mlx-community/qwen2.5-vl-3b-instruct', 'lmstudio-community/qwen2.5-vl-3b-instruct', 
-'lfm2-vl-1.6b', 'mimo-vl-7b-rl-2508@q4_k_s', 'mimo-vl-7b-rl-2508@q8_0', 'qwen3-30b-a3b-instruct-2507', 
-'qwen3-4b-instruct-2507-mlx', 'openai/gpt-oss-20b', 'mistralai/mistral-small-3.2', 
-'qwen3-30b-a3b-instruct-2507-mlx', 'liquid/lfm2-1.2b', 'smollm3-3b-mlx', 'unsloth/smollm3-3b', 
-'ggml-org/smollm3-3b', 'mlx-community/smollm3-3b']
-
-
-
- -
- +
+
+
import os
+from openai import OpenAI
+from rich import print as rprint
+from dotenv import load_dotenv
+load_dotenv()
+
+client = OpenAI(
+    base_url="https://router.huggingface.co/v1",
+    api_key=os.environ.get("HF_TOKEN"),
+)
+
+

We’ll use Qwen/Qwen3-VL-8B-Instruct throughout this chapter — an 8 billion parameter vision-language model that offers a good balance of quality and speed for document understanding tasks.

3.4 Basic VLM Query

Let’s start by defining a simple function that we can use to query a VLM with an image and a prompt. This function will handle converting the image to base64 and sending the request to the model.

-

For this notebook we’ll default to using the qwen3-vl-2b model which is a small 2 billion parameter model that can be run locally in LM Studio. We may want to experiment with different models later on or try slightly bigger models but this one should be sufficient for our initial experiments.

-
+

We’ll default to using the Qwen/Qwen3-VL-8B-Instruct model via HF Inference Providers. You could experiment with different models later on — the code works with any OpenAI-compatible VLM endpoint.

+
-
import base64
-from PIL.Image import Image as PILImage
-from io import BytesIO
-
-def query_image(image: str | PILImage, prompt: str, model: str='qwen3-vl-2b-instruct-mlx', max_image_size: int=1024):
-    """Query VLM with an image."""
-    if isinstance(image, PILImage):
-        # Convert PIL Image to bytes and encode to base64
-        buffered = BytesIO()
-        # ensure image is not too big
-        if image.size > (max_image_size, max_image_size):
-            image.thumbnail((max_image_size, max_image_size))   
-        image.save(buffered, format="JPEG")
-        image_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
-    else:
-        # Assume image is a file path
-        with open(image, "rb") as f:
-            image_base64 = base64.b64encode(f.read()).decode('utf-8')
-    #
-    # Query
-    response = client.chat.completions.create(
-        model=model,
-        messages=[{
-            "role": "user",
-            "content": [
-                {"type": "text", "text": prompt},
-                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}}
-            ]
-        }]
-    )
-    return response.choices[0].message.content
+
import base64
+from PIL.Image import Image as PILImage
+from io import BytesIO
+
+def query_image(image: str | PILImage, prompt: str, model: str='Qwen/Qwen3-VL-8B-Instruct', max_image_size: int=1024, client=client) -> str:
+    """Query VLM with an image."""
+    if isinstance(image, PILImage):
+        # Convert PIL Image to bytes and encode to base64
+        buffered = BytesIO()
+        # ensure image is not too big
+    if max(image.size) > max_image_size:
+        image.thumbnail((max_image_size, max_image_size))
+    # JPEG can't encode modes like RGBA or P, so convert first if needed
+    if image.mode not in ("RGB", "L"):
+        image = image.convert("RGB")
+    image.save(buffered, format="JPEG")
+        image_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
+    else:
+        # Assume image is a file path
+        with open(image, "rb") as f:
+            image_base64 = base64.b64encode(f.read()).decode('utf-8')
+    #
+    # Query
+    response = client.chat.completions.create(
+        model=model,
+        messages=[{
+            "role": "user",
+            "content": [
+                {"type": "text", "text": prompt},
+                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}}
+            ]
+        }]
+    )
+    return response.choices[0].message.content

3.5 Simple VLM Query Example

To get started, let’s run a simple query to describe an image from the dataset.

-
-
image = ds[0]["image"]
-
-# Query the VLM to describe the image
-description = query_image(image, "Describe this image.", model='qwen3-vl-2b-instruct-mlx')
-rprint(description)
+
+
image = ds[0]["image"]
+
+# Query the VLM to describe the image
+description = query_image(image, "Describe this image.", model='Qwen/Qwen3-VL-8B-Instruct')
+rprint(description)
-
This is a library reference card from The British Library's Reference Division, specifically for the Reprographic 
-Section. It is a form used to catalog and manage manuscripts.
-
-The card has several fields filled in with handwritten information, likely for a specific manuscript. The main 
-details are:
-
-- **Department:** Manuscripts
-- **Shelfmark:** SLDANE 3972 C. (Vol 1)
-- **Order SCH No:** 98876
-- **Author:** SIR HANS SLOANES LIBRARY (This appears to be a typographical error, likely meant to be "SIR HANS 
-SLOANES")
-- **Title:** CATALOGUE OF SIR HANS SLOANES LIBRARY
-- **Place and date of publication:** (This field is blank)
-- **Centimetres:** 1, 2, 3, 4, 5
-- **Inches:** 1, 2
-
-The card also includes a reduction number: "RD RS8" and "Reduction 12". The card is for a manuscript titled 
-"Catalogue of Sir Hans Sloane's Library" with the shelfmark SLDANE 3972 C. (Vol 1) and Order SCH No 98876.
-
-The card is from The British Library, Reprographic Section, and the address is Gt Russell St, London WC1B 3DG.
+
This is a black-and-white image of a form from The British Library's Reprographic Section, used to request a copy 
+of a manuscript or archival material.
+
+Here is a breakdown of the information on the form:
+
+*   **Institution:** The British Library, Reference Division, Reprographic Section.
+*   **Address:** Great Russell Street, London WC1B 3DG.
+*   **Department:** Manuscripts.
+*   **Shelfmark:** SLOANE 3972.C. (Vol. 1)
+*   **Order Number:** SCH NO 98876
+*   **Author:** This field is blank.
+*   **Title:** CATALOGUE OF SIR HANS SLOANES LIBRARY
+*   **Place and date of publication:** This field is blank.
+*   **Scale/Reduction:** The form indicates a reduction of 12, meaning the reproduction will be scaled down to 
+1/12th of the original size. A ruler scale is provided for reference in centimetres and inches.
+
+The form appears to be filled out by hand for a specific item: Volume 1 of the "Catalogue of Sir Hans Sloane's 
+Library," which is held in the Manuscripts department under the shelfmark SLOANE 3972.C. This catalogue was 
+compiled by Sir Hans Sloane himself and is a significant historical document detailing his extensive collection, 
+which later formed part of the foundation of the British Museum and now resides at the British Library.
 

We get a fairly useful description of the card. Comparing it against the image, most of the details it mentions appear to be correct.

-
-
image
-
+
+
image
+
-

+

@@ -802,48 +439,49 @@ The card is from The British Library, Reprographic Section, and the address is G

3.6 Classification

We’ll define a fairly simple prompt that asks the VLM to assign each page to one of three categories. We describe each category and ask the model to return only the category name as its output. We’ll run this on ten examples and log how long it takes.

-
-
import time
-from tqdm.auto import tqdm
-
-sample_size = 10
-
-sample = ds.take(sample_size)
-
-prompt = """Classify this image into one of the following categories:
-
-1. **Index/Reference Card**: A library catalog or reference card
-
-2. **Manuscript Page**: A handwritten or historical document page
-
-3. **Other**: Any document that doesn't fit the above categories
-
-Examine the overall structure, layout, and content type to determine the classification. Focus on whether the document is a structured catalog/reference tool (Index Card) or a historical manuscript with continuous text (Manuscript Page).
-
-Return only the category name: "Index/Reference Card", "Manuscript Page", or "Other"
-"""
-
-results = []
-# Time the execution using standard Python
-start_time = time.time()
-for row in tqdm(sample):
-    image = row['image']
-    results.append(query_image(image, prompt))
-elapsed_time = time.time() - start_time
-print(f"Execution time: {elapsed_time:.2f} seconds")
-rprint(results)
+
+
import time
+from tqdm.auto import tqdm
+from rich import print as rprint
+
+sample_size = 10
+
+sample = ds.take(sample_size)
+
+prompt = """Classify this image into one of the following categories:
+
+1. **Index/Reference Card**: A library catalog or reference card
+
+2. **Manuscript Page**: A handwritten or historical document page
+
+3. **Other**: Any document that doesn't fit the above categories
+
+Examine the overall structure, layout, and content type to determine the classification. Focus on whether the document is a structured catalog/reference tool (Index Card) or a historical manuscript with continuous text (Manuscript Page).
+
+Return only the category name: "Index/Reference Card", "Manuscript Page", or "Other"
+"""
+
+results = []
+# Time the execution using standard Python
+start_time = time.time()
+for row in tqdm(sample):
+    image = row['image']
+    results.append(query_image(image, prompt, model='Qwen/Qwen3-VL-8B-Instruct'))
+elapsed_time = time.time() - start_time
+print(f"Execution time: {elapsed_time:.2f} seconds")
+rprint(results)
-
Execution time: 100.05 seconds
+
Execution time: 18.31 seconds
[
     'Index/Reference Card',
-    'Manuscript Page',
+    'Other',
     'Manuscript Page',
     'Manuscript Page',
     'Manuscript Page',
@@ -857,34 +495,34 @@ The card is from The British Library, Reprographic Section, and the address is G
 

Let’s check the result that was predicted as “Index/Reference Card”:

-
-
sample[0]['image']
-
+
+
sample[0]['image']
+
-

+

We can extrapolate how long this would take for the full dataset:

-
-
# Calculate average time per image
-avg_time_per_image = elapsed_time / sample_size
-
-# Project time for full dataset
-total_images = len(ds)
-projected_time = avg_time_per_image * total_images
-
-print(f"Sample processing time: {elapsed_time:.2f} seconds ({elapsed_time/60:.2f} minutes)")
-print(f"Average time per image: {avg_time_per_image:.2f} seconds")
-print(f"Total images in dataset: {total_images}")
-print(f"Projected time for full dataset: {projected_time/60:.2f} minutes ({projected_time/3600:.2f} hours)")
+
+
# Calculate average time per image
+avg_time_per_image = elapsed_time / sample_size
+
+# Project time for full dataset
+total_images = len(ds)
+projected_time = avg_time_per_image * total_images
+
+print(f"Sample processing time: {elapsed_time:.2f} seconds ({elapsed_time/60:.2f} minutes)")
+print(f"Average time per image: {avg_time_per_image:.2f} seconds")
+print(f"Total images in dataset: {total_images}")
+print(f"Projected time for full dataset: {projected_time/60:.2f} minutes ({projected_time/3600:.2f} hours)")
-
Sample processing time: 100.05 seconds (1.67 minutes)
-Average time per image: 10.01 seconds
+
Sample processing time: 18.31 seconds (0.31 minutes)
+Average time per image: 1.83 seconds
 Total images in dataset: 2734
-Projected time for full dataset: 455.91 minutes (7.60 hours)
+Projected time for full dataset: 83.45 minutes (1.39 hours)
@@ -892,53 +530,53 @@ Projected time for full dataset: 455.91 minutes (7.60 hours)

In the previous example, we relied on the model to return the label in the correct format. While this often works, it can sometimes lead to inconsistencies in the output. To address this, we can use Pydantic models to define a structured output format. This way, we can ensure that the output adheres to a specific schema.
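Before reaching for structured outputs, it’s worth seeing what those inconsistencies look like in practice: replies can arrive with extra quotes, trailing punctuation, or different casing. A hypothetical normalizer (the `normalize_label` helper below is our own illustration, not part of the chapter’s code) is the usual string-matching workaround:

```python
def normalize_label(raw: str) -> str:
    """Map a free-text model reply onto one of the three expected labels."""
    text = raw.strip().strip('"').strip().rstrip(".").lower()
    if "index" in text or "reference" in text:
        return "Index/Reference Card"
    if "manuscript" in text:
        return "Manuscript Page"
    return "Other"

# Replies that differ only in wrapping or casing collapse to one label
print(normalize_label('"Index/Reference Card"'))  # → Index/Reference Card
print(normalize_label("manuscript page."))        # → Manuscript Page
```

String matching like this is brittle, which is exactly the motivation for constraining the output with a schema instead.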

In this example, we’ll define a Pydantic model for our classification task. The model will have a single field category which can take one of three literal values "Index/Reference Card", "Manuscript Page", or "other".

What this means in practice is that the model will only be able to return one of these three values for the category field.

-
-
from pydantic import BaseModel, Field
-from typing import Literal
-
-class PageCategory(BaseModel):
-    category: Literal["Index/Reference Card", "Manuscript Page", "other"] = Field(
-        ..., description="The category of the image"
-    )
+
+
from pydantic import BaseModel, Field
+from typing import Literal
+
+class PageCategory(BaseModel):
+    category: Literal["Index/Reference Card", "Manuscript Page", "other"] = Field(
+        ..., description="The category of the image"
+    )

When using the OpenAI client, we can pass this Pydantic model as the response_format when making the request. This tells the model to return output in a format that can be parsed into the Pydantic model (the APIs for this are still evolving, so they may change slightly over time).

-
-
buffered = BytesIO()
-image.save(buffered, format="JPEG")
-image_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
-completion = client.beta.chat.completions.parse(
-    model="qwen/qwen2.5-vl-7b",
-    messages=[
-         {
-            "role": "user",
-            "content": [
-                {
-                    "type": "text",
-                    "text": prompt,
-                },
-                {
-                    "type": "image_url",
-                    "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"},
-                },
-            ],
-        },
-    ],
-    max_tokens=100,
-    temperature=0.7,
-    response_format=PageCategory,
-)
-rprint(completion)
-rprint(completion.choices[0].message.parsed)
+
+
buffered = BytesIO()
+image.save(buffered, format="JPEG")
+image_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
+completion = client.beta.chat.completions.parse(
+    model="Qwen/Qwen3-VL-8B-Instruct",
+    messages=[
+         {
+            "role": "user",
+            "content": [
+                {
+                    "type": "text",
+                    "text": prompt,
+                },
+                {
+                    "type": "image_url",
+                    "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"},
+                },
+            ],
+        },
+    ],
+    max_tokens=200,
+    temperature=0.7,
+    response_format=PageCategory,
+)
+rprint(completion)
+rprint(completion.choices[0].message.parsed)
ParsedChatCompletion[PageCategory](
-    id='chatcmpl-v8bizojixwds0z7pg8j0th',
+    id='47c1a911111046291f6bbec65ebb1ead',
     choices=[
         ParsedChoice[PageCategory](
             finish_reason='stop',
             index=0,
             logprobs=None,
             message=ParsedChatCompletionMessage[PageCategory](
-                content='{"category": "Manuscript Page"}',
+                content='{\n  "category": "Manuscript Page"\n}',
                 refusal=None,
                 role='assistant',
                 annotations=None,
@@ -949,19 +587,34 @@ Projected time for full dataset: 455.91 minutes (7.60 hours)
            )
        )
    ],
-    created=1761588374,
-    model='qwen/qwen2.5-vl-7b',
+    created=1771255803,
+    model='qwen/qwen3-vl-8b-instruct',
     object='chat.completion',
     service_tier=None,
-    system_fingerprint='qwen/qwen2.5-vl-7b',
+    system_fingerprint='',
     usage=CompletionUsage(
-        completion_tokens=10,
-        prompt_tokens=142,
-        total_tokens=152,
-        completion_tokens_details=None,
-        prompt_tokens_details=None
-    ),
-    stats={}
+        completion_tokens=13,
+        prompt_tokens=966,
+        total_tokens=979,
+        completion_tokens_details=CompletionTokensDetails(
+            accepted_prediction_tokens=0,
+            audio_tokens=0,
+            reasoning_tokens=0,
+            rejected_prediction_tokens=0,
+            text_tokens=13,
+            image_tokens=0,
+            video_tokens=0
+        ),
+        prompt_tokens_details=PromptTokensDetails(
+            audio_tokens=0,
+            cached_tokens=0,
+            cache_creation_input_tokens=0,
+            cache_read_input_tokens=0,
+            text_tokens=228,
+            image_tokens=738,
+            video_tokens=0
+        )
+    )
)
@@ -970,12 +623,12 @@ Projected time for full dataset: 455.91 minutes (7.60 hours)
-
-
image
-
+
+
image
+
-

+

@@ -985,130 +638,130 @@ Projected time for full dataset: 455.91 minutes (7.60 hours)

3.7 Beyond Classification: Extracting Structured Information

So far we’ve focused on classifying images, but what if we want to extract information from them? Let’s take the first example from the dataset again.

-
-
index_image = ds[0]['image']
-index_image
-
+
+
index_image = ds[0]['image']
+index_image
+
-

+

With an image like this we don’t just want to assign a label (though we may do that as a first step); we want to extract the various fields from the card in a structured way. We can again use a Pydantic model to define the structure of the data we want to extract.

-
-
from pydantic import BaseModel, Field
-from typing import Optional
-
-
-class BritishLibraryReprographicCard(BaseModel):
-    """
-    Pydantic model for extracting information from British Library Reference Division 
-    reprographic cards used to document manuscripts and other materials.
-    """
-    
-    department: str = Field(
-        ..., 
-        description="The division that holds the material (e.g., 'MANUSCRIPTS')"
-    )
-    
-    shelfmark: str = Field(
-        ..., 
-        description="The library's classification/location code (e.g., 'SLOANE 3972.C. (VOL 1)')"
-    )
-    
-    order: str = Field(
-        ..., 
-        description="Order reference, typically starting with 'SCH NO' followed by numbers"
-    )
-    
-    author: Optional[str] = Field(
-        None, 
-        description="Author name if present, null if blank or marked with diagonal line"
-    )
-    
-    title: str = Field(
-        ..., 
-        description="The name of the work or manuscript"
-    )
-    
-    place_and_date_of_publication: Optional[str] = Field(
-        None, 
-        description="Place and date of publication if present, null if blank"
-    )
-    
-    reduction: int = Field(
-        ..., 
-        description="The reduction number shown at the bottom of the card"
-    )
+
+
from pydantic import BaseModel, Field
+from typing import Optional
+
+
+class BritishLibraryReprographicCard(BaseModel):
+    """
+    Pydantic model for extracting information from British Library Reference Division 
+    reprographic cards used to document manuscripts and other materials.
+    """
+    
+    department: str = Field(
+        ..., 
+        description="The division that holds the material (e.g., 'MANUSCRIPTS')"
+    )
+    
+    shelfmark: str = Field(
+        ..., 
+        description="The library's classification/location code (e.g., 'SLOANE 3972.C. (VOL 1)')"
+    )
+    
+    order: str = Field(
+        ..., 
+        description="Order reference, typically starting with 'SCH NO' followed by numbers"
+    )
+    
+    author: Optional[str] = Field(
+        None, 
+        description="Author name if present, null if blank or marked with diagonal line"
+    )
+    
+    title: str = Field(
+        ..., 
+        description="The name of the work or manuscript"
+    )
+    
+    place_and_date_of_publication: Optional[str] = Field(
+        None, 
+        description="Place and date of publication if present, null if blank"
+    )
+    
+    reduction: int = Field(
+        ..., 
+        description="The reduction number shown at the bottom of the card"
+    )

We’ll now create a function to handle the querying process using this structured schema.

-
-
def query_image_structured(image, prompt, schema, model='qwen3-vl-2b-instruct-mlx'):
-    """
-    Query VLM with an image and get structured output based on a Pydantic schema.
-    
-    Args:
-        image: PIL Image or file path to the image
-        prompt: Text prompt describing what to extract
-        schema: Pydantic model class defining the expected output structure
-        model: Model ID to use for the query
-    
-    Returns:
-        Parsed Pydantic model instance with the extracted data
-    """
-    # Convert image to base64
-    if isinstance(image, PILImage):
-        buffered = BytesIO()
-        image.save(buffered, format="JPEG")
-        image_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
-    else:
-        with open(image, "rb") as f:
-            image_base64 = base64.b64encode(f.read()).decode('utf-8')
-    
-    # Query with structured output
-    completion = client.beta.chat.completions.parse(
-        model=model,
-        messages=[{
-            "role": "user",
-            "content": [
-                {"type": "text", "text": prompt},
-                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}}
-            ]
-        }],
-        response_format=schema,
-        temperature=0.3  # Lower temperature for more consistent extraction
-    )
-    
-    # Return the parsed structured data
-    return completion.choices[0].message.parsed
+
+
def query_image_structured(image, prompt, schema, model='Qwen/Qwen3-VL-8B-Instruct'):
+    """
+    Query VLM with an image and get structured output based on a Pydantic schema.
+    
+    Args:
+        image: PIL Image or file path to the image
+        prompt: Text prompt describing what to extract
+        schema: Pydantic model class defining the expected output structure
+        model: Model ID to use for the query
+    
+    Returns:
+        Parsed Pydantic model instance with the extracted data
+    """
+    # Convert image to base64
+    if isinstance(image, PILImage):
+        buffered = BytesIO()
+        image.save(buffered, format="JPEG")
+        image_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
+    else:
+        with open(image, "rb") as f:
+            image_base64 = base64.b64encode(f.read()).decode('utf-8')
+    
+    # Query with structured output
+    completion = client.beta.chat.completions.parse(
+        model=model,
+        messages=[{
+            "role": "user",
+            "content": [
+                {"type": "text", "text": prompt},
+                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}}
+            ]
+        }],
+        response_format=schema,
+        temperature=0.3  # Lower temperature for more consistent extraction
+    )
+    
+    # Return the parsed structured data
+    return completion.choices[0].message.parsed

We also need to define a prompt that describes what information we want to extract from the card.

-
-
# Example usage
-extraction_prompt = """
-Extract the information from this British Library card into structured data (JSON format).
-
-Read each field on the card and extract the following information:
-- department: The division name (e.g., "MANUSCRIPTS")
-- shelfmark: The catalog number (e.g., "SLOANE 3972.C. (VOL 1)")
-- order: The SCH NO reference number
-- author: The author name, or null if blank
-- title: The full title of the work
-- place_and_date_of_publication: Publication info, or null if blank
-- reduction: The reduction number (as integer) at bottom of card
-
-Return the exact text as shown on the card. For empty fields with diagonal lines or no text, use null.
-"""
-result = query_image_structured(index_image, extraction_prompt, BritishLibraryReprographicCard)
-rprint(result)
+
+
# Example usage
+extraction_prompt = """
+Extract the information from this British Library card into structured data (JSON format).
+
+Read each field on the card and extract the following information:
+- department: The division name (e.g., "MANUSCRIPTS")
+- shelfmark: The catalog number (e.g., "SLOANE 3972.C. (VOL 1)")
+- order: The SCH NO reference number
+- author: The author name, or null if blank
+- title: The full title of the work
+- place_and_date_of_publication: Publication info, or null if blank
+- reduction: The reduction number (as integer) at bottom of card
+
+Return the exact text as shown on the card. For empty fields with diagonal lines or no text, use null.
+"""
+result = query_image_structured(index_image, extraction_prompt, BritishLibraryReprographicCard)
+rprint(result)
BritishLibraryReprographicCard(
     department='MANUSCRIPTS',
     shelfmark='SLOANE 3972.C. (VOL 1)',
-    order='98876',
-    author='HANS SLOANES',
+    order='SCH NO 98876',
+    author=None,
     title='CATALOGUE OF SIR HANS SLOANES LIBRARY',
     place_and_date_of_publication=None,
     reduction=12
@@ -1131,23 +784,76 @@ Projected time for full dataset: 455.91 minutes (7.60 hours)
-
-
index_image
-
+
+
index_image
+
-

+

+
+
+
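Once extraction works for a single card, the parsed results are easy to persist, since every Pydantic instance serializes to JSON. A minimal sketch (using a trimmed stand-in for the full BritishLibraryReprographicCard model, so it runs standalone):

```python
from typing import Optional
from pydantic import BaseModel

class Card(BaseModel):
    """Trimmed stand-in for BritishLibraryReprographicCard."""
    shelfmark: str
    author: Optional[str] = None
    reduction: int

card = Card(shelfmark="SLOANE 3972.C. (VOL 1)", reduction=12)

# One JSON object per card makes a convenient JSONL export for the whole dataset
line = card.model_dump_json()
print(line)
```

Round-tripping a line through model_validate_json also gives a cheap way to re-load and spot-check earlier extraction runs.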

3.8 Appendix: Using a Local Model

+

All the code in this chapter uses the OpenAI-compatible API, which means you can swap in a local model server with a single change to the client setup. Everything else — schemas, prompts, .parse() calls — works identically.

+
+
# Replace the HF Inference client with a local server
+from openai import OpenAI
+
+client = OpenAI(
+    base_url="http://localhost:1234/v1",  # LM Studio default port
+    api_key="lm-studio"                   # Default API key
+)
+
+

Popular local options:

Tool         Best for                        Notes
LM Studio    Getting started quickly         GUI-based, MLX acceleration on Mac, built-in model browser
Ollama       CLI workflows                   Simple `ollama run` commands, runs on port 11434
vLLM         Production & batch processing   GPU-optimized, highest throughput, best for large-scale extraction
+
+
+
+ +
+
+Note +
+
+
+

Smaller local models (2B-4B parameters) work well for simpler tasks like classification, but for accurate structured extraction you’ll generally want 8B+ parameter models. The trade-off is between running costs/speed (local, smaller models) and extraction quality (API or larger models).

+
+