[
{
"objectID": "index.html",
"href": "index.html",
"title": "AI Design Patterns for GLAM",
"section": "",
"text": "Welcome",
"crumbs": [
"Welcome"
]
},
{
"objectID": "index.html#about-this-book",
"href": "index.html#about-this-book",
"title": "AI Design Patterns for GLAM",
"section": "About This Book",
"text": "About This Book\n\n\n\n\n\n\nNote\n\n\n\nThis is a work in progress. Content will be added and updated over time. This is an early draft!\n\n\nThis book presents practical AI design patterns for Galleries, Libraries, Archives, and Museums (GLAM). Rather than focusing on specific technologies that will inevitably change, we document patterns which aim to capture reusable solutions to common challenges faced when implementing AI in GLAM contexts.\n\nGoal of this book\nThere are many exciting possibilities for applying AI in GLAM contexts, but also significant challenges. This book aims to provide a practical guide to help practitioners navigate this rapidly evolving landscape.\nWhilst there is a lot of hype around the adoption of AI in the GLAM sector there isn’t always clear guidance on how to approach projects in a structured way. This book aims to help fill that gap by providing a set of patterns that can be adapted and applied to a wide range of use cases.",
"crumbs": [
"Welcome"
]
},
{
"objectID": "index.html#background-to-this-work",
"href": "index.html#background-to-this-work",
"title": "AI Design Patterns for GLAM",
"section": "Background to this work",
"text": "Background to this work\nThis documentation emerged from work with the National Library of Scotland, but the patterns and approaches apply broadly across the GLAM sector.\n\nAbout the Author\nDaniel van Strien",
"crumbs": [
"Welcome"
]
},
{
"objectID": "patterns/what-is-an-ai-pattern.html",
"href": "patterns/what-is-an-ai-pattern.html",
"title": "1  What is an AI Pattern?",
"section": "",
"text": "1.1 Why Patterns?\nA pattern is a reusable solution to a commonly occurring problem. The concept comes from architecture—Christopher Alexander’s work on design patterns—and was later adopted by software engineering. In this book, we apply the same idea to AI implementations in GLAM contexts.\nAI and machine learning are evolving rapidly. The models, APIs, and frameworks we use today will be superseded—often within months. But the underlying problems—extracting structured data from historical documents, assessing condition at scale, making collections discoverable—persist.\nPatterns help us in three ways:\nThey’re technology-agnostic. A pattern describes what problem you’re solving and why an approach works, not just which model to use. When better models emerge, the pattern still applies.\nThey’re communicable. Patterns give teams a shared vocabulary. Saying “we’re using a structured extraction pattern” conveys more than listing the specific models and APIs involved.\nThey’re adaptable. The same pattern can be implemented differently depending on your constraints—budget, infrastructure, staff expertise, risk tolerance.",
"crumbs": [
"Design Patterns",
"<span class='chapter-number'>1</span>  <span class='chapter-title'>What is an AI Pattern?</span>"
]
},
{
"objectID": "patterns/what-is-an-ai-pattern.html#anatomy-of-a-pattern",
"href": "patterns/what-is-an-ai-pattern.html#anatomy-of-a-pattern",
"title": "1  What is an AI Pattern?",
"section": "1.2 Anatomy of a Pattern",
"text": "1.2 Anatomy of a Pattern\nEach pattern in this book follows a consistent structure:\nThe Challenge What recurring problem does this pattern address? What makes it difficult or impossible to solve with traditional approaches?\nSolution Overview The high-level approach. What makes this work? What are the key components?\nImplementation Technical walkthrough with working code. We use real examples from GLAM collections, not toy datasets.\nConsiderations When should you use this pattern? What are the tradeoffs? What might go wrong?",
"crumbs": [
"Design Patterns",
"<span class='chapter-number'>1</span>  <span class='chapter-title'>What is an AI Pattern?</span>"
]
},
{
"objectID": "patterns/what-is-an-ai-pattern.html#patterns-in-this-book",
"href": "patterns/what-is-an-ai-pattern.html#patterns-in-this-book",
"title": "1  What is an AI Pattern?",
"section": "1.3 Patterns in This Book",
"text": "1.3 Patterns in This Book\nThis book currently covers:\n\nStructured Information Extraction — Using Vision Language Models to extract structured metadata from document images (index cards, forms, registers)\n\nAdditional patterns will be added as the book develops.",
"crumbs": [
"Design Patterns",
"<span class='chapter-number'>1</span>  <span class='chapter-title'>What is an AI Pattern?</span>"
]
},
{
"objectID": "patterns/structured-generation/intro.html",
"href": "patterns/structured-generation/intro.html",
"title": "2  Structured Document Processing",
"section": "",
"text": "2.1 The Challenge\nMany GLAM institutions have vast collections of structured documents—index cards, forms, registers—containing valuable information locked in physical or image formats. Manual transcription doesn’t scale, but the structured nature of these documents makes them ideal candidates for AI-powered processing.\nUnlocking this data means better discovery, new research possibilities, and integration with modern cataloguing systems.",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>2</span>  <span class='chapter-title'>Structured Document Processing</span>"
]
},
{
"objectID": "patterns/structured-generation/intro.html#dont-we-just-need-ocr",
"href": "patterns/structured-generation/intro.html#dont-we-just-need-ocr",
"title": "2  Structured Document Processing",
"section": "2.2 Don’t we just need OCR?",
"text": "2.2 Don’t we just need OCR?\nTraditional OCR extracts text from images, but that’s only half the problem. Consider an index card with a name, date, reference number, and description arranged in specific positions. OCR gives you a block of text—but not which part is the name, which is the date, or how they relate.\nOften, you don’t even need the raw text—you need the information it contains. A catalogue record doesn’t need “Mr. John Smith, 1847” preserved exactly; it needs name: \"John Smith\" and year: 1847 as usable data.\nWith OCR alone, you still need someone to parse text into structured fields. For hundreds of documents, that’s manageable. For hundreds of thousands, it’s not.",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>2</span>  <span class='chapter-title'>Structured Document Processing</span>"
]
},
{
"objectID": "patterns/structured-generation/intro.html#solution-overview",
"href": "patterns/structured-generation/intro.html#solution-overview",
"title": "2  Structured Document Processing",
"section": "2.3 Solution Overview",
"text": "2.3 Solution Overview\nStructured extraction is a pattern that works across modalities—text, images, audio transcripts. The core idea is the same: constrain a model to return data in a predefined schema rather than freeform text.\nFor document images, we use Vision Language Models (VLMs). Unlike OCR, VLMs understand both visual layout and textual content together. They can see that “1847” appears in the date field position, not just that the characters “1847” exist somewhere on the page.\nStructured output generation constrains the model to return your fields, your format. The result: input in, structured JSON out.\nThis section focuses on the image case—extracting from document images—but the same principles apply when working with text or other formats.\n\n2.3.1 What this pattern looks like\n\n\n\n\n\nflowchart LR\n A[Document Image] --&gt; B[VLM + Schema]\n B --&gt; C[Structured JSON]\n C --&gt; D[Catalogue/Database]\n\n\n\n\n\n\nThe following chapters walk through this in detail—starting with basic VLM queries, then building to real extraction workflows with evaluation strategies.",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>2</span>  <span class='chapter-title'>Structured Document Processing</span>"
]
},
{
"objectID": "patterns/structured-generation/intro.html#when-to-use-this-pattern",
"href": "patterns/structured-generation/intro.html#when-to-use-this-pattern",
"title": "2  Structured Document Processing",
"section": "2.4 When to Use This Pattern",
"text": "2.4 When to Use This Pattern\nGood fit:\n\nForms, index cards, registers with consistent layouts\nDocuments where you know what fields you want to extract\nCollections too large for manual transcription\n\nLess suited:\n\nFree-form manuscripts with no predictable structure\nDocuments requiring deep contextual interpretation\nCases where verbatim transcription is the goal (use OCR instead)",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>2</span>  <span class='chapter-title'>Structured Document Processing</span>"
]
},
{
"objectID": "patterns/structured-generation/vlm-structured-generation.html",
"href": "patterns/structured-generation/vlm-structured-generation.html",
"title": "3  Structured Information Extraction with Vision Language Models",
"section": "",
    "text": "3.1 Introduction\nIn this chapter we’ll look at how we can use Vision Language Models (VLMs) to extract structured information from images of documents.\nWe already saw what this looks like at a conceptual level in the previous chapter. Here we’ll get hands-on with some code examples to illustrate how this can be done in practice. To start we’ll focus on some relatively simple documents and tasks. This allows us to focus on the core concepts without getting bogged down in too many complexities. We’ll use open source models accessed via the Hugging Face Inference API (you can also run them locally — see the appendix).",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>3</span>  <span class='chapter-title'>Structured Information Extraction with Vision Language Models</span>"
]
},
{
"objectID": "patterns/structured-generation/vlm-structured-generation.html#the-sloane-index-cards-dataset",
"href": "patterns/structured-generation/vlm-structured-generation.html#the-sloane-index-cards-dataset",
"title": "3  Structured Information Extraction with Vision Language Models",
"section": "3.2 The Sloane Index Cards Dataset",
    "text": "3.2 The Sloane Index Cards Dataset\nWe’ll use the Sloane Index Cards Dataset from Hugging Face for our examples. This is a publicly available dataset that is well suited to demonstrating structured information extraction with VLMs.\n\nThe files in this dataset are derived from microfilm copies of the original library catalogue of Sir Hans Sloane, now presented across 9 volumes, Sloane MS 3972 C 1-8, and the name index to the Sloane library catalogue, Sloane MS 3972 D. The catalogues are crucial for understanding the development of Sloane’s collections, the present-day collections of the British Library, British Museum and Natural History Museum, and to identifying collection items which are now dispersed across a number of institutions.\n\nThe dataset is available in Parquet format on Hugging Face, so it can be easily loaded using the datasets library.\nLet’s load the dataset and take a look at one row.\n\nfrom datasets import load_dataset\n\nds = load_dataset(\"biglam/sloane-catalogues\", split=\"train\")\nds[0]\n\n{'image': &lt;PIL.JpegImagePlugin.JpegImageFile image mode=L size=3144x2267&gt;,\n 'filename': 'sloane_ms_3972_c!1_001.jpg',\n 'collection': 'sloane_ms_3972_c!1_jpegs',\n 'page_number': 1,\n 'page_index_in_directory': 0,\n 'source': 'British Library Sloane Manuscripts'}\n\n\nWe can see that we have a dictionary that contains an image as well as some additional metadata fields.\nLet’s take a look at an actual example image from the dataset.\n\nds[0][\"image\"]\n\n\n\n\n\n\n\n\nLet’s look at a couple more examples to get a sense of the variety in the dataset.\n\nds[2][\"image\"]\n\n\n\n\n\n\n\n\nAnd one more from later in the dataset:\n\nds[50][\"image\"]\n\n\n\n\n\n\n\n\nWe can see we have a mixture of different types of digitised content here, including index cards from the original microfilm as well as the actual handwritten manuscript pages from Sloane’s collection.\nWe’ll look at how we can use VLMs to work with this kind of collection.",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>3</span>  <span class='chapter-title'>Structured Information Extraction with Vision Language Models</span>"
]
},
{
"objectID": "patterns/structured-generation/vlm-structured-generation.html#setup",
"href": "patterns/structured-generation/vlm-structured-generation.html#setup",
"title": "3  Structured Information Extraction with Vision Language Models",
"section": "3.3 Setup",
"text": "3.3 Setup\n\n3.3.1 Connecting to a Vision Language Model\nWe’ll use Hugging Face Inference Providers to access VLMs via an API. This means we don’t need to install or run any models locally — we just need a free Hugging Face account and an API token.\nSince the Hugging Face Inference API is compatible with the OpenAI Python client, all the code in this chapter will also work with local model servers (like LM Studio, Ollama, or vLLM) with just a one-line change to the client setup. See the appendix at the end of this chapter for details.\n\n\n\n\n\n\nTipGetting a Hugging Face Token\n\n\n\n\nCreate a free account at huggingface.co\nGo to Settings → Access Tokens\nCreate a new token with Read access\nSet it as an environment variable: export HF_TOKEN=hf_... or add it to a .env file\n\n\n\n\nimport os\nfrom openai import OpenAI\nfrom rich import print as rprint\nfrom dotenv import load_dotenv\nload_dotenv()\n\nclient = OpenAI(\n base_url=\"https://router.huggingface.co/v1\",\n api_key=os.environ.get(\"HF_TOKEN\"),\n)\n\nWe’ll use Qwen/Qwen3-VL-8B-Instruct throughout this chapter — an 8 billion parameter vision-language model that offers a good balance of quality and speed for document understanding tasks.",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>3</span>  <span class='chapter-title'>Structured Information Extraction with Vision Language Models</span>"
]
},
{
"objectID": "patterns/structured-generation/vlm-structured-generation.html#basic-vlm-query",
"href": "patterns/structured-generation/vlm-structured-generation.html#basic-vlm-query",
"title": "3  Structured Information Extraction with Vision Language Models",
"section": "3.4 Basic VLM Query",
    "text": "3.4 Basic VLM Query\nLet’s start by defining a simple function that we can use to query a VLM with an image and a prompt. This function will handle converting the image to base64 and sending the request to the model.\nWe’ll default to using the Qwen/Qwen3-VL-8B-Instruct model via HF Inference Providers. You could experiment with different models later on — the code works with any OpenAI-compatible VLM endpoint.\n\n\nCode\nimport base64\nfrom PIL.Image import Image as PILImage\nfrom io import BytesIO\n\ndef query_image(image: str | PILImage, prompt: str, model: str='Qwen/Qwen3-VL-8B-Instruct', max_image_size: int=1024, client=client) -&gt; str:\n \"\"\"Query VLM with an image.\"\"\"\n if isinstance(image, PILImage):\n # Convert PIL Image to bytes and encode to base64\n buffered = BytesIO()\n # Downscale if either dimension exceeds max_image_size\n if max(image.size) &gt; max_image_size:\n image.thumbnail((max_image_size, max_image_size))\n image.save(buffered, format=\"JPEG\")\n image_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')\n else:\n # Assume image is a file path\n with open(image, \"rb\") as f:\n image_base64 = base64.b64encode(f.read()).decode('utf-8')\n\n # Send the request to the model\n response = client.chat.completions.create(\n model=model,\n messages=[{\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"text\", \"text\": prompt},\n {\"type\": \"image_url\", \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_base64}\"}}\n ]\n }]\n )\n return response.choices[0].message.content",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>3</span>  <span class='chapter-title'>Structured Information Extraction with Vision Language Models</span>"
]
},
{
"objectID": "patterns/structured-generation/vlm-structured-generation.html#simple-vlm-query-example",
"href": "patterns/structured-generation/vlm-structured-generation.html#simple-vlm-query-example",
"title": "3  Structured Information Extraction with Vision Language Models",
    "section": "3.5 Simple VLM Query Example",
    "text": "3.5 Simple VLM Query Example\nTo get started let’s do a simple query to describe an image from the dataset.\n\nimage = ds[0][\"image\"]\n\n# Query the VLM to describe the image\ndescription = query_image(image, \"Describe this image.\", model='Qwen/Qwen3-VL-8B-Instruct')\nrprint(description)\n\nThis is a black-and-white image of a form from The British Library's Reprographic Section, used to request a copy \nof a manuscript or archival material.\n\nHere is a breakdown of the information on the form:\n\n* **Institution:** The British Library, Reference Division, Reprographic Section.\n* **Address:** Great Russell Street, London WC1B 3DG.\n* **Department:** Manuscripts.\n* **Shelfmark:** SLOANE 3972.C. (Vol. 1)\n* **Order Number:** SCH NO 98876\n* **Author:** This field is blank.\n* **Title:** CATALOGUE OF SIR HANS SLOANES LIBRARY\n* **Place and date of publication:** This field is blank.\n* **Scale/Reduction:** The form indicates a reduction of 12, meaning the reproduction will be scaled down to \n1/12th of the original size. A ruler scale is provided for reference in centimetres and inches.\n\nThe form appears to be filled out by hand for a specific item: Volume 1 of the \"Catalogue of Sir Hans Sloane's \nLibrary,\" which is held in the Manuscripts department under the shelfmark SLOANE 3972.C. This catalogue was \ncompiled by Sir Hans Sloane himself and is a significant historical document detailing his extensive collection, \nwhich later formed part of the foundation of the British Museum and now resides at the British Library.\n\n\n\nWe can see we get a fairly useful description of the card. If we compare against the image we can see most of the details it mentions appear to be largely correct.\n\nimage\n\n\n\n\n\n\n\n\nThere are workflows where an open-ended description like this could be useful, but it usually isn’t the format we want if we need to take some action based on the model’s predictions. In these cases it’s usually better to have more controlled output, for example, a label.",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>3</span>  <span class='chapter-title'>Structured Information Extraction with Vision Language Models</span>"
]
},
{
"objectID": "patterns/structured-generation/vlm-structured-generation.html#classification",
"href": "patterns/structured-generation/vlm-structured-generation.html#classification",
"title": "3  Structured Information Extraction with Vision Language Models",
"section": "3.6 Classification",
    "text": "3.6 Classification\n\nWe’ll define a fairly simple prompt that asks the VLM to decide whether a page belongs to one of three categories. We describe each of these categories and then ask the model to return only one of them as the output. We’ll do this for ten examples, and we’ll also log how long it takes.\n\nimport time\nfrom tqdm.auto import tqdm\nfrom rich import print as rprint\n\nsample_size = 10\n\nsample = ds.take(sample_size)\n\nprompt = \"\"\"Classify this image into one of the following categories:\n\n1. **Index/Reference Card**: A library catalog or reference card\n\n2. **Manuscript Page**: A handwritten or historical document page\n\n3. **Other**: Any document that doesn't fit the above categories\n\nExamine the overall structure, layout, and content type to determine the classification. Focus on whether the document is a structured catalog/reference tool (Index Card) or a historical manuscript with continuous text (Manuscript Page).\n\nReturn only the category name: \"Index/Reference Card\", \"Manuscript Page\", or \"Other\"\n\"\"\"\n\nresults = []\n# Time the execution using standard Python\nstart_time = time.time()\nfor row in tqdm(sample):\n image = row['image']\n results.append(query_image(image, prompt, model='Qwen/Qwen3-VL-8B-Instruct'))\nelapsed_time = time.time() - start_time\nprint(f\"Execution time: {elapsed_time:.2f} seconds\")\nrprint(results)\n\n\n\n\nExecution time: 18.31 seconds\n\n\n[\n 'Index/Reference Card',\n 'Other',\n 'Manuscript Page',\n 'Manuscript Page',\n 'Manuscript Page',\n 'Manuscript Page',\n 'Manuscript Page',\n 'Manuscript Page',\n 'Manuscript Page',\n 'Manuscript Page'\n]\n\n\n\nLet’s check the result that was predicted as “Index/Reference Card”.\n\nsample[0]['image']\n\n\n\n\n\n\n\n\nWe can extrapolate how long this would take for the full dataset.\n\n# Calculate average time per image\navg_time_per_image = elapsed_time / sample_size\n\n# Project time for full dataset\ntotal_images = len(ds)\nprojected_time = avg_time_per_image * total_images\n\nprint(f\"Sample processing time: {elapsed_time:.2f} seconds ({elapsed_time/60:.2f} minutes)\")\nprint(f\"Average time per image: {avg_time_per_image:.2f} seconds\")\nprint(f\"Total images in dataset: {total_images}\")\nprint(f\"Projected time for full dataset: {projected_time/60:.2f} minutes ({projected_time/3600:.2f} hours)\")\n\nSample processing time: 18.31 seconds (0.31 minutes)\nAverage time per image: 1.83 seconds\nTotal images in dataset: 2734\nProjected time for full dataset: 83.45 minutes (1.39 hours)\n\n\n\n3.6.1 Classifying with structured labels\nIn the previous example, we relied on the model to return the label in the correct format. While this often works, it can sometimes lead to inconsistencies in the output. To address this, we can use Pydantic models to define a structured output format. This way, we can ensure that the output adheres to a specific schema.\nIn this example, we’ll define a Pydantic model for our classification task. The model will have a single field category which can take one of three literal values: \"Index/Reference Card\", \"Manuscript Page\", or \"Other\".\nWhat this means in practice is that the model will only be able to return one of these three values for the category field.\n\nfrom pydantic import BaseModel, Field\nfrom typing import Literal\n\nclass PageCategory(BaseModel):\n category: Literal[\"Index/Reference Card\", \"Manuscript Page\", \"Other\"] = Field(\n ..., description=\"The category of the image\"\n )\n\nWhen using the OpenAI client we can specify this Pydantic model as the response_format when making the request. This tells the model to return the output in a format that can be parsed into the Pydantic model (the APIs for this are still evolving, so they may change slightly over time).\n\nbuffered = BytesIO()\nimage.save(buffered, format=\"JPEG\")\nimage_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')\ncompletion = client.beta.chat.completions.parse(\n model=\"Qwen/Qwen3-VL-8B-Instruct\",\n messages=[\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"text\",\n \"text\": prompt,\n },\n {\n \"type\": \"image_url\",\n \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_base64}\"},\n },\n ],\n },\n ],\n max_tokens=200,\n temperature=0.7,\n response_format=PageCategory,\n)\nrprint(completion)\nrprint(completion.choices[0].message.parsed)\n\nParsedChatCompletion[PageCategory](\n id='47c1a911111046291f6bbec65ebb1ead',\n choices=[\n ParsedChoice[PageCategory](\n finish_reason='stop',\n index=0,\n logprobs=None,\n message=ParsedChatCompletionMessage[PageCategory](\n content='{\\n \"category\": \"Manuscript Page\"\\n}',\n refusal=None,\n role='assistant',\n annotations=None,\n audio=None,\n function_call=None,\n tool_calls=None,\n parsed=PageCategory(category='Manuscript Page')\n )\n )\n ],\n created=1771255803,\n model='qwen/qwen3-vl-8b-instruct',\n object='chat.completion',\n service_tier=None,\n system_fingerprint='',\n usage=CompletionUsage(\n completion_tokens=13,\n prompt_tokens=966,\n total_tokens=979,\n completion_tokens_details=CompletionTokensDetails(\n accepted_prediction_tokens=0,\n audio_tokens=0,\n reasoning_tokens=0,\n rejected_prediction_tokens=0,\n text_tokens=13,\n image_tokens=0,\n video_tokens=0\n ),\n prompt_tokens_details=PromptTokensDetails(\n audio_tokens=0,\n cached_tokens=0,\n cache_creation_input_tokens=0,\n cache_read_input_tokens=0,\n text_tokens=228,\n image_tokens=738,\n video_tokens=0\n )\n )\n)\n\n\n\nPageCategory(category='Manuscript Page')\n\n\n\n\nimage",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>3</span>  <span class='chapter-title'>Structured Information Extraction with Vision Language Models</span>"
]
},
{
"objectID": "patterns/structured-generation/vlm-structured-generation.html#beyond-classifying---extracting-structured-information",
"href": "patterns/structured-generation/vlm-structured-generation.html#beyond-classifying---extracting-structured-information",
"title": "3  Structured Information Extraction with Vision Language Models",
"section": "3.7 Beyond classifying - Extracting structured information",
    "text": "3.7 Beyond classifying - Extracting structured information\nSo far we’ve focused on classifying images, but what if we want to extract information from them? Let’s take the first example from the dataset again.\n\nindex_image = ds[0]['image']\nindex_image\n\n\n\n\n\n\n\n\nIf we have an image like this we don’t just want to assign a label to it (though we may do that as a first step); we actually want to extract the various fields from the card in a structured way. We can again use a Pydantic model to define the structure of the data we want to extract.\n\nfrom pydantic import BaseModel, Field\nfrom typing import Optional\n\n\nclass BritishLibraryReprographicCard(BaseModel):\n \"\"\"\n Pydantic model for extracting information from British Library Reference Division \n reprographic cards used to document manuscripts and other materials.\n \"\"\"\n \n department: str = Field(\n ..., \n description=\"The division that holds the material (e.g., 'MANUSCRIPTS')\"\n )\n \n shelfmark: str = Field(\n ..., \n description=\"The library's classification/location code (e.g., 'SLOANE 3972.C. (VOL 1)')\"\n )\n \n order: str = Field(\n ..., \n description=\"Order reference, typically starting with 'SCH NO' followed by numbers\"\n )\n \n author: Optional[str] = Field(\n None, \n description=\"Author name if present, null if blank or marked with diagonal line\"\n )\n \n title: str = Field(\n ..., \n description=\"The name of the work or manuscript\"\n )\n \n place_and_date_of_publication: Optional[str] = Field(\n None, \n description=\"Place and date of publication if present, null if blank\"\n )\n \n reduction: int = Field(\n ..., \n description=\"The reduction number shown at the bottom of the card\"\n )\n\nWe’ll now create a function to handle the querying process using this structured schema.\n\ndef query_image_structured(image, prompt, schema, model='Qwen/Qwen3-VL-8B-Instruct'):\n \"\"\"\n Query VLM with an image and get structured output based on a Pydantic schema.\n \n Args:\n image: PIL Image or file path to the image\n prompt: Text prompt describing what to extract\n schema: Pydantic model class defining the expected output structure\n model: Model ID to use for the query\n \n Returns:\n Parsed Pydantic model instance with the extracted data\n \"\"\"\n # Convert image to base64\n if isinstance(image, PILImage):\n buffered = BytesIO()\n image.save(buffered, format=\"JPEG\")\n image_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')\n else:\n with open(image, \"rb\") as f:\n image_base64 = base64.b64encode(f.read()).decode('utf-8')\n \n # Query with structured output\n completion = client.beta.chat.completions.parse(\n model=model,\n messages=[{\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"text\", \"text\": prompt},\n {\"type\": \"image_url\", \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_base64}\"}}\n ]\n }],\n response_format=schema,\n temperature=0.3 # Lower temperature for more consistent extraction\n )\n \n # Return the parsed structured data\n return completion.choices[0].message.parsed\n\nWe also need to define a prompt that describes what information we want to extract from the card.\n\n# Example usage\nextraction_prompt = \"\"\"\nExtract the information from this British Library card into structured data (JSON format).\n\nRead each field on the card and extract the following information:\n- department: The division name (e.g., \"MANUSCRIPTS\")\n- shelfmark: The catalog number (e.g., \"SLOANE 3972.C. (VOL 1)\")\n- order: The SCH NO reference number\n- author: The author name, or null if blank\n- title: The full title of the work\n- place_and_date_of_publication: Publication info, or null if blank\n- reduction: The reduction number (as integer) at bottom of card\n\nReturn the exact text as shown on the card. For empty fields with diagonal lines or no text, use null.\n\"\"\"\nresult = query_image_structured(index_image, extraction_prompt, BritishLibraryReprographicCard)\nrprint(result)\n\nBritishLibraryReprographicCard(\n department='MANUSCRIPTS',\n shelfmark='SLOANE 3972.C. (VOL 1)',\n order='SCH NO 98876',\n author=None,\n title='CATALOGUE OF SIR HANS SLOANES LIBRARY',\n place_and_date_of_publication=None,\n reduction=12\n)\n\n\n\n\nindex_image",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>3</span>  <span class='chapter-title'>Structured Information Extraction with Vision Language Models</span>"
]
},
{
"objectID": "patterns/structured-generation/vlm-structured-generation.html#appendix-using-a-local-model",
"href": "patterns/structured-generation/vlm-structured-generation.html#appendix-using-a-local-model",
"title": "3  Structured Information Extraction with Vision Language Models",
"section": "3.8 Appendix: Using a Local Model",
"text": "3.8 Appendix: Using a Local Model\nAll the code in this chapter uses the OpenAI-compatible API, which means you can swap in a local model server with a single change to the client setup. Everything else — schemas, prompts, .parse() calls — works identically.\n\n# Replace the HF Inference client with a local server\nfrom openai import OpenAI\n\nclient = OpenAI(\n base_url=\"http://localhost:1234/v1\", # LM Studio default port\n api_key=\"lm-studio\" # Default API key\n)\n\nPopular local options:\n\n\n\nTool\nBest for\nNotes\n\n\n\n\nLM Studio\nGetting started quickly\nGUI-based, MLX acceleration on Mac, built-in model browser\n\n\nOllama\nCLI workflows\nSimple ollama run commands, runs on port 11434\n\n\nvLLM\nProduction & batch processing\nGPU-optimized, highest throughput, best for large-scale extraction\n\n\n\n\n\n\n\n\n\nNote\n\n\n\nSmaller local models (2B-4B parameters) work well for simpler tasks like classification, but for accurate structured extraction you’ll generally want 8B+ parameter models. The trade-off is between running costs/speed (local, smaller models) and extraction quality (API or larger models).",
"crumbs": [
"Structured Information Extraction",
"<span class='chapter-number'>3</span>  <span class='chapter-title'>Structured Information Extraction with Vision Language Models</span>"
]
}
]