Transformers documentation


This model was released on 2026-01-10 and added to Hugging Face Transformers on 2026-01-10.

GlmImage

Overview

GLM-Image is an image generation model that adopts a hybrid autoregressive + diffusion decoder architecture, pushing the upper bound of visual fidelity and fine-grained detail. In general image generation quality, it is on par with industry-standard LDM-based approaches, while demonstrating significant advantages in knowledge-intensive image generation scenarios.

Model architecture: a hybrid autoregressive + diffusion decoder design.

  • Autoregressive generator: a 9B-parameter model initialized from GLM-4-9B-0414, with an expanded vocabulary to incorporate visual tokens. The model first generates a compact encoding of approximately 256 tokens, then expands to 1K–4K tokens, corresponding to 1K–2K high-resolution image outputs (see the rough token-count sketch after this list).
  • Diffusion Decoder: a 7B-parameter decoder based on a single-stream DiT architecture for latent-space image decoding. It is equipped with a Glyph Encoder text module, significantly improving accurate text rendering within images.
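
As a rough illustration of that token budget, and assuming the 16-pixel patch size and spatial merge size of 1 from the vision configuration documented below, the number of vision tokens for a target resolution can be estimated as follows. The helper is purely illustrative and is not part of the library API.

def estimate_vision_tokens(height, width, patch_size=16, merge_size=1):
    # One token per (patch_size * merge_size) x (patch_size * merge_size) pixel block.
    block = patch_size * merge_size
    return (height // block) * (width // block)

# A 1152 x 768 target falls inside the 1K-4K token range quoted above:
print(estimate_vision_tokens(1152, 768))  # 3456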

Post-training with decoupled reinforcement learning: the model introduces a fine-grained, modular feedback strategy using the GRPO algorithm, substantially enhancing both semantic understanding and visual detail quality.

  • Autoregressive module: provides low-frequency feedback signals focused on aesthetics and semantic alignment, improving instruction following and artistic expressiveness.
  • Decoder module: delivers high-frequency feedback targeting detail fidelity and text accuracy, resulting in highly realistic textures, lighting, and color reproduction, as well as more precise text rendering.

GLM-Image supports both text-to-image and image-to-image generation within a single model:

  • Text-to-image: generates high-detail images from textual descriptions, with particularly strong performance in information-dense scenarios.

  • Image-to-image: supports a wide range of tasks, including image editing, style transfer, multi-subject consistency, and identity-preserving generation for people and objects.

  • GlmImageForConditionalGeneration is the autoregressive (AR) part of the GLM-Image model; for the full image generation pipeline, please refer to here.

This model was contributed by Raushan Turganbay and Yuxuan Zhang.

Usage examples

Using GLM-Image, optionally with image inputs, to generate the vision tokens consumed by the DiT decoder:

from transformers import GlmImageForConditionalGeneration, AutoProcessor
import torch

model = GlmImageForConditionalGeneration.from_pretrained(
    pretrained_model_name_or_path="zai-org/GLM-Image/vision_language_encoder",
    dtype=torch.bfloat16,
    device_map="cuda:0"
)
processor = AutoProcessor.from_pretrained(
    pretrained_model_name_or_path="zai-org/GLM-Image/processor",
    use_fast=True
)

# Case 1: text-to-image (the Chinese prompt below describes a magazine-style raspberry mousse cake tutorial infographic)
prompt = "现代美食杂志风格的甜点制作教程图,主题为覆盆子慕斯蛋糕。整体布局干净明亮,分为四个主要区域:顶部左侧是黑色粗体标题“覆盆子慕斯蛋糕制作指南”,右侧搭配光线柔和的成品蛋糕特写照片,蛋糕呈淡粉色,表面点缀新鲜覆盆子与薄荷叶;左下方为配料清单区域,标题“配料”使用简洁字体,下方列有“面粉 150g”“鸡蛋 3个”“细砂糖 120g”“覆盆子果泥 200g”“明胶片 10g”“淡奶油 300ml”“新鲜覆盆子”等配料,每种配料旁配有简约线图标(如面粉袋、鸡蛋、糖罐等);右下方是四个等大的步骤方框,每个方框内含高清微距实拍图及对应操作说明,从上到下依次为:步骤1展示打蛋器打发白色泡沫(对应说明“打发蛋白至干性发泡”),步骤2展示红白相间的混合物被刮刀翻拌(对应说明“轻柔翻拌果泥与面糊”),步骤3展示粉色液体被倒入圆形模具(对应说明“倒入模具并冷藏4小时”),步骤4展示成品蛋糕表面装饰覆盆子与薄荷叶(对应说明“用覆盆子和薄荷装饰”);底部边缘设浅棕色信息条,左侧图标分别代表“准备时间:30分钟”“烹饪时间:20分钟”“份量:8人份”。整体色调以奶油白、淡粉色为主,背景带轻微纸质纹理,图文排版紧凑有序,信息层级分明。"
target_h, target_w = 1152, 768
use_reference_images = False
reference_image_paths = None

## Case 2: image editing with a single reference image
# prompt = "Replace the background of the snow forest with an underground station featuring an automatic escalator."
# cond_0 = "cond.jpg"
# target_h, target_w = 1152, 768
# use_reference_images = True
# reference_image_paths = [cond_0]

## Case 3: generation with multiple reference images
# prompt = "Make the man in the first figure and the child from the second image bow at the same time in a respectful KTV."
# cond_0 = "cond_0.jpg"
# cond_1 = "cond_1.jpg"
# target_h, target_w = 1152, 768
# use_reference_images = True
# reference_image_paths = [cond_0, cond_1]


def build_messages(prompt, use_reference_images, reference_image_paths):
    content = []
    if use_reference_images:
        for img_path in reference_image_paths:
            content.append({"type": "image", "url": img_path})
    content.append({"type": "text", "text": prompt})
    return [{"role": "user", "content": content}]


def compute_generation_params(image_grid_thw, use_reference_images):
    # Each grid entry is (t, h, w); h * w is the number of vision tokens for that image.
    grid_sizes = []
    for i in range(image_grid_thw.shape[0]):
        t, h, w = image_grid_thw[i].tolist()
        grid_sizes.append(int(h * w))

    # The first grid entry corresponds to the target image to be generated.
    target_output_length = grid_sizes[0]

    if use_reference_images:
        # Reference-image tokens are already part of the prompt, so only the target
        # image (the last grid entry) is generated.
        max_new_tokens = grid_sizes[-1] + 1
        output_start_offset = 0
        output_length = grid_sizes[-1]
    else:
        # Text-to-image: all grids are generated, with the target tokens emitted last,
        # so skip the earlier tokens when extracting the target image.
        total_tokens = sum(grid_sizes)
        max_new_tokens = total_tokens + 1
        output_start_offset = sum(grid_sizes[1:])
        output_length = target_output_length

    return max_new_tokens, output_start_offset, output_length


messages = build_messages(prompt, use_reference_images, reference_image_paths if use_reference_images else None)

inputs = processor.apply_chat_template(
    messages,
    target_h=target_h,
    target_w=target_w,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

image_grid_thw = inputs.get('image_grid_thw')
print(f"image_grid_thw: {image_grid_thw}")

max_new_tokens, output_start_offset, output_length = compute_generation_params(
    image_grid_thw, use_reference_images
)

print(f"use_reference_images: {use_reference_images}")
print(f"max_new_tokens: {max_new_tokens}")
print(f"output_start_offset: {output_start_offset}")
print(f"output_length: {output_length}")

outputs = model.generate(
    **inputs,
    max_new_tokens=max_new_tokens,
    do_sample=True
)

input_length = inputs["input_ids"].shape[-1]
output_tokens = outputs[0][input_length:][output_start_offset:output_start_offset + output_length]
print(f"Input length: {input_length}")
print(f"Total generated tokens: {outputs[0].shape[-1] - input_length}")
print(f"Extracted output tokens shape: {output_tokens.shape}")
print(f"Output tokens: {output_tokens}")

GlmImageConfig

class transformers.GlmImageConfig

< >

( text_config = None vision_config = None vq_config = None image_token_id = 167855 image_start_token_id = 16384 image_end_token_id = 16385 **kwargs )

Parameters

  • text_config (Union[PreTrainedConfig, dict], optional, defaults to GlmImageTextConfig) — The config object or dictionary of the text backbone.
  • vision_config (Union[PreTrainedConfig, dict], optional, defaults to GlmImageVisionConfig) — The config object or dictionary of the vision backbone.
  • vq_config (Union[Dict, GlmImageVQVAEConfig], optional) — GlmImageVQVAEConfig instance containing the configuration for the VQ-VAE model.
  • image_token_id (int, optional, defaults to 167855) — The image token index to encode the image prompt.
  • image_start_token_id (int, optional, defaults to 16384) — The image start token index to encode the start of image.
  • image_end_token_id (int, optional, defaults to 16385) — The image end token index to encode the end of image.

This is the configuration class to store the configuration of a GlmImageModel. It is used to instantiate a GLM-Image model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of GLM-Image zai-org/GLM-Image architecture.

Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.

>>> from transformers import GlmImageForConditionalGeneration, GlmImageConfig

>>> # Initializing a GLM-Image style configuration
>>> configuration = GlmImageConfig()

>>> # Initializing a model from the GLM-Image style configuration
>>> model = GlmImageForConditionalGeneration(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
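
The composite configuration can also be assembled from its sub-configurations; a minimal sketch, assuming the sub-configuration classes documented on this page are exported from transformers:

>>> from transformers import GlmImageConfig, GlmImageTextConfig, GlmImageVisionConfig, GlmImageVQVAEConfig

>>> configuration = GlmImageConfig(
...     text_config=GlmImageTextConfig(),
...     vision_config=GlmImageVisionConfig(),
...     vq_config=GlmImageVQVAEConfig(),
... )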

GlmImageVisionConfig

class transformers.GlmImageVisionConfig

< >

( depth = 40 hidden_size = 1536 hidden_act = 'gelu' attention_bias = True attention_dropout = 0.0 num_heads = 16 in_channels = 3 image_size = 2048 patch_size = 16 layer_norm_eps = 1e-06 spatial_merge_size = 1 intermediate_size = 6144 initializer_range = 0.02 **kwargs )

Parameters

  • depth (int, optional, defaults to 40) — Number of layers (depth) in the model.
  • hidden_size (int, optional, defaults to 1536) — Dimensionality of the encoder layers and the pooler layer.
  • hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
  • attention_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.
  • attention_dropout (float, optional, defaults to 0.0) — Dropout probability for attention weights.
  • num_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer architecture.
  • in_channels (int, optional, defaults to 3) — Number of input channels.
  • image_size (int or list[int], optional, defaults to 2048) — The size (resolution) of each image.
  • patch_size (int, optional, defaults to 16) — The size (resolution) of each patch.
  • layer_norm_eps (float, optional, defaults to 1e-06) — The epsilon used by the layer normalization layers.
  • spatial_merge_size (int, optional, defaults to 1) — The size used for merging spatial dimensions.
  • intermediate_size (int, optional, defaults to 6144) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

This is the configuration class to store the configuration of a GlmImageVisionModel. It is used to instantiate an GlmImageVisionModel model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of GLM-Image zai-org/GLM-Image.
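
A minimal usage sketch, mirroring the other configuration examples on this page and assuming GlmImageVisionConfig and GlmImageVisionModel are importable from transformers as documented here:

>>> from transformers import GlmImageVisionConfig, GlmImageVisionModel

>>> # Initializing a GlmImageVisionConfig with default values
>>> configuration = GlmImageVisionConfig()

>>> # Initializing a vision encoder (with random weights) from the configuration
>>> model = GlmImageVisionModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config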

GlmImageTextConfig

class transformers.GlmImageTextConfig

< >

( vocab_size: int | None = 168064 hidden_size: int | None = 4096 intermediate_size: int | None = 13696 num_hidden_layers: int | None = 40 num_attention_heads: int | None = 32 num_key_value_heads: int | None = 2 hidden_act: str | None = 'silu' max_position_embeddings: int | None = 32768 initializer_range: float | None = 0.02 rms_norm_eps: int | None = 1e-05 use_cache: bool | None = True tie_word_embeddings: bool | None = False attention_dropout: float | None = 0.0 rope_parameters: transformers.modeling_rope_utils.RopeParameters | dict[str, transformers.modeling_rope_utils.RopeParameters] | None = None vision_vocab_size: int | None = 16512 attention_bias: bool | None = True **kwargs )

Parameters

  • vocab_size (int, optional, defaults to 168064) — Vocabulary size of the GlmImage model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GlmImageModel
  • hidden_size (int, optional, defaults to 4096) — Dimension of the hidden representations.
  • intermediate_size (int, optional, defaults to 13696) — Dimension of the MLP representations.
  • num_hidden_layers (int, optional, defaults to 40) — Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, optional, defaults to 32) — Number of attention heads for each attention layer in the Transformer encoder.
  • num_key_value_heads (int, optional, defaults to 2) — This is the number of key_value heads that should be used to implement Grouped Query Attention. If num_key_value_heads=num_attention_heads, the model will use Multi Head Attention (MHA), if num_key_value_heads=1 the model will use Multi Query Attention (MQA), otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details check out this paper. If it is not specified, will default to 2.
  • hidden_act (str or function, optional, defaults to "silu") — The non-linear activation function (function or string) in the decoder.
  • max_position_embeddings (int, optional, defaults to 32768) — The maximum sequence length that this model might ever be used with.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • rms_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the rms normalization layers.
  • use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
  • tie_word_embeddings (bool, optional, defaults to False) — Whether the model’s input and output word embeddings should be tied.
  • attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • rope_parameters (RopeParameters, optional) — Dictionary containing the configuration parameters for the RoPE embeddings. The dictionary should contain a value for rope_theta and optionally parameters used for scaling in case you want to use RoPE with longer max_position_embeddings.
  • vision_vocab_size (int, optional, defaults to 16512) — Vision vocabulary size of the GlmImage model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GlmImageVisionModel
  • attention_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.

This is the configuration class to store the configuration of a GlmImageTextModel. It is used to instantiate a GLM-Image model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of GLM-Image zai-org/GLM-Image.

Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.

>>> from transformers import GlmImageTextModel, GlmImageConfig

>>> # Initializing a GlmImageConfig style configuration
>>> configuration = GlmImageConfig()

>>> # Initializing a model from the GlmImageConfig style configuration
>>> model = GlmImageTextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

GlmImageVQVAEConfig

class transformers.GlmImageVQVAEConfig

< >

( embed_dim: int = 2048 num_embeddings: int = 16384 latent_channels: int = 1536 in_channels: int = 3 initializer_range = 0.02 **kwargs )

Parameters

  • embed_dim (int, optional, defaults to 2048) — Dimensionality of each embedding vector.
  • num_embeddings (int, optional, defaults to 16384) — Number of codebook embeddings.
  • latent_channels (int, optional, defaults to 1536) — Number of channels for the latent space.
  • in_channels (int, optional, defaults to 3) — Number of input channels.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

This is the configuration class to store the configuration of a GlmImageVQModel. It is used to instantiate a GlmImageVQModel according to the specified arguments, defining the model architecture. Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information. Instantiating a configuration with the defaults will yield a similar configuration to the VQModel of the zai-org/GLM-Image architecture.
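
A minimal usage sketch, in the same style as the other configuration examples on this page (assuming both classes are importable from transformers as documented here):

>>> from transformers import GlmImageVQVAEConfig, GlmImageVQVAE

>>> # Initializing a GlmImageVQVAEConfig with default values
>>> configuration = GlmImageVQVAEConfig()

>>> # Initializing a VQ-VAE model (with random weights) from the configuration
>>> model = GlmImageVQVAE(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config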

GlmImageImageProcessor

class transformers.GlmImageImageProcessor

< >

( do_resize: bool = True size: dict[str, int] | None = None resample: Resampling = <Resampling.BICUBIC: 3> do_rescale: bool = True rescale_factor: int | float = 0.00392156862745098 do_normalize: bool = True image_mean: float | list[float] | None = None image_std: float | list[float] | None = None do_convert_rgb: bool = True min_pixels: int | None = None max_pixels: int | None = None patch_size: int = 14 temporal_patch_size: int = 2 merge_size: int = 2 **kwargs )

Parameters

  • do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions.
  • size (dict[str, int], optional, defaults to {"shortest_edge" -- 56 * 56, "longest_edge": 28 * 28 * 1280}): Size of the image after resizing. shortest_edge and longest_edge keys must be present.
  • resample (PILImageResampling, optional, defaults to Resampling.BICUBIC) — Resampling filter to use when resizing the image.
  • do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor.
  • rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image.
  • do_normalize (bool, optional, defaults to True) — Whether to normalize the image.
  • image_mean (float or list[float], optional, defaults to [0.48145466, 0.4578275, 0.40821073]) — Mean to use if normalizing the image. This is a float or list of floats for each channel in the image.
  • image_std (float or list[float], optional, defaults to [0.26862954, 0.26130258, 0.27577711]) — Standard deviation to use if normalizing the image. This is a float or list of floats for each channel in the image.
  • do_convert_rgb (bool, optional, defaults to True) — Whether to convert the image to RGB.
  • min_pixels (int, optional, defaults to 56 * 56) — The min pixels of the image to resize the image.
  • max_pixels (int, optional, defaults to 28 * 28 * 1280) — The max pixels of the image to resize the image.
  • patch_size (int, optional, defaults to 14) — The spatial patch size of the vision encoder.
  • temporal_patch_size (int, optional, defaults to 2) — The temporal patch size of the vision encoder.
  • merge_size (int, optional, defaults to 2) — The merge size of the vision encoder to llm encoder.

Constructs a GLM-Image image processor that dynamically resizes images based on the original images.

preprocess

< >

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] do_resize: bool | None = None size: dict[str, int] | None = None min_pixels: int | None = None max_pixels: int | None = None resample: PIL.Image.Resampling | None = None do_rescale: bool | None = None rescale_factor: float | None = None do_normalize: bool | None = None image_mean: float | list[float] | None = None image_std: float | list[float] | None = None patch_size: int | None = None temporal_patch_size: int | None = None merge_size: int | None = None do_convert_rgb: bool | None = None return_tensors: str | transformers.utils.generic.TensorType | None = None data_format: transformers.image_utils.ChannelDimension | None = <ChannelDimension.FIRST: 'channels_first'> input_data_format: str | transformers.image_utils.ChannelDimension | None = None )

Parameters

  • images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
  • do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image.
  • size (dict[str, int], optional, defaults to self.size) — Size of the image after resizing. Shortest edge of the image is resized to size[“shortest_edge”], with the longest edge resized to keep the input aspect ratio.
  • resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.
  • do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image.
  • rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True.
  • do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image.
  • image_mean (float or list[float], optional, defaults to self.image_mean) — Image mean to use for normalization. Only has an effect if do_normalize is set to True.
  • image_std (float or list[float], optional, defaults to self.image_std) — Image standard deviation to use for normalization. Only has an effect if do_normalize is set to True.
  • min_pixels (int, optional, defaults to self.min_pixels) — The min pixels of the image to resize the image.
  • max_pixels (int, optional, defaults to self.max_pixels) — The max pixels of the image to resize the image.
  • patch_size (int, optional, defaults to self.patch_size) — The spatial patch size of the vision encoder.
  • temporal_patch_size (int, optional, defaults to self.temporal_patch_size) — The temporal patch size of the vision encoder.
  • merge_size (int, optional, defaults to self.merge_size) — The merge size of the vision encoder to llm encoder.
  • do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) — Whether to convert the image to RGB.
  • return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of:
    • Unset: Return a list of np.ndarray.
    • TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
    • TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
  • data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:
    • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
    • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
    • Unset: Use the channel dimension format of the input image.
  • input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
    • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
    • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
    • "none" or ChannelDimension.NONE: image in (height, width) format.

GlmImageImageProcessorFast

class transformers.GlmImageImageProcessorFast

< >

( **kwargs: typing_extensions.Unpack[transformers.models.glm_image.image_processing_glm_image.GlmImageImageProcessorKwargs] )

Constructs a fast Glm Image image processor.

preprocess

< >

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] **kwargs: typing_extensions.Unpack[transformers.models.glm_image.image_processing_glm_image.GlmImageImageProcessorKwargs] ) <class 'transformers.feature_extraction_utils.BatchFeature'>

Parameters

  • images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
  • do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) — Whether to convert the image to RGB.
  • do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image.
  • size (dict[str, int], optional) — Describes the maximum input dimensions to the model.
  • crop_size (dict[str, int], optional) — Size of the output image after applying center_crop.
  • resample (PILImageResampling or int, optional) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.
  • do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image.
  • rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True.
  • do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image.
  • image_mean (float or list[float], optional, defaults to self.image_mean) — Image mean to use for normalization. Only has an effect if do_normalize is set to True.
  • image_std (float or list[float], optional, defaults to self.image_std) — Image standard deviation to use for normalization. Only has an effect if do_normalize is set to True.
  • do_pad (bool, optional, defaults to self.do_pad) — Whether to pad the image. Padding is done either to the largest size in the batch or to a fixed square size per image. The exact padding strategy depends on the model.
  • pad_size (dict[str, int], optional) — The size in {"height": int, "width": int} to pad the images to. Must be larger than any image size provided for preprocessing. If pad_size is not provided, images will be padded to the largest height and width in the batch. Applied only when do_pad=True.
  • do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image.
  • data_format (ChannelDimension or str, optional, defaults to self.data_format) — Only ChannelDimension.FIRST is supported. Added for compatibility with slow processors.
  • input_data_format (ChannelDimension or str, optional, defaults to self.input_data_format) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
    • "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
    • "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
    • "none" or ChannelDimension.NONE: image in (height, width) format.
  • device (str or torch.device, optional) — The device to process the images on. If unset, the device is inferred from the input images.
  • return_tensors (str or TensorType, optional) — Returns stacked tensors if set to "pt", otherwise returns a list of tensors.
  • disable_grouping (bool, optional, defaults to self.disable_grouping) — Whether to disable grouping of images by size to process them individually and not in batches. If None, will be set to True if the images are on CPU, and False otherwise. This choice is based on empirical observations, as detailed here: https://github.com/huggingface/transformers/pull/38157
  • image_seq_length (int, optional, defaults to self.image_seq_length) — The number of image tokens to be used for each image in the input. Added for backward compatibility but this should be set as a processor attribute in future models.
  • min_pixels (int, optional, defaults to 56 * 56) — The min pixels of the image to resize the image.
  • max_pixels (int, optional, defaults to 28 * 28 * 1280) — The max pixels of the image to resize the image.
  • patch_size (int, optional, defaults to 14) — The spatial patch size of the vision encoder.
  • temporal_patch_size (int, optional, defaults to 2) — The temporal patch size of the vision encoder.
  • merge_size (int, optional, defaults to 2) — The merge size of the vision encoder to llm encoder.

Returns

<class 'transformers.feature_extraction_utils.BatchFeature'>

  • data (dict, optional) — Dictionary of lists/arrays/tensors returned by the call/pad methods (‘input_values’, ‘attention_mask’, etc.).
  • tensor_type (Union[None, str, TensorType], optional) — You can give a tensor_type here to convert the lists of integers in PyTorch/Numpy Tensors at initialization.
  • skip_tensor_conversion (list[str] or set[str], optional) — List or set of keys that should NOT be converted to tensors, even when tensor_type is specified.
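
As a small sketch of the GPU path, the fast processor accepts the device argument documented above; everything else mirrors the slow processor (construction with default arguments is assumed here):

>>> import numpy as np
>>> import torch
>>> from PIL import Image
>>> from transformers import GlmImageImageProcessorFast

>>> image_processor = GlmImageImageProcessorFast()
>>> image = Image.fromarray(np.zeros((448, 448, 3), dtype=np.uint8))

>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> outputs = image_processor(images=image, device=device, return_tensors="pt")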

GlmImageProcessor

class transformers.GlmImageProcessor

< >

( image_processor = None tokenizer = None chat_template = None **kwargs )

Parameters

  • image_processor (GlmImageImageProcessor, optional) — The image processor is a required input.
  • tokenizer (PreTrainedTokenizerFast, optional) — The tokenizer is a required input.
  • chat_template (str, optional) — A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.

Constructs a GLM-Image processor which wraps a GLM-Image image processor and a GLM-Image tokenizer into a single processor. See __call__() and decode() for more information.

GlmImageVisionModel

class transformers.GlmImageVisionModel

< >

( config: GlmImageVisionConfig )

forward

< >

( pixel_values: Tensor grid_thw: Tensor **kwargs ) torch.Tensor of shape (total_patches, hidden_size)

Parameters

  • pixel_values (torch.Tensor of shape (total_patches, num_channels * patch_size * patch_size)) — Packed pixel values.
  • grid_thw (torch.Tensor of shape (num_images, 3)) — The temporal, height and width of feature shape of each image.

Returns

torch.Tensor of shape (total_patches, hidden_size)

Hidden states.
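
A dummy-input sketch of the call pattern, following the packed shapes documented above; the tiny 4 x 4 grid and random pixel values are illustrative only:

>>> import torch
>>> from transformers import GlmImageVisionConfig, GlmImageVisionModel

>>> config = GlmImageVisionConfig()
>>> model = GlmImageVisionModel(config)

>>> # One image whose feature grid is 4 x 4 patches (t=1, h=4, w=4).
>>> grid_thw = torch.tensor([[1, 4, 4]])
>>> total_patches = int(grid_thw.prod(dim=-1).sum())
>>> patch_dim = config.in_channels * config.patch_size**2  # matches the shape documented above

>>> pixel_values = torch.randn(total_patches, patch_dim)
>>> hidden_states = model(pixel_values=pixel_values, grid_thw=grid_thw)
>>> print(hidden_states.shape)  # (total_patches, hidden_size)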

GlmImageTextModel

class transformers.GlmImageTextModel

< >

( config: GlmImageTextConfig )

Parameters

  • config (GlmImageTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Glm Image Text Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< >

( input_ids: torch.LongTensor | None = None attention_mask: torch.Tensor | None = None position_ids: torch.LongTensor | None = None past_key_values: transformers.cache_utils.Cache | None = None inputs_embeds: torch.FloatTensor | None = None use_cache: bool | None = None cache_position: torch.LongTensor | None = None **kwargs: typing_extensions.Unpack[transformers.modeling_flash_attention_utils.FlashAttentionKwargs] ) transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.

Returns

transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (None) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

    If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The GlmImageTextModel forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
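
A minimal forward-pass sketch with a deliberately tiny configuration; the small hyperparameters below are chosen only to keep the example light and are not the released model's values:

>>> import torch
>>> from transformers import GlmImageTextConfig, GlmImageTextModel

>>> config = GlmImageTextConfig(
...     vocab_size=1024,
...     hidden_size=64,
...     intermediate_size=128,
...     num_hidden_layers=2,
...     num_attention_heads=4,
...     num_key_value_heads=2,
... )
>>> model = GlmImageTextModel(config)

>>> input_ids = torch.randint(0, config.vocab_size, (1, 16))
>>> outputs = model(input_ids=input_ids)
>>> print(outputs.last_hidden_state.shape)  # (1, 16, 64)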

GlmImageVQVAE

class transformers.GlmImageVQVAE

< >

( config: GlmImageVQVAEConfig )

Parameters

  • config (GlmImageVQVAEConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The VQ-VAE model used in GlmImage for encoding/decoding images into discrete tokens. This model follows the “Make-a-scene: Scene-based text-to-image generation with human priors” paper from Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

_forward_unimplemented

< >

( *input: typing.Any )

Define the computation performed at every call.

Should be overridden by all subclasses.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

GlmImageModel

class transformers.GlmImageModel

< >

( config )

Parameters

  • config (GlmImageConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Glm Image Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< >

( input_ids: torch.LongTensor | None = None attention_mask: torch.Tensor | None = None position_ids: torch.LongTensor | None = None past_key_values: transformers.cache_utils.Cache | None = None inputs_embeds: torch.FloatTensor | None = None pixel_values: torch.Tensor | None = None image_grid_thw: torch.LongTensor | None = None rope_deltas: torch.LongTensor | None = None cache_position: torch.LongTensor | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) transformers.models.glm_image.modeling_glm_image.GlmImageModelOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • pixel_values (torch.Tensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using image_processor_class. See image_processor_class.__call__ for details (processor_class uses image_processor_class for processing images).
  • image_grid_thw (torch.LongTensor of shape (num_images, 3), optional) — The temporal, height and width of feature shape of each image in LLM.
  • rope_deltas (torch.LongTensor of shape (batch_size, ), optional) — The rope index difference between sequence length and multimodal rope.
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.

Returns

transformers.models.glm_image.modeling_glm_image.GlmImageModelOutputWithPast or tuple(torch.FloatTensor)

A transformers.models.glm_image.modeling_glm_image.GlmImageModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (None) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the model.

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

  • rope_deltas (torch.LongTensor of shape (batch_size, ), optional) — The rope index difference between sequence length and multimodal rope.

The GlmImageModel forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

GlmImageForConditionalGeneration

class transformers.GlmImageForConditionalGeneration

< >

( config )

forward

< >

( input_ids: torch.LongTensor | None = None attention_mask: torch.Tensor | None = None position_ids: torch.LongTensor | None = None past_key_values: transformers.cache_utils.Cache | None = None inputs_embeds: torch.FloatTensor | None = None labels: torch.LongTensor | None = None pixel_values: torch.Tensor | None = None image_grid_thw: torch.LongTensor | None = None cache_position: torch.LongTensor | None = None logits_to_keep: int | torch.Tensor = 0 **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] )

Parameters

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
  • image_grid_thw (torch.LongTensor of shape (num_images, 3), optional) — The temporal, height and width of feature shape of each image in LLM.

Example:

>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, GlmImageForConditionalGeneration

>>> model = GlmImageForConditionalGeneration.from_pretrained("zai-org/GLM-Image")
>>> processor = AutoProcessor.from_pretrained("zai-org/GLM-Image")

>>> messages = [
...     {
...         "role": "user",
...         "content": [
...             {"type": "image"},
...             {"type": "text", "text": "Add a truck to this photo.<sop>28 40<eop>"},
...         ],
...     },
... ]
>>> url = "https://www.ilankelman.org/stopsigns/australia.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
>>> inputs = processor(text=[text], images=[image], return_tensors="pt")

>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"The image shows a street scene with a red stop sign in the foreground. In the background, there is a large red gate with Chinese characters ..."