keras-team/keras
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/distributed_training_with_jax.py
guides/distributed_training_with_jax.py
""" Title: Multi-GPU distributed training with JAX Author: [fchollet](https://twitter.com/fchollet) Date created: 2023/07/11 Last modified: 2023/07/11 Description: Guide to multi-GPU/TPU training for Keras models with JAX. Accelerator: GPU """ """ ## Introduction There are generally two ways to distribute computation across multiple devices: **Data parallelism**, where a single model gets replicated on multiple devices or multiple machines. Each of them processes different batches of data, then they merge their results. There exist many variants of this setup, that differ in how the different model replicas merge results, in whether they stay in sync at every batch or whether they are more loosely coupled, etc. **Model parallelism**, where different parts of a single model run on different devices, processing a single batch of data together. This works best with models that have a naturally-parallel architecture, such as models that feature multiple branches. This guide focuses on data parallelism, in particular **synchronous data parallelism**, where the different replicas of the model stay in sync after each batch they process. Synchronicity keeps the model convergence behavior identical to what you would see for single-device training. Specifically, this guide teaches you how to use `jax.sharding` APIs to train Keras models, with minimal changes to your code, on multiple GPUs or TPUS (typically 2 to 16) installed on a single machine (single host, multi-device training). This is the most common setup for researchers and small-scale industry workflows. """ """ ## Setup Let's start by defining the function that creates the model that we will train, and the function that creates the dataset we will train on (MNIST in this case). 
""" import os os.environ["KERAS_BACKEND"] = "jax" import jax import numpy as np import tensorflow as tf import keras from jax.experimental import mesh_utils from jax.sharding import Mesh from jax.sharding import NamedSharding from jax.sharding import PartitionSpec as P def get_model(): # Make a simple convnet with batch normalization and dropout. inputs = keras.Input(shape=(28, 28, 1)) x = keras.layers.Rescaling(1.0 / 255.0)(inputs) x = keras.layers.Conv2D( filters=12, kernel_size=3, padding="same", use_bias=False )(x) x = keras.layers.BatchNormalization(scale=False, center=True)(x) x = keras.layers.ReLU()(x) x = keras.layers.Conv2D( filters=24, kernel_size=6, use_bias=False, strides=2, )(x) x = keras.layers.BatchNormalization(scale=False, center=True)(x) x = keras.layers.ReLU()(x) x = keras.layers.Conv2D( filters=32, kernel_size=6, padding="same", strides=2, name="large_k", )(x) x = keras.layers.BatchNormalization(scale=False, center=True)(x) x = keras.layers.ReLU()(x) x = keras.layers.GlobalAveragePooling2D()(x) x = keras.layers.Dense(256, activation="relu")(x) x = keras.layers.Dropout(0.5)(x) outputs = keras.layers.Dense(10)(x) model = keras.Model(inputs, outputs) return model def get_datasets(): # Load the data and split it between train and test sets (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Scale images to the [0, 1] range x_train = x_train.astype("float32") x_test = x_test.astype("float32") # Make sure images have shape (28, 28, 1) x_train = np.expand_dims(x_train, -1) x_test = np.expand_dims(x_test, -1) print("x_train shape:", x_train.shape) print(x_train.shape[0], "train samples") print(x_test.shape[0], "test samples") # Create TF Datasets train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train)) eval_data = tf.data.Dataset.from_tensor_slices((x_test, y_test)) return train_data, eval_data """ ## Single-host, multi-device synchronous training In this setup, you have one machine with several GPUs or TPUs on it 
(typically 2 to 16). Each device will run a copy of your model (called a
**replica**). For simplicity, in what follows, we'll assume we're dealing with 8
GPUs, at no loss of generality.

**How it works**

At each step of training:

- The current batch of data (called **global batch**) is split into 8 different
sub-batches (called **local batches**). For instance, if the global batch has 512
samples, each of the 8 local batches will have 64 samples.
- Each of the 8 replicas independently processes a local batch: they run a forward
pass, then a backward pass, outputting the gradient of the weights with respect to
the loss of the model on the local batch.
- The weight updates originating from local gradients are efficiently merged across
the 8 replicas. Because this is done at the end of every step, the replicas always
stay in sync.

In practice, the process of synchronously updating the weights of the model
replicas is handled at the level of each individual weight variable. This is done
using a `jax.sharding.NamedSharding` that is configured to replicate the variables.

**How to use it**

To do single-host, multi-device synchronous training with a Keras model, you would
use the `jax.sharding` features. Here's how it works:

- We first create a device mesh using `mesh_utils.create_device_mesh`.
- We use `jax.sharding.Mesh`, `jax.sharding.NamedSharding` and
`jax.sharding.PartitionSpec` to define how to partition JAX arrays.
- We specify that we want to replicate the model and optimizer variables across all
devices by using a spec with no axis.
- We specify that we want to shard the data across devices by using a spec that
splits along the batch dimension.
- We use `jax.device_put` to replicate the model and optimizer variables across
devices. This happens once at the beginning.
- In the training loop, for each batch that we process, we use `jax.device_put` to
split the batch across devices before invoking the train step.
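Conceptually, the global-to-local batch split described under "How it works" can be
sketched in plain Python (a toy illustration only; in the real setup,
`jax.device_put` with a batch-sharded `NamedSharding` performs this split across
devices for you):

```python
# Toy sketch of splitting a global batch across 8 replicas.
num_replicas = 8
global_batch = list(range(512))  # 512 samples

local_batch_size = len(global_batch) // num_replicas  # 64 samples per replica
local_batches = [
    global_batch[i * local_batch_size : (i + 1) * local_batch_size]
    for i in range(num_replicas)
]

print(len(local_batches), "local batches of", local_batch_size, "samples each")
```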
Here's the flow, where each step is split into its own utility function:
"""

# Config
num_epochs = 2
batch_size = 64

train_data, eval_data = get_datasets()
train_data = train_data.batch(batch_size, drop_remainder=True)

model = get_model()
optimizer = keras.optimizers.Adam(1e-3)
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Initialize all state with .build()
(one_batch, one_batch_labels) = next(iter(train_data))
model.build(one_batch)
optimizer.build(model.trainable_variables)


# This is the loss function that will be differentiated.
# Keras provides a pure functional forward pass: model.stateless_call
def compute_loss(trainable_variables, non_trainable_variables, x, y):
    y_pred, updated_non_trainable_variables = model.stateless_call(
        trainable_variables, non_trainable_variables, x
    )
    loss_value = loss(y, y_pred)
    return loss_value, updated_non_trainable_variables


# Function to compute gradients
compute_gradients = jax.value_and_grad(compute_loss, has_aux=True)


# Training step, Keras provides a pure functional optimizer.stateless_apply
@jax.jit
def train_step(train_state, x, y):
    (
        trainable_variables,
        non_trainable_variables,
        optimizer_variables,
    ) = train_state
    (loss_value, non_trainable_variables), grads = compute_gradients(
        trainable_variables, non_trainable_variables, x, y
    )
    trainable_variables, optimizer_variables = optimizer.stateless_apply(
        optimizer_variables, grads, trainable_variables
    )
    return loss_value, (
        trainable_variables,
        non_trainable_variables,
        optimizer_variables,
    )


# Replicate the model and optimizer variables on all devices
def get_replicated_train_state(devices):
    # All variables will be replicated on all devices
    var_mesh = Mesh(devices, axis_names=("_"))
    # In NamedSharding, axes not mentioned are replicated (all axes here)
    var_replication = NamedSharding(var_mesh, P())

    # Apply the distribution settings to the model variables
    trainable_variables = jax.device_put(
        model.trainable_variables, var_replication
    )
    non_trainable_variables = jax.device_put(
        model.non_trainable_variables, var_replication
    )
    optimizer_variables = jax.device_put(optimizer.variables, var_replication)
    # Combine all state in a tuple
    return (trainable_variables, non_trainable_variables, optimizer_variables)


num_devices = len(jax.local_devices())
print(f"Running on {num_devices} devices: {jax.local_devices()}")
devices = mesh_utils.create_device_mesh((num_devices,))

# Data will be split along the batch axis
data_mesh = Mesh(devices, axis_names=("batch",))  # naming axes of the mesh
data_sharding = NamedSharding(
    data_mesh,
    P(
        "batch",
    ),
)  # naming axes of the sharded partition

# Display data sharding
x, y = next(iter(train_data))
sharded_x = jax.device_put(x.numpy(), data_sharding)
print("Data sharding")
jax.debug.visualize_array_sharding(jax.numpy.reshape(sharded_x, [-1, 28 * 28]))

train_state = get_replicated_train_state(devices)

# Custom training loop
for epoch in range(num_epochs):
    data_iter = iter(train_data)
    loss_value = None  # default
    for data in data_iter:
        x, y = data
        sharded_x = jax.device_put(x.numpy(), data_sharding)
        loss_value, train_state = train_step(train_state, sharded_x, y.numpy())
    print("Epoch", epoch, "loss:", loss_value)

# Post-processing model state update to write them back into the model
trainable_variables, non_trainable_variables, optimizer_variables = train_state
for variable, value in zip(model.trainable_variables, trainable_variables):
    variable.assign(value)
for variable, value in zip(
    model.non_trainable_variables, non_trainable_variables
):
    variable.assign(value)

"""
That's it!
"""
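"""
Note that the custom loop above threads an explicit state tuple through
`train_step` instead of mutating variables in place. Stripped of JAX and Keras, the
pattern looks like this (a toy, hypothetical `train_step` with a made-up quadratic
loss, shown purely to illustrate the stateless-update style):

```python
# Toy illustration of the stateless-update pattern: the step function takes the
# full training state and returns a new state; nothing is mutated in place.
def train_step(state, batch):
    params, step_count = state
    grad = 2.0 * params  # hypothetical gradient of a quadratic loss
    new_params = params - 0.1 * grad  # plain SGD update
    return (new_params, step_count + 1)

state = (1.0, 0)
for _ in range(3):
    state = train_step(state, batch=None)

print(state)  # params shrink toward 0; step_count is 3
```
"""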
language: python
license: Apache-2.0
commit_sha: c67eddb4ff8b615886893ca996dc216bc923d598
retrieved_at: 2026-01-04T14:38:29.819962Z
truncated: false
keras-team/keras
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/transfer_learning.py
guides/transfer_learning.py
""" Title: Transfer learning & fine-tuning Author: [fchollet](https://twitter.com/fchollet) Date created: 2020/04/15 Last modified: 2023/06/25 Description: Complete guide to transfer learning & fine-tuning in Keras. Accelerator: GPU """ """ ## Setup """ import numpy as np import keras from keras import layers import tensorflow_datasets as tfds import matplotlib.pyplot as plt """ ## Introduction **Transfer learning** consists of taking features learned on one problem, and leveraging them on a new, similar problem. For instance, features from a model that has learned to identify raccoons may be useful to kick-start a model meant to identify tanukis. Transfer learning is usually done for tasks where your dataset has too little data to train a full-scale model from scratch. The most common incarnation of transfer learning in the context of deep learning is the following workflow: 1. Take layers from a previously trained model. 2. Freeze them, so as to avoid destroying any of the information they contain during future training rounds. 3. Add some new, trainable layers on top of the frozen layers. They will learn to turn the old features into predictions on a new dataset. 4. Train the new layers on your dataset. A last, optional step, is **fine-tuning**, which consists of unfreezing the entire model you obtained above (or part of it), and re-training it on the new data with a very low learning rate. This can potentially achieve meaningful improvements, by incrementally adapting the pretrained features to the new data. First, we will go over the Keras `trainable` API in detail, which underlies most transfer learning & fine-tuning workflows. Then, we'll demonstrate the typical workflow by taking a model pretrained on the ImageNet dataset, and retraining it on the Kaggle "cats vs dogs" classification dataset. 
This is adapted from
[Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python)
and the 2016 blog post
["building powerful image classification models using very little data"](
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html).
"""

"""
## Freezing layers: understanding the `trainable` attribute

Layers & models have three weight attributes:

- `weights` is the list of all weights variables of the layer.
- `trainable_weights` is the list of those that are meant to be updated (via
gradient descent) to minimize the loss during training.
- `non_trainable_weights` is the list of those that aren't meant to be trained.
Typically they are updated by the model during the forward pass.

**Example: the `Dense` layer has 2 trainable weights (kernel & bias)**
"""

layer = keras.layers.Dense(3)
layer.build((None, 4))  # Create the weights

print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))

"""
In general, all weights are trainable weights. The only built-in layer that has
non-trainable weights is the `BatchNormalization` layer. It uses non-trainable
weights to keep track of the mean and variance of its inputs during training. To
learn how to use non-trainable weights in your own custom layers, see the
[guide to writing new layers from scratch](https://keras.io/guides/making_new_layers_and_models_via_subclassing/).

**Example: the `BatchNormalization` layer has 2 trainable weights and 2
non-trainable weights**
"""

layer = keras.layers.BatchNormalization()
layer.build((None, 4))  # Create the weights

print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))

"""
Layers & models also feature a boolean attribute `trainable`. Its value can be
changed.
Setting `layer.trainable` to `False` moves all the layer's weights from trainable
to non-trainable. This is called "freezing" the layer: the state of a frozen layer
won't be updated during training (either when training with `fit()` or when
training with any custom loop that relies on `trainable_weights` to apply gradient
updates).

**Example: setting `trainable` to `False`**
"""

layer = keras.layers.Dense(3)
layer.build((None, 4))  # Create the weights
layer.trainable = False  # Freeze the layer

print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))

"""
When a trainable weight becomes non-trainable, its value is no longer updated
during training.
"""

# Make a model with 2 layers
layer1 = keras.layers.Dense(3, activation="relu")
layer2 = keras.layers.Dense(3, activation="sigmoid")
model = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2])

# Freeze the first layer
layer1.trainable = False

# Keep a copy of the weights of layer1 for later reference
initial_layer1_weights_values = layer1.get_weights()

# Train the model
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))

# Check that the weights of layer1 have not changed during training
final_layer1_weights_values = layer1.get_weights()
np.testing.assert_allclose(
    initial_layer1_weights_values[0], final_layer1_weights_values[0]
)
np.testing.assert_allclose(
    initial_layer1_weights_values[1], final_layer1_weights_values[1]
)

"""
Do not confuse the `layer.trainable` attribute with the argument `training` in
`layer.__call__()` (which controls whether the layer should run its forward pass in
inference mode or training mode). For more information, see the
[Keras FAQ](
https://keras.io/getting_started/faq/#whats-the-difference-between-the-training-argument-in-call-and-the-trainable-attribute).
""" """ ## Recursive setting of the `trainable` attribute If you set `trainable = False` on a model or on any layer that has sublayers, all children layers become non-trainable as well. **Example:** """ inner_model = keras.Sequential( [ keras.Input(shape=(3,)), keras.layers.Dense(3, activation="relu"), keras.layers.Dense(3, activation="relu"), ] ) model = keras.Sequential( [ keras.Input(shape=(3,)), inner_model, keras.layers.Dense(3, activation="sigmoid"), ] ) model.trainable = False # Freeze the outer model assert inner_model.trainable == False # All layers in `model` are now frozen assert ( inner_model.layers[0].trainable == False ) # `trainable` is propagated recursively """ ## The typical transfer-learning workflow This leads us to how a typical transfer learning workflow can be implemented in Keras: 1. Instantiate a base model and load pre-trained weights into it. 2. Freeze all layers in the base model by setting `trainable = False`. 3. Create a new model on top of the output of one (or several) layers from the base model. 4. Train your new model on your new dataset. Note that an alternative, more lightweight workflow could also be: 1. Instantiate a base model and load pre-trained weights into it. 2. Run your new dataset through it and record the output of one (or several) layers from the base model. This is called **feature extraction**. 3. Use that output as input data for a new, smaller model. A key advantage of that second workflow is that you only run the base model once on your data, rather than once per epoch of training. So it's a lot faster & cheaper. An issue with that second workflow, though, is that it doesn't allow you to dynamically modify the input data of your new model during training, which is required when doing data augmentation, for instance. Transfer learning is typically used for tasks when your new dataset has too little data to train a full-scale model from scratch, and in such scenarios data augmentation is very important. 
So in what follows, we will focus on the first workflow.

Here's what the first workflow looks like in Keras:

First, instantiate a base model with pre-trained weights.

```python
base_model = keras.applications.Xception(
    weights='imagenet',  # Load weights pre-trained on ImageNet.
    input_shape=(150, 150, 3),
    include_top=False)  # Do not include the ImageNet classifier at the top.
```

Then, freeze the base model.

```python
base_model.trainable = False
```

Create a new model on top.

```python
inputs = keras.Input(shape=(150, 150, 3))
# We make sure that the base_model is running in inference mode here,
# by passing `training=False`. This is important for fine-tuning, as you will
# learn in a few paragraphs.
x = base_model(inputs, training=False)

# Convert features of shape `base_model.output_shape[1:]` to vectors
x = keras.layers.GlobalAveragePooling2D()(x)

# A Dense classifier with a single unit (binary classification)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
```

Train the model on new data.

```python
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])
model.fit(new_dataset, epochs=20, callbacks=..., validation_data=...)
```
"""

"""
## Fine-tuning

Once your model has converged on the new data, you can try to unfreeze all or part
of the base model and retrain the whole model end-to-end with a very low learning
rate.

This is an optional last step that can potentially give you incremental
improvements. It could also potentially lead to quick overfitting -- keep that in
mind.

It is critical to only do this step *after* the model with frozen layers has been
trained to convergence. If you mix randomly-initialized trainable layers with
trainable layers that hold pre-trained features, the randomly-initialized layers
will cause very large gradient updates during training, which will destroy your
pre-trained features.
It's also critical to use a very low learning rate at this stage, because you are
training a much larger model than in the first round of training, on a dataset that
is typically very small. As a result, you are at risk of overfitting very quickly
if you apply large weight updates. Here, you only want to readapt the pretrained
weights in an incremental way.

This is how to implement fine-tuning of the whole base model:

```python
# Unfreeze the base model
base_model.trainable = True

# It's important to recompile your model after you make any changes
# to the `trainable` attribute of any inner layer, so that your changes
# are taken into account
model.compile(optimizer=keras.optimizers.Adam(1e-5),  # Very low learning rate
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])

# Train end-to-end. Be careful to stop before you overfit!
model.fit(new_dataset, epochs=10, callbacks=..., validation_data=...)
```

**Important note about `compile()` and `trainable`**

Calling `compile()` on a model is meant to "freeze" the behavior of that model.
This implies that the `trainable` attribute values at the time the model is
compiled should be preserved throughout the lifetime of that model, until `compile`
is called again. Hence, if you change any `trainable` value, make sure to call
`compile()` again on your model for your changes to be taken into account.

**Important notes about `BatchNormalization` layer**

Many image models contain `BatchNormalization` layers. That layer is a special case
on every imaginable count. Here are a few things to keep in mind.

- `BatchNormalization` contains 2 non-trainable weights that get updated during
training. These are the variables tracking the mean and variance of the inputs.
- When you set `bn_layer.trainable = False`, the `BatchNormalization` layer will
run in inference mode, and will not update its mean & variance statistics.
This is not the case for other layers in general, as
[weight trainability & inference/training modes are two orthogonal concepts](
https://keras.io/getting_started/faq/#whats-the-difference-between-the-training-argument-in-call-and-the-trainable-attribute).
But the two are tied in the case of the `BatchNormalization` layer.
- When you unfreeze a model that contains `BatchNormalization` layers by setting
`base_model.trainable = True`, all layers of the base model become trainable,
including the `BatchNormalization` layers. It's a good idea to keep
`BatchNormalization` either frozen during fine-tuning, or running in inference
mode: set `layer.trainable = False` on those layers specifically after unfreezing
the outer model, or otherwise call the model with `training=False` to keep it in
inference mode.

You'll see this pattern in action in the end-to-end example at the end of this
guide.
"""

"""
## An end-to-end example: fine-tuning an image classification model on a cats vs.
dogs dataset

To solidify these concepts, let's walk you through a concrete end-to-end transfer
learning & fine-tuning example. We will load the Xception model, pre-trained on
ImageNet, and use it on the Kaggle "cats vs. dogs" classification dataset.
"""

"""
### Getting the data

First, let's fetch the cats vs. dogs dataset using TFDS. If you have your own
dataset, you'll probably want to use the utility
`keras.utils.image_dataset_from_directory` to generate similar labeled dataset
objects from a set of images on disk filed into class-specific folders.

Transfer learning is most useful when working with very small datasets. To keep
our dataset small, we will use 40% of the original training data (25,000 images)
for training, 10% for validation, and 10% for testing.
""" tfds.disable_progress_bar() train_ds, validation_ds, test_ds = tfds.load( "cats_vs_dogs", # Reserve 10% for validation and 10% for test split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"], as_supervised=True, # Include labels ) print(f"Number of training samples: {train_ds.cardinality()}") print(f"Number of validation samples: {validation_ds.cardinality()}") print(f"Number of test samples: {test_ds.cardinality()}") """ These are the first 9 images in the training dataset -- as you can see, they're all different sizes. """ plt.figure(figsize=(10, 10)) for i, (image, label) in enumerate(train_ds.take(9)): ax = plt.subplot(3, 3, i + 1) plt.imshow(image) plt.title(int(label)) plt.axis("off") """ We can also see that label 1 is "dog" and label 0 is "cat". """ """ ### Standardizing the data Our raw images have a variety of sizes. In addition, each pixel consists of 3 integer values between 0 and 255 (RGB level values). This isn't a great fit for feeding a neural network. We need to do 2 things: - Standardize to a fixed image size. We pick 150x150. - Normalize pixel values between -1 and 1. We'll do this using a `Normalization` layer as part of the model itself. In general, it's a good practice to develop models that take raw data as input, as opposed to models that take already-preprocessed data. The reason being that, if your model expects preprocessed data, any time you export your model to use it elsewhere (in a web browser, in a mobile app), you'll need to reimplement the exact same preprocessing pipeline. This gets very tricky very quickly. So we should do the least possible amount of preprocessing before hitting the model. Here, we'll do image resizing in the data pipeline (because a deep neural network can only process contiguous batches of data), and we'll do the input value scaling as part of the model, when we create it. 
Let's resize images to 150x150:
"""

resize_fn = keras.layers.Resizing(150, 150)

train_ds = train_ds.map(lambda x, y: (resize_fn(x), y))
validation_ds = validation_ds.map(lambda x, y: (resize_fn(x), y))
test_ds = test_ds.map(lambda x, y: (resize_fn(x), y))

"""
### Using random data augmentation

When you don't have a large image dataset, it's a good practice to artificially
introduce sample diversity by applying random yet realistic transformations to the
training images, such as random horizontal flipping or small random rotations. This
helps expose the model to different aspects of the training data while slowing down
overfitting.
"""

augmentation_layers = [
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
]


def data_augmentation(x):
    for layer in augmentation_layers:
        x = layer(x)
    return x


train_ds = train_ds.map(lambda x, y: (data_augmentation(x), y))

"""
Let's batch the data and use prefetching to optimize loading speed.
"""

from tensorflow import data as tf_data

batch_size = 64

train_ds = train_ds.batch(batch_size).prefetch(tf_data.AUTOTUNE).cache()
validation_ds = (
    validation_ds.batch(batch_size).prefetch(tf_data.AUTOTUNE).cache()
)
test_ds = test_ds.batch(batch_size).prefetch(tf_data.AUTOTUNE).cache()

"""
Let's visualize what the first image of the first batch looks like after various
random transformations:
"""

for images, labels in train_ds.take(1):
    plt.figure(figsize=(10, 10))
    first_image = images[0]
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        augmented_image = data_augmentation(np.expand_dims(first_image, 0))
        plt.imshow(np.array(augmented_image[0]).astype("int32"))
        plt.title(int(labels[0]))
        plt.axis("off")

"""
## Build a model

Now let's build a model that follows the blueprint we've explained earlier.

Note that:

- We add a `Rescaling` layer to scale input values (initially in the `[0, 255]`
range) to the `[-1, 1]` range.
- We add a `Dropout` layer before the classification layer, for regularization.
- We make sure to pass `training=False` when calling the base model, so that it
runs in inference mode, so that batchnorm statistics don't get updated even after
we unfreeze the base model for fine-tuning.
"""

base_model = keras.applications.Xception(
    weights="imagenet",  # Load weights pre-trained on ImageNet.
    input_shape=(150, 150, 3),
    include_top=False,
)  # Do not include the ImageNet classifier at the top.

# Freeze the base_model
base_model.trainable = False

# Create new model on top
inputs = keras.Input(shape=(150, 150, 3))

# Pre-trained Xception weights require that input be scaled
# from (0, 255) to a range of (-1., +1.), the rescaling layer
# outputs: `(inputs * scale) + offset`
scale_layer = keras.layers.Rescaling(scale=1 / 127.5, offset=-1)
x = scale_layer(inputs)

# The base model contains batchnorm layers. We want to keep them in inference mode
# when we unfreeze the base model for fine-tuning, so we make sure that the
# base_model is running in inference mode here.
x = base_model(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x)  # Regularize with dropout
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

model.summary(show_trainable=True)

"""
## Train the top layer
"""

model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)

epochs = 2
print("Fitting the top layer of the model")
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)

"""
## Do a round of fine-tuning of the entire model

Finally, let's unfreeze the base model and train the entire model end-to-end with
a low learning rate.

Importantly, although the base model becomes trainable, it is still running in
inference mode since we passed `training=False` when calling it when we built the
model. This means that the batch normalization layers inside won't update their
batch statistics.
If they did, they would wreak havoc on the representations learned by the model so
far.
"""

# Unfreeze the base_model. Note that it keeps running in inference mode
# since we passed `training=False` when calling it. This means that
# the batchnorm layers will not update their batch statistics.
# This prevents the batchnorm layers from undoing all the training
# we've done so far.
base_model.trainable = True
model.summary(show_trainable=True)

model.compile(
    optimizer=keras.optimizers.Adam(1e-5),  # Low learning rate
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)

epochs = 1
print("Fitting the end-to-end model")
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)

"""
After fine-tuning, we get a nice improvement here. Let's evaluate the model on the
test dataset:
"""

print("Test dataset evaluation")
model.evaluate(test_ds)
keras-team/keras
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/writing_your_own_callbacks.py
guides/writing_your_own_callbacks.py
""" Title: Writing your own callbacks Authors: Rick Chao, Francois Chollet Date created: 2019/03/20 Last modified: 2023/06/25 Description: Complete guide to writing new Keras callbacks. Accelerator: GPU """ """ ## Introduction A callback is a powerful tool to customize the behavior of a Keras model during training, evaluation, or inference. Examples include `keras.callbacks.TensorBoard` to visualize training progress and results with TensorBoard, or `keras.callbacks.ModelCheckpoint` to periodically save your model during training. In this guide, you will learn what a Keras callback is, what it can do, and how you can build your own. We provide a few demos of simple callback applications to get you started. """ """ ## Setup """ import numpy as np import keras """ ## Keras callbacks overview All callbacks subclass the `keras.callbacks.Callback` class, and override a set of methods called at various stages of training, testing, and predicting. Callbacks are useful to get a view on internal states and statistics of the model during training. You can pass a list of callbacks (as the keyword argument `callbacks`) to the following model methods: - `keras.Model.fit()` - `keras.Model.evaluate()` - `keras.Model.predict()` """ """ ## An overview of callback methods ### Global methods #### `on_(train|test|predict)_begin(self, logs=None)` Called at the beginning of `fit`/`evaluate`/`predict`. #### `on_(train|test|predict)_end(self, logs=None)` Called at the end of `fit`/`evaluate`/`predict`. ### Batch-level methods for training/testing/predicting #### `on_(train|test|predict)_batch_begin(self, batch, logs=None)` Called right before processing a batch during training/testing/predicting. #### `on_(train|test|predict)_batch_end(self, batch, logs=None)` Called at the end of training/testing/predicting a batch. Within this method, `logs` is a dict containing the metrics results. 
### Epoch-level methods (training only) #### `on_epoch_begin(self, epoch, logs=None)` Called at the beginning of an epoch during training. #### `on_epoch_end(self, epoch, logs=None)` Called at the end of an epoch during training. """ """ ## A basic example Let's take a look at a concrete example. To get started, let's define a simple Sequential Keras model: """ # Define the Keras model to add callbacks to def get_model(): model = keras.Sequential() model.add(keras.layers.Dense(1)) model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=0.1), loss="mean_squared_error", metrics=["mean_absolute_error"], ) return model """ Then, load the MNIST data for training and testing from the Keras datasets API: """ # Load example MNIST data and pre-process it (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() x_train = x_train.reshape(-1, 784).astype("float32") / 255.0 x_test = x_test.reshape(-1, 784).astype("float32") / 255.0 # Limit the data to 1000 samples x_train = x_train[:1000] y_train = y_train[:1000] x_test = x_test[:1000] y_test = y_test[:1000] """ Now, define a simple custom callback that logs: - When `fit`/`evaluate`/`predict` starts & ends - When each epoch starts & ends - When each training batch starts & ends - When each evaluation (test) batch starts & ends - When each inference (prediction) batch starts & ends """ class CustomCallback(keras.callbacks.Callback): def on_train_begin(self, logs=None): keys = list(logs.keys()) print("Starting training; got log keys: {}".format(keys)) def on_train_end(self, logs=None): keys = list(logs.keys()) print("Stop training; got log keys: {}".format(keys)) def on_epoch_begin(self, epoch, logs=None): keys = list(logs.keys()) print( "Start epoch {} of training; got log keys: {}".format(epoch, keys) ) def on_epoch_end(self, epoch, logs=None): keys = list(logs.keys()) print("End epoch {} of training; got log keys: {}".format(epoch, keys)) def on_test_begin(self, logs=None): keys = 
list(logs.keys()) print("Start testing; got log keys: {}".format(keys)) def on_test_end(self, logs=None): keys = list(logs.keys()) print("Stop testing; got log keys: {}".format(keys)) def on_predict_begin(self, logs=None): keys = list(logs.keys()) print("Start predicting; got log keys: {}".format(keys)) def on_predict_end(self, logs=None): keys = list(logs.keys()) print("Stop predicting; got log keys: {}".format(keys)) def on_train_batch_begin(self, batch, logs=None): keys = list(logs.keys()) print( "...Training: start of batch {}; got log keys: {}".format( batch, keys ) ) def on_train_batch_end(self, batch, logs=None): keys = list(logs.keys()) print( "...Training: end of batch {}; got log keys: {}".format(batch, keys) ) def on_test_batch_begin(self, batch, logs=None): keys = list(logs.keys()) print( "...Evaluating: start of batch {}; got log keys: {}".format( batch, keys ) ) def on_test_batch_end(self, batch, logs=None): keys = list(logs.keys()) print( "...Evaluating: end of batch {}; got log keys: {}".format( batch, keys ) ) def on_predict_batch_begin(self, batch, logs=None): keys = list(logs.keys()) print( "...Predicting: start of batch {}; got log keys: {}".format( batch, keys ) ) def on_predict_batch_end(self, batch, logs=None): keys = list(logs.keys()) print( "...Predicting: end of batch {}; got log keys: {}".format( batch, keys ) ) """ Let's try it out: """ model = get_model() model.fit( x_train, y_train, batch_size=128, epochs=1, verbose=0, validation_split=0.5, callbacks=[CustomCallback()], ) res = model.evaluate( x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()] ) res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()]) """ ### Usage of `logs` dict The `logs` dict contains the loss value, and all the metrics at the end of a batch or epoch. Example includes the loss and mean absolute error. 
""" class LossAndErrorPrintingCallback(keras.callbacks.Callback): def on_train_batch_end(self, batch, logs=None): print( "Up to batch {}, the average loss is {:7.2f}.".format( batch, logs["loss"] ) ) def on_test_batch_end(self, batch, logs=None): print( "Up to batch {}, the average loss is {:7.2f}.".format( batch, logs["loss"] ) ) def on_epoch_end(self, epoch, logs=None): print( "The average loss for epoch {} is {:7.2f} " "and mean absolute error is {:7.2f}.".format( epoch, logs["loss"], logs["mean_absolute_error"] ) ) model = get_model() model.fit( x_train, y_train, batch_size=128, epochs=2, verbose=0, callbacks=[LossAndErrorPrintingCallback()], ) res = model.evaluate( x_test, y_test, batch_size=128, verbose=0, callbacks=[LossAndErrorPrintingCallback()], ) """ ## Usage of `self.model` attribute In addition to receiving log information when one of their methods is called, callbacks have access to the model associated with the current round of training/evaluation/inference: `self.model`. Here are a few of the things you can do with `self.model` in a callback: - Set `self.model.stop_training = True` to immediately interrupt training. - Mutate hyperparameters of the optimizer (available as `self.model.optimizer`), such as `self.model.optimizer.learning_rate`. - Save the model at period intervals. - Record the output of `model.predict()` on a few test samples at the end of each epoch, to use as a sanity check during training. - Extract visualizations of intermediate features at the end of each epoch, to monitor what the model is learning over time. - etc. Let's see this in action in a couple of examples. """ """ ## Examples of Keras callback applications ### Early stopping at minimum loss This first example shows the creation of a `Callback` that stops training when the minimum of loss has been reached, by setting the attribute `self.model.stop_training` (boolean). 
Optionally, you can provide an argument `patience` to specify how many epochs we should wait before stopping after having reached a local minimum. `keras.callbacks.EarlyStopping` provides a more complete and general implementation. """ class EarlyStoppingAtMinLoss(keras.callbacks.Callback): """Stop training when the loss is at its min, i.e. the loss stops decreasing. Arguments: patience: Number of epochs to wait after min has been hit. After this many epochs without improvement, training stops. """ def __init__(self, patience=0): super().__init__() self.patience = patience # best_weights to store the weights at which the minimum loss occurs. self.best_weights = None def on_train_begin(self, logs=None): # The number of epochs it has waited when loss is no longer minimum. self.wait = 0 # The epoch the training stops at. self.stopped_epoch = 0 # Initialize the best as infinity. self.best = np.inf def on_epoch_end(self, epoch, logs=None): current = logs.get("loss") if np.less(current, self.best): self.best = current self.wait = 0 # Record the best weights if the current result is better (less). self.best_weights = self.model.get_weights() else: self.wait += 1 if self.wait >= self.patience: self.stopped_epoch = epoch self.model.stop_training = True print("Restoring model weights from the end of the best epoch.") self.model.set_weights(self.best_weights) def on_train_end(self, logs=None): if self.stopped_epoch > 0: print(f"Epoch {self.stopped_epoch + 1}: early stopping") model = get_model() model.fit( x_train, y_train, batch_size=64, epochs=30, verbose=0, callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()], ) """ ### Learning rate scheduling In this example, we show how a custom Callback can be used to dynamically change the learning rate of the optimizer during the course of training. See `callbacks.LearningRateScheduler` for a more general implementation. 
""" class CustomLearningRateScheduler(keras.callbacks.Callback): """Learning rate scheduler which sets the learning rate according to schedule. Arguments: schedule: a function that takes an epoch index (integer, indexed from 0) and current learning rate as inputs and returns a new learning rate as output (float). """ def __init__(self, schedule): super().__init__() self.schedule = schedule def on_epoch_begin(self, epoch, logs=None): if not hasattr(self.model.optimizer, "learning_rate"): raise ValueError('Optimizer must have a "learning_rate" attribute.') # Get the current learning rate from model's optimizer. lr = self.model.optimizer.learning_rate # Call schedule function to get the scheduled learning rate. scheduled_lr = self.schedule(epoch, lr) # Set the value back to the optimizer before this epoch starts self.model.optimizer.learning_rate = scheduled_lr print( f"\nEpoch {epoch}: Learning rate is {float(np.array(scheduled_lr))}." ) LR_SCHEDULE = [ # (epoch to start, learning rate) tuples (3, 0.05), (6, 0.01), (9, 0.005), (12, 0.001), ] def lr_schedule(epoch, lr): """Helper function to retrieve the scheduled learning rate based on epoch.""" if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]: return lr for i in range(len(LR_SCHEDULE)): if epoch == LR_SCHEDULE[i][0]: return LR_SCHEDULE[i][1] return lr model = get_model() model.fit( x_train, y_train, batch_size=64, epochs=15, verbose=0, callbacks=[ LossAndErrorPrintingCallback(), CustomLearningRateScheduler(lr_schedule), ], ) """ ### Built-in Keras callbacks Be sure to check out the existing Keras callbacks by reading the [API docs](https://keras.io/api/callbacks/). Applications include logging to CSV, saving the model, visualizing metrics in TensorBoard, and a lot more! """
python
Apache-2.0
c67eddb4ff8b615886893ca996dc216bc923d598
2026-01-04T14:38:29.819962Z
false
keras-team/keras
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/custom_train_step_in_tensorflow.py
guides/custom_train_step_in_tensorflow.py
""" Title: Customizing what happens in `fit()` with TensorFlow Author: [fchollet](https://twitter.com/fchollet) Date created: 2020/04/15 Last modified: 2023/06/27 Description: Overriding the training step of the Model class with TensorFlow. Accelerator: GPU """ """ ## Introduction When you're doing supervised learning, you can use `fit()` and everything works smoothly. When you need to take control of every little detail, you can write your own training loop entirely from scratch. But what if you need a custom training algorithm, but you still want to benefit from the convenient features of `fit()`, such as callbacks, built-in distribution support, or step fusing? A core principle of Keras is **progressive disclosure of complexity**. You should always be able to get into lower-level workflows in a gradual way. You shouldn't fall off a cliff if the high-level functionality doesn't exactly match your use case. You should be able to gain more control over the small details while retaining a commensurate amount of high-level convenience. When you need to customize what `fit()` does, you should **override the training step function of the `Model` class**. This is the function that is called by `fit()` for every batch of data. You will then be able to call `fit()` as usual -- and it will be running your own learning algorithm. Note that this pattern does not prevent you from building models with the Functional API. You can do this whether you're building `Sequential` models, Functional API models, or subclassed models. Let's see how that works. """ """ ## Setup """ import os # This guide can only be run with the TF backend. os.environ["KERAS_BACKEND"] = "tensorflow" import tensorflow as tf import keras from keras import layers import numpy as np """ ## A first simple example Let's start from a simple example: - We create a new class that subclasses `keras.Model`. - We just override the method `train_step(self, data)`. 
- We return a dictionary mapping metric names (including the loss) to their current value. The input argument `data` is what gets passed to fit as training data: - If you pass NumPy arrays, by calling `fit(x, y, ...)`, then `data` will be the tuple `(x, y)` - If you pass a `tf.data.Dataset`, by calling `fit(dataset, ...)`, then `data` will be what gets yielded by `dataset` at each batch. In the body of the `train_step()` method, we implement a regular training update, similar to what you are already familiar with. Importantly, **we compute the loss via `self.compute_loss()`**, which wraps the loss(es) function(s) that were passed to `compile()`. Similarly, we call `metric.update_state(y, y_pred)` on metrics from `self.metrics`, to update the state of the metrics that were passed in `compile()`, and we query results from `self.metrics` at the end to retrieve their current value. """ class CustomModel(keras.Model): def train_step(self, data): # Unpack the data. Its structure depends on your model and # on what you pass to `fit()`. 
x, y = data with tf.GradientTape() as tape: y_pred = self(x, training=True) # Forward pass # Compute the loss value # (the loss function is configured in `compile()`) loss = self.compute_loss(y=y, y_pred=y_pred) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply(gradients, trainable_vars) # Update metrics (includes the metric that tracks the loss) for metric in self.metrics: if metric.name == "loss": metric.update_state(loss) else: metric.update_state(y, y_pred) # Return a dict mapping metric names to current value return {m.name: m.result() for m in self.metrics} """ Let's try this out: """ # Construct and compile an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(optimizer="adam", loss="mse", metrics=["mae"]) # Just use `fit` as usual x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.fit(x, y, epochs=3) """ ## Going lower-level Naturally, you could just skip passing a loss function in `compile()`, and instead do everything *manually* in `train_step`. Likewise for metrics. Here's a lower-level example, that only uses `compile()` to configure the optimizer: - We start by creating `Metric` instances to track our loss and a MAE score (in `__init__()`). - We implement a custom `train_step()` that updates the state of these metrics (by calling `update_state()` on them), then query them (via `result()`) to return their current average value, to be displayed by the progress bar and to be passed to any callback. - Note that we would need to call `reset_state()` on our metrics between each epoch! Otherwise calling `result()` would return an average since the start of training, whereas we usually work with per-epoch averages. Thankfully, the framework can do that for us: just list any metric you want to reset in the `metrics` property of the model. 
The model will call `reset_state()` on any object listed here at the beginning of each `fit()` epoch or at the beginning of a call to `evaluate()`. """ class CustomModel(keras.Model): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.loss_tracker = keras.metrics.Mean(name="loss") self.mae_metric = keras.metrics.MeanAbsoluteError(name="mae") self.loss_fn = keras.losses.MeanSquaredError() def train_step(self, data): x, y = data with tf.GradientTape() as tape: y_pred = self(x, training=True) # Forward pass # Compute our own loss loss = self.loss_fn(y, y_pred) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply(gradients, trainable_vars) # Compute our own metrics self.loss_tracker.update_state(loss) self.mae_metric.update_state(y, y_pred) return { "loss": self.loss_tracker.result(), "mae": self.mae_metric.result(), } @property def metrics(self): # We list our `Metric` objects here so that `reset_state()` can be # called automatically at the start of each epoch # or at the start of `evaluate()`. return [self.loss_tracker, self.mae_metric] # Construct an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) # We don't pass a loss or metrics here. model.compile(optimizer="adam") # Just use `fit` as usual -- you can use callbacks, etc. x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.fit(x, y, epochs=5) """ ## Supporting `sample_weight` & `class_weight` You may have noticed that our first basic example didn't make any mention of sample weighting. 
If you want to support the `fit()` arguments `sample_weight` and `class_weight`, you'd simply do the following: - Unpack `sample_weight` from the `data` argument - Pass it to `compute_loss` & `update_state` (of course, you could also just apply it manually if you don't rely on `compile()` for losses & metrics) - That's it. """ class CustomModel(keras.Model): def train_step(self, data): # Unpack the data. Its structure depends on your model and # on what you pass to `fit()`. if len(data) == 3: x, y, sample_weight = data else: sample_weight = None x, y = data with tf.GradientTape() as tape: y_pred = self(x, training=True) # Forward pass # Compute the loss value. # The loss function is configured in `compile()`. loss = self.compute_loss( y=y, y_pred=y_pred, sample_weight=sample_weight, ) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply(gradients, trainable_vars) # Update the metrics. # Metrics are configured in `compile()`. for metric in self.metrics: if metric.name == "loss": metric.update_state(loss) else: metric.update_state(y, y_pred, sample_weight=sample_weight) # Return a dict mapping metric names to current value. # Note that it will include the loss (tracked in self.metrics). return {m.name: m.result() for m in self.metrics} # Construct and compile an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(optimizer="adam", loss="mse", metrics=["mae"]) # You can now use sample_weight argument x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) sw = np.random.random((1000, 1)) model.fit(x, y, sample_weight=sw, epochs=3) """ ## Providing your own evaluation step What if you want to do the same for calls to `model.evaluate()`? Then you would override `test_step` in exactly the same way. 
Here's what it looks like: """ class CustomModel(keras.Model): def test_step(self, data): # Unpack the data x, y = data # Compute predictions y_pred = self(x, training=False) # Updates the metrics tracking the loss loss = self.compute_loss(y=y, y_pred=y_pred) # Update the metrics. for metric in self.metrics: if metric.name == "loss": metric.update_state(loss) else: metric.update_state(y, y_pred) # Return a dict mapping metric names to current value. # Note that it will include the loss (tracked in self.metrics). return {m.name: m.result() for m in self.metrics} # Construct an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(loss="mse", metrics=["mae"]) # Evaluate with our custom test_step x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.evaluate(x, y) """ ## Wrapping up: an end-to-end GAN example Let's walk through an end-to-end example that leverages everything you just learned. Let's consider: - A generator network meant to generate 28x28x1 images. - A discriminator network meant to classify 28x28x1 images into two classes ("fake" and "real"). - One optimizer for each. - A loss function to train the discriminator. 
""" # Create the discriminator discriminator = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(negative_slope=0.2), layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(negative_slope=0.2), layers.GlobalMaxPooling2D(), layers.Dense(1), ], name="discriminator", ) # Create the generator latent_dim = 128 generator = keras.Sequential( [ keras.Input(shape=(latent_dim,)), # We want to generate 128 coefficients to reshape into a 7x7x128 map layers.Dense(7 * 7 * 128), layers.LeakyReLU(negative_slope=0.2), layers.Reshape((7, 7, 128)), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(negative_slope=0.2), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(negative_slope=0.2), layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"), ], name="generator", ) """ Here's a feature-complete GAN class, overriding `compile()` to use its own signature, and implementing the entire GAN algorithm in 17 lines in `train_step`: """ class GAN(keras.Model): def __init__(self, discriminator, generator, latent_dim): super().__init__() self.discriminator = discriminator self.generator = generator self.latent_dim = latent_dim self.d_loss_tracker = keras.metrics.Mean(name="d_loss") self.g_loss_tracker = keras.metrics.Mean(name="g_loss") self.seed_generator = keras.random.SeedGenerator(1337) @property def metrics(self): return [self.d_loss_tracker, self.g_loss_tracker] def compile(self, d_optimizer, g_optimizer, loss_fn): super().compile() self.d_optimizer = d_optimizer self.g_optimizer = g_optimizer self.loss_fn = loss_fn def train_step(self, real_images): if isinstance(real_images, tuple): real_images = real_images[0] # Sample random points in the latent space batch_size = tf.shape(real_images)[0] random_latent_vectors = keras.random.normal( shape=(batch_size, self.latent_dim), seed=self.seed_generator ) # Decode them 
to fake images generated_images = self.generator(random_latent_vectors) # Combine them with real images combined_images = tf.concat([generated_images, real_images], axis=0) # Assemble labels discriminating real from fake images labels = tf.concat( [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0 ) # Add random noise to the labels - important trick! labels += 0.05 * keras.random.uniform( tf.shape(labels), seed=self.seed_generator ) # Train the discriminator with tf.GradientTape() as tape: predictions = self.discriminator(combined_images) d_loss = self.loss_fn(labels, predictions) grads = tape.gradient(d_loss, self.discriminator.trainable_weights) self.d_optimizer.apply(grads, self.discriminator.trainable_weights) # Sample random points in the latent space random_latent_vectors = keras.random.normal( shape=(batch_size, self.latent_dim), seed=self.seed_generator ) # Assemble labels that say "all real images" misleading_labels = tf.zeros((batch_size, 1)) # Train the generator (note that we should *not* update the weights # of the discriminator)! with tf.GradientTape() as tape: predictions = self.discriminator( self.generator(random_latent_vectors) ) g_loss = self.loss_fn(misleading_labels, predictions) grads = tape.gradient(g_loss, self.generator.trainable_weights) self.g_optimizer.apply(grads, self.generator.trainable_weights) # Update metrics and return their value. self.d_loss_tracker.update_state(d_loss) self.g_loss_tracker.update_state(g_loss) return { "d_loss": self.d_loss_tracker.result(), "g_loss": self.g_loss_tracker.result(), } """ Let's test-drive it: """ # Prepare the dataset. We use both the training & test MNIST digits. 
batch_size = 64 (x_train, _), (x_test, _) = keras.datasets.mnist.load_data() all_digits = np.concatenate([x_train, x_test]) all_digits = all_digits.astype("float32") / 255.0 all_digits = np.reshape(all_digits, (-1, 28, 28, 1)) dataset = tf.data.Dataset.from_tensor_slices(all_digits) dataset = dataset.shuffle(buffer_size=1024).batch(batch_size) gan = GAN( discriminator=discriminator, generator=generator, latent_dim=latent_dim ) gan.compile( d_optimizer=keras.optimizers.Adam(learning_rate=0.0003), g_optimizer=keras.optimizers.Adam(learning_rate=0.0003), loss_fn=keras.losses.BinaryCrossentropy(from_logits=True), ) # To limit the execution time, we only train on 100 batches. You can train on # the entire dataset. You will need about 20 epochs to get nice results. gan.fit(dataset.take(100), epochs=1) """ The ideas behind deep learning are simple, so why should their implementation be painful? """
python
Apache-2.0
c67eddb4ff8b615886893ca996dc216bc923d598
2026-01-04T14:38:29.819962Z
false
keras-team/keras
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/training_with_built_in_methods.py
guides/training_with_built_in_methods.py
""" Title: Training & evaluation with the built-in methods Author: [fchollet](https://twitter.com/fchollet) Date created: 2019/03/01 Last modified: 2023/03/25 Description: Complete guide to training & evaluation with `fit()` and `evaluate()`. Accelerator: GPU """ """ ## Setup """ # We import torch & TF so as to use torch Dataloaders & tf.data.Datasets. import torch import tensorflow as tf import os import numpy as np import keras from keras import layers from keras import ops """ ## Introduction This guide covers training, evaluation, and prediction (inference) models when using built-in APIs for training & validation (such as `Model.fit()`, `Model.evaluate()` and `Model.predict()`). If you are interested in leveraging `fit()` while specifying your own training step function, see the [Customizing what happens in `fit()` guide](/guides/customizing_what_happens_in_fit/). If you are interested in writing your own training & evaluation loops from scratch, see the guide ["writing a training loop from scratch"](/guides/writing_a_training_loop_from_scratch/). In general, whether you are using built-in loops or writing your own, model training & evaluation works strictly in the same way across every kind of Keras model -- Sequential models, models built with the Functional API, and models written from scratch via model subclassing. This guide doesn't cover distributed training, which is covered in our [guide to multi-GPU & distributed training](https://keras.io/guides/distributed_training/). """ """ ## API overview: a first end-to-end example When passing data to the built-in training loops of a model, you should either use: - NumPy arrays (if your data is small and fits in memory) - Subclasses of `keras.utils.PyDataset` - `tf.data.Dataset` objects - PyTorch `DataLoader` instances In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, in order to demonstrate how to use optimizers, losses, and metrics. 
Afterwards, we'll take a close look at each of the other options. Let's consider the following model (here, we build it with the Functional API, but it could be a Sequential model or a subclassed model as well): """ inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, activation="softmax", name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) """ Here's what the typical end-to-end workflow looks like, consisting of: - Training - Validation on a holdout set generated from the original training data - Evaluation on the test data We'll use MNIST data for this example. """ (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Preprocess the data (these are NumPy arrays) x_train = x_train.reshape(60000, 784).astype("float32") / 255 x_test = x_test.reshape(10000, 784).astype("float32") / 255 y_train = y_train.astype("float32") y_test = y_test.astype("float32") # Reserve 10,000 samples for validation x_val = x_train[-10000:] y_val = y_train[-10000:] x_train = x_train[:-10000] y_train = y_train[:-10000] """ We specify the training configuration (optimizer, loss, metrics): """ model.compile( optimizer=keras.optimizers.RMSprop(),  # Optimizer # Loss function to minimize loss=keras.losses.SparseCategoricalCrossentropy(), # List of metrics to monitor metrics=[keras.metrics.SparseCategoricalAccuracy()], ) """ We call `fit()`, which will train the model by slicing the data into "batches" of size `batch_size`, and repeatedly iterating over the entire dataset for a given number of `epochs`. 
""" print("Fit model on training data") history = model.fit( x_train, y_train, batch_size=64, epochs=2, # We pass some validation for # monitoring validation loss and metrics # at the end of each epoch validation_data=(x_val, y_val), ) """ The returned `history` object holds a record of the loss values and metric values during training: """ history.history """ We evaluate the model on the test data via `evaluate()`: """ # Evaluate the model on the test data using `evaluate` print("Evaluate on test data") results = model.evaluate(x_test, y_test, batch_size=128) print("test loss, test acc:", results) # Generate predictions (probabilities -- the output of the last layer) # on new data using `predict` print("Generate predictions for 3 samples") predictions = model.predict(x_test[:3]) print("predictions shape:", predictions.shape) """ Now, let's review each piece of this workflow in detail. """ """ ## The `compile()` method: specifying a loss, metrics, and an optimizer To train a model with `fit()`, you need to specify a loss function, an optimizer, and optionally, some metrics to monitor. You pass these to the model as arguments to the `compile()` method: """ model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) """ The `metrics` argument should be a list -- your model can have any number of metrics. If your model has multiple outputs, you can specify different losses and metrics for each output, and you can modulate the contribution of each output to the total loss of the model. You will find more details about this in the **Passing data to multi-input, multi-output models** section. 
Note that if you're satisfied with the default settings, in many cases the optimizer, loss, and metrics can be specified via string identifiers as a shortcut: """ model.compile( optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["sparse_categorical_accuracy"], ) """ For later reuse, let's put our model definition and compile step in functions; we will call them several times across different examples in this guide. """ def get_uncompiled_model(): inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, activation="softmax", name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) return model def get_compiled_model(): model = get_uncompiled_model() model.compile( optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["sparse_categorical_accuracy"], ) return model """ ### Many built-in optimizers, losses, and metrics are available In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely to be already part of the Keras API: Optimizers: - `SGD()` (with or without momentum) - `RMSprop()` - `Adam()` - etc. Losses: - `MeanSquaredError()` - `KLDivergence()` - `CosineSimilarity()` - etc. Metrics: - `AUC()` - `Precision()` - `Recall()` - etc. """ """ ### Custom losses If you need to create a custom loss, Keras provides three ways to do so. The first method involves creating a function that accepts inputs `y_true` and `y_pred`. 
The following example shows a loss function that computes the mean squared error between the real data and the predictions: """ def custom_mean_squared_error(y_true, y_pred): return ops.mean(ops.square(y_true - y_pred), axis=-1) model = get_uncompiled_model() model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error) # We need to one-hot encode the labels to use MSE y_train_one_hot = ops.one_hot(y_train, num_classes=10) model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1) """ If you need a loss function that takes in parameters beside `y_true` and `y_pred`, you can subclass the `keras.losses.Loss` class and implement the following two methods: - `__init__(self)`: accept parameters to pass during the call of your loss function - `call(self, y_true, y_pred)`: use the targets (y_true) and the model predictions (y_pred) to compute the model's loss Let's say you want to use mean squared error, but with an added term that will de-incentivize prediction values far from 0.5 (we assume that the categorical targets are one-hot encoded and take values between 0 and 1). This creates an incentive for the model not to be too confident, which may help reduce overfitting (we won't know if it works until we try!). 
Here's how you would do it: """ class CustomMSE(keras.losses.Loss): def __init__(self, regularization_factor=0.1, name="custom_mse"): super().__init__(name=name) self.regularization_factor = regularization_factor def call(self, y_true, y_pred): mse = ops.mean(ops.square(y_true - y_pred), axis=-1) reg = ops.mean(ops.square(0.5 - y_pred), axis=-1) return mse + reg * self.regularization_factor model = get_uncompiled_model() model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE()) y_train_one_hot = ops.one_hot(y_train, num_classes=10) model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1) """ ### Custom metrics If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the `keras.metrics.Metric` class. You will need to implement 4 methods: - `__init__(self)`, in which you will create state variables for your metric. - `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targets y_true and the model predictions y_pred to update the state variables. - `result(self)`, which uses the state variables to compute the final results. - `reset_state(self)`, which reinitializes the state of the metric. State update and results computation are kept separate (in `update_state()` and `result()`, respectively) because in some cases, the results computation might be very expensive and would only be done periodically. 
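Schematically, the split between cheap per-batch state updates and a deferred final computation looks like this (a plain-Python sketch of the pattern, not the Keras `Metric` API):

```python
class StreamingMean:
    """Plain-Python sketch of the `update_state()` / `result()` split."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update_state(self, values):
        # Cheap accumulation, called once per batch.
        self.total += sum(values)
        self.count += len(values)

    def result(self):
        # The (potentially expensive) final computation, called only
        # when a result is actually needed, e.g. at the end of an epoch.
        return self.total / max(self.count, 1)

    def reset_state(self):
        self.total, self.count = 0.0, 0


m = StreamingMean()
m.update_state([1.0, 2.0, 3.0])
m.update_state([4.0])
print(m.result())  # 2.5
```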
Here's a simple example showing how to implement a `CategoricalTruePositives` metric that counts how many samples were correctly classified as belonging to a given class: """ class CategoricalTruePositives(keras.metrics.Metric): def __init__(self, name="categorical_true_positives", **kwargs): super().__init__(name=name, **kwargs) self.true_positives = self.add_variable( shape=(), name="ctp", initializer="zeros" ) def update_state(self, y_true, y_pred, sample_weight=None): y_pred = ops.reshape(ops.argmax(y_pred, axis=1), (-1, 1)) values = ops.cast(y_true, "int32") == ops.cast(y_pred, "int32") values = ops.cast(values, "float32") if sample_weight is not None: sample_weight = ops.cast(sample_weight, "float32") values = ops.multiply(values, sample_weight) self.true_positives.assign_add(ops.sum(values)) def result(self): return self.true_positives def reset_state(self): # The state of the metric will be reset at the start of each epoch. self.true_positives.assign(0) model = get_uncompiled_model() model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=[CategoricalTruePositives()], ) model.fit(x_train, y_train, batch_size=64, epochs=3) """ ### Handling losses and metrics that don't fit the standard signature The overwhelming majority of losses and metrics can be computed from `y_true` and `y_pred`, where `y_pred` is an output of your model -- but not all of them. For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output. In such cases, you can call `self.add_loss(loss_value)` from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to `compile()`). 
Here's a simple example that adds activity regularization (note that activity regularization is built-in in all Keras layers -- this layer is just for the sake of providing a concrete example): """ class ActivityRegularizationLayer(layers.Layer): def call(self, inputs): self.add_loss(ops.sum(inputs) * 0.1) return inputs # Pass-through layer. inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) # Insert activity regularization as a layer x = ActivityRegularizationLayer()(x) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), ) # The displayed loss will be much higher than before # due to the regularization component. model.fit(x_train, y_train, batch_size=64, epochs=1) """ Note that when you pass losses via `add_loss()`, it becomes possible to call `compile()` without a loss function, since the model already has a loss to minimize. Consider the following `LogisticEndpoint` layer: it takes as inputs targets & logits, and it tracks a crossentropy loss via `add_loss()`. """ class LogisticEndpoint(keras.layers.Layer): def __init__(self, name=None): super().__init__(name=name) self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) def call(self, targets, logits, sample_weights=None): # Compute the training-time loss value and add it # to the layer using `self.add_loss()`. loss = self.loss_fn(targets, logits, sample_weights) self.add_loss(loss) # Return the inference-time prediction tensor (for `.predict()`). 
return ops.softmax(logits) """ You can use it in a model with two inputs (input data & targets), compiled without a `loss` argument, like this: """ inputs = keras.Input(shape=(3,), name="inputs") targets = keras.Input(shape=(10,), name="targets") logits = keras.layers.Dense(10)(inputs) predictions = LogisticEndpoint(name="predictions")(targets, logits) model = keras.Model(inputs=[inputs, targets], outputs=predictions) model.compile(optimizer="adam") # No loss argument! data = { "inputs": np.random.random((3, 3)), "targets": np.random.random((3, 10)), } model.fit(data) """ For more information about training multi-input models, see the section **Passing data to multi-input, multi-output models**. """ """ ### Automatically setting apart a validation holdout set In the first end-to-end example you saw, we used the `validation_data` argument to pass a tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation loss and validation metrics at the end of each epoch. Here's another option: the argument `validation_split` allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% of the data for validation", and `validation_split=0.6` means "use 60% of the data for validation". The way the validation is computed is by taking the last x% samples of the arrays received by the `fit()` call, before any shuffling. Note that you can only use `validation_split` when training with NumPy data. 
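For instance, the slicing behind `validation_split=0.2` can be sketched with plain NumPy (illustrative only; `fit()` handles this internally, and its exact rounding may differ):

```python
import numpy as np

x = np.arange(10)  # a toy "training set" of 10 samples
val_fraction = 0.2

# `fit()` reserves the *last* 20% of samples, before any shuffling:
split = int(len(x) * (1 - val_fraction))
x_train_part, x_val_part = x[:split], x[split:]
print(x_val_part.tolist())  # [8, 9]
```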
""" model = get_compiled_model() model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1) """ ## Training & evaluation using `tf.data` Datasets In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers, and you've seen how to use the `validation_data` and `validation_split` arguments in `fit()`, when your data is passed as NumPy arrays. Another option is to use an iterator-like, such as a `tf.data.Dataset`, a PyTorch `DataLoader`, or a Keras `PyDataset`. Let's take look at the former. The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessing data in a way that's fast and scalable. For a complete guide about creating `Datasets`, see the [tf.data documentation](https://www.tensorflow.org/guide/data). **You can use `tf.data` to train your Keras models regardless of the backend you're using -- whether it's JAX, PyTorch, or TensorFlow.** You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and `predict()`: """ model = get_compiled_model() # First, let's create a training Dataset instance. # For the sake of our example, we'll use the same MNIST data as before. train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) # Shuffle and slice the dataset. train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Now we get a test dataset. test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)) test_dataset = test_dataset.batch(64) # Since the dataset already takes care of batching, # we don't pass a `batch_size` argument. model.fit(train_dataset, epochs=3) # You can also evaluate or predict on a dataset. print("Evaluate") result = model.evaluate(test_dataset) dict(zip(model.metrics_names, result)) """ Note that the Dataset is reset at the end of each epoch, so it can be reused of the next epoch. 
If you want to run training only on a specific number of batches from this
Dataset, you can pass the `steps_per_epoch` argument, which specifies how many
training steps the model should run using this Dataset before moving on to the
next epoch.
"""

model = get_compiled_model()

# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)

# Only use 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)

"""
You can also pass a `Dataset` instance as the `validation_data` argument in
`fit()`:
"""

model = get_compiled_model()

# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)

# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)

model.fit(train_dataset, epochs=1, validation_data=val_dataset)

"""
At the end of each epoch, the model will iterate over the validation dataset
and compute the validation loss and validation metrics.
If you want to run validation only on a specific number of batches from this dataset, you can pass the `validation_steps` argument, which specifies how many validation steps the model should run with the validation dataset before interrupting validation and moving on to the next epoch: """ model = get_compiled_model() # Prepare the training dataset train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Prepare the validation dataset val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_dataset = val_dataset.batch(64) model.fit( train_dataset, epochs=1, # Only run validation using the first 10 batches of the dataset # using the `validation_steps` argument validation_data=val_dataset, validation_steps=10, ) """ Note that the validation dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch). The argument `validation_split` (generating a holdout set from the training data) is not supported when training from `Dataset` objects, since this feature requires the ability to index the samples of the datasets, which is not possible in general with the `Dataset` API. """ """ ## Training & evaluation using `PyDataset` instances `keras.utils.PyDataset` is a utility that you can subclass to obtain a Python generator with two important properties: - It works well with multiprocessing. - It can be shuffled (e.g. when passing `shuffle=True` in `fit()`). A `PyDataset` must implement two methods: - `__getitem__` - `__len__` The method `__getitem__` should return a complete batch. If you want to modify your dataset between epochs, you may implement `on_epoch_end`. You may also implement `on_epoch_begin` to be called at the start of each epoch. 
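For instance, `on_epoch_end()` is a natural place to reshuffle the sample order between epochs. Here's a plain-Python sketch of the idea (not a complete `PyDataset` subclass):

```python
import numpy as np


class ShuffledIndexOrder:
    # Illustrative sketch: keep an index order over the samples,
    # and reshuffle it when the epoch ends.
    def __init__(self, num_samples, seed=0):
        self.rng = np.random.default_rng(seed)
        self.order = np.arange(num_samples)

    def on_epoch_end(self):
        self.rng.shuffle(self.order)


ds = ShuffledIndexOrder(num_samples=5)
ds.on_epoch_end()
# The same samples survive each epoch, just in a new order:
print(sorted(ds.order.tolist()))  # [0, 1, 2, 3, 4]
```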
Here's a quick example: """ class ExamplePyDataset(keras.utils.PyDataset): def __init__(self, x, y, batch_size, **kwargs): super().__init__(**kwargs) self.x = x self.y = y self.batch_size = batch_size def __len__(self): return int(np.ceil(len(self.x) / float(self.batch_size))) def __getitem__(self, idx): batch_x = self.x[idx * self.batch_size : (idx + 1) * self.batch_size] batch_y = self.y[idx * self.batch_size : (idx + 1) * self.batch_size] return batch_x, batch_y train_py_dataset = ExamplePyDataset(x_train, y_train, batch_size=32) val_py_dataset = ExamplePyDataset(x_val, y_val, batch_size=32) """ To fit the model, pass the dataset instead as the `x` argument (no need for a `y` argument since the dataset includes the targets), and pass the validation dataset as the `validation_data` argument. And no need for the `validation_batch_size` argument, since the dataset is already batched! """ model = get_compiled_model() model.fit( train_py_dataset, batch_size=64, validation_data=val_py_dataset, epochs=1 ) """ Evaluating the model is just as easy: """ model.evaluate(val_py_dataset) """ Importantly, `PyDataset` objects support three common constructor arguments that handle the parallel processing configuration: - `workers`: Number of workers to use in multithreading or multiprocessing. Typically, you'd set it to the number of cores on your CPU. - `use_multiprocessing`: Whether to use Python multiprocessing for parallelism. Setting this to `True` means that your dataset will be replicated in multiple forked processes. This is necessary to gain compute-level (rather than I/O level) benefits from parallelism. However it can only be set to `True` if your dataset can be safely pickled. - `max_queue_size`: Maximum number of batches to keep in the queue when iterating over the dataset in a multithreaded or multiprocessed setting. You can reduce this value to reduce the CPU memory consumption of your dataset. It defaults to 10. 
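The effect of `max_queue_size` can be pictured as a bounded producer/consumer queue: the producer blocks once the buffer is full, which is what caps memory use. Here's an illustrative standard-library sketch (not the actual Keras implementation):

```python
import queue
import threading

q = queue.Queue(maxsize=3)  # plays the role of max_queue_size=3


def producer():
    for batch_id in range(6):
        # Blocks whenever the buffer already holds 3 batches,
        # which is what bounds CPU memory consumption.
        q.put(batch_id)


t = threading.Thread(target=producer)
t.start()
consumed = [q.get() for _ in range(6)]
t.join()
print(consumed)  # [0, 1, 2, 3, 4, 5]
```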
By default, multiprocessing is disabled (`use_multiprocessing=False`) and only
one thread is used. You should make sure to only turn on `use_multiprocessing`
if your code is running inside a Python `if __name__ == "__main__":` block in
order to avoid issues.

Here's a 4-thread, non-multiprocessed example:
"""

train_py_dataset = ExamplePyDataset(x_train, y_train, batch_size=32, workers=4)
val_py_dataset = ExamplePyDataset(x_val, y_val, batch_size=32, workers=4)

model = get_compiled_model()
model.fit(
    train_py_dataset, batch_size=64, validation_data=val_py_dataset, epochs=1
)

"""
## Training & evaluation using PyTorch `DataLoader` objects

All built-in training and evaluation APIs are also compatible with
`torch.utils.data.Dataset` and `torch.utils.data.DataLoader` objects --
regardless of whether you're using the PyTorch backend, or the JAX or
TensorFlow backends. Let's take a look at a simple example.

Unlike `PyDataset`, which is batch-centric, PyTorch `Dataset` objects are
sample-centric: the `__len__` method returns the number of samples, and the
`__getitem__` method returns a specific sample.
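The difference in indexing granularity can be sketched like this (hypothetical toy arrays):

```python
import numpy as np

data = np.arange(12)  # a toy dataset of 12 samples
batch_size = 4

# Batch-centric (`PyDataset`-style): __getitem__(i) returns batch i.
batch_1 = data[1 * batch_size : 2 * batch_size]

# Sample-centric (torch `Dataset`-style): __getitem__(i) returns sample i;
# the `DataLoader` is what assembles samples into batches.
sample_1 = data[1]

print(batch_1.tolist(), sample_1)  # [4, 5, 6, 7] 1
```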
""" class ExampleTorchDataset(torch.utils.data.Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.x) def __getitem__(self, idx): return self.x[idx], self.y[idx] train_torch_dataset = ExampleTorchDataset(x_train, y_train) val_torch_dataset = ExampleTorchDataset(x_val, y_val) """ To use a PyTorch Dataset, you need to wrap it into a `Dataloader` which takes care of batching and shuffling: """ train_dataloader = torch.utils.data.DataLoader( train_torch_dataset, batch_size=32, shuffle=True ) val_dataloader = torch.utils.data.DataLoader( val_torch_dataset, batch_size=32, shuffle=True ) """ Now you can use them in the Keras API just like any other iterator: """ model = get_compiled_model() model.fit( train_dataloader, batch_size=64, validation_data=val_dataloader, epochs=1 ) model.evaluate(val_dataloader) """ ## Using sample weighting and class weighting With the default settings the weight of a sample is decided by its frequency in the dataset. There are two methods to weight the data, independent of sample frequency: * Class weights * Sample weights """ """ ### Class weights This is set by passing a dictionary to the `class_weight` argument to `Model.fit()`. This dictionary maps class indices to the weight that should be used for samples belonging to this class. This can be used to balance classes without resampling, or to train a model that gives more importance to a particular class. For instance, if class "0" is half as represented as class "1" in your data, you could use `Model.fit(..., class_weight={0: 1., 1: 0.5})`. """ """ Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class #5 (which is the digit "5" in the MNIST dataset). 
""" class_weight = { 0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0, # Set weight "2" for class "5", # making this class 2x more important 5: 2.0, 6: 1.0, 7: 1.0, 8: 1.0, 9: 1.0, } print("Fit with class weight") model = get_compiled_model() model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1) """ ### Sample weights For fine grained control, or if you are not building a classifier, you can use "sample weights". - When training from NumPy data: Pass the `sample_weight` argument to `Model.fit()`. - When training from `tf.data` or any other sort of iterator: Yield `(input_batch, label_batch, sample_weight_batch)` tuples. A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes). When the weights used are ones and zeros, the array can be used as a *mask* for the loss function (entirely discarding the contribution of certain samples to the total loss). """ sample_weight = np.ones(shape=(len(y_train),)) sample_weight[y_train == 5] = 2.0 print("Fit with sample weight") model = get_compiled_model() model.fit( x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1 ) """ Here's a matching `Dataset` example: """ sample_weight = np.ones(shape=(len(y_train),)) sample_weight[y_train == 5] = 2.0 # Create a Dataset that includes sample weights # (3rd element in the return tuple). train_dataset = tf.data.Dataset.from_tensor_slices( (x_train, y_train, sample_weight) ) # Shuffle and slice the dataset. train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) model = get_compiled_model() model.fit(train_dataset, epochs=1) """ ## Passing data to multi-input, multi-output models In the previous examples, we were considering a model with a single input (a tensor of shape `(764,)`) and a single output (a prediction tensor of shape `(10,)`). 
But what about models that have multiple inputs or outputs? Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a time series input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over five classes (of shape `(5,)`). """ image_input = keras.Input(shape=(32, 32, 3), name="img_input") timeseries_input = keras.Input(shape=(None, 10), name="ts_input") x1 = layers.Conv2D(3, 3)(image_input) x1 = layers.GlobalMaxPooling2D()(x1) x2 = layers.Conv1D(3, 3)(timeseries_input) x2 = layers.GlobalMaxPooling1D()(x2) x = layers.concatenate([x1, x2]) score_output = layers.Dense(1, name="score_output")(x) class_output = layers.Dense(5, name="class_output")(x) model = keras.Model( inputs=[image_input, timeseries_input], outputs=[score_output, class_output] ) """ Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes). """ keras.utils.plot_model( model, "multi_input_and_output_model.png", show_shapes=True ) """ At compilation time, we can specify different losses to different outputs, by passing the loss functions as a list: """ model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[ keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy(), ], ) """ If we only passed a single loss function to the model, the same loss function would be applied to every output (which is not appropriate here). 
Likewise for metrics:
"""

model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[
        keras.losses.MeanSquaredError(),
        keras.losses.CategoricalCrossentropy(),
    ],
    metrics=[
        [
            keras.metrics.MeanAbsolutePercentageError(),
            keras.metrics.MeanAbsoluteError(),
        ],
        [keras.metrics.CategoricalAccuracy()],
    ],
)

"""
Since we gave names to our output layers, we could also specify per-output
losses and metrics via a dict:
"""

model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={
        "score_output": keras.losses.MeanSquaredError(),
        "class_output": keras.losses.CategoricalCrossentropy(),
    },
    metrics={
        "score_output": [
            keras.metrics.MeanAbsolutePercentageError(),
            keras.metrics.MeanAbsoluteError(),
        ],
        "class_output": [keras.metrics.CategoricalAccuracy()],
    },
)

"""
We recommend the use of explicit names and dicts if you have more than 2
outputs.

It's possible to give different weights to different output-specific losses
(for instance, one might wish to privilege the "score" loss in our example, by
giving it 2x the importance of the class loss), using the `loss_weights`
argument:
"""

model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={
        "score_output": keras.losses.MeanSquaredError(),
        "class_output": keras.losses.CategoricalCrossentropy(),
    },
    metrics={
        "score_output": [
            keras.metrics.MeanAbsolutePercentageError(),
            keras.metrics.MeanAbsoluteError(),
        ],
        "class_output": [keras.metrics.CategoricalAccuracy()],
    },
    loss_weights={"score_output": 2.0, "class_output": 1.0},
)

"""
You could also choose not to compute a loss for certain outputs, if these
outputs are meant for prediction but not for training:
"""

# List loss version
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[None, keras.losses.CategoricalCrossentropy()],
)

# Or dict loss version
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={"class_output": keras.losses.CategoricalCrossentropy()},
)

"""
Passing data to a multi-input or multi-output model in `fit()` works in a
similar way as
specifying a loss function in compile: you can pass **lists of NumPy arrays**
(with 1:1 mapping to the outputs that received a loss function) or **dicts
mapping output names to NumPy arrays**.
"""

model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[
        keras.losses.MeanSquaredError(),
        keras.losses.CategoricalCrossentropy(),
    ],
)

# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))

# Fit on lists
model.fit(
    [img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1
)

# Alternatively, fit on dicts
model.fit(
    {"img_input": img_data, "ts_input": ts_data},
    {"score_output": score_targets, "class_output": class_targets},
    batch_size=32,
    epochs=1,
)

"""
Here's the `Dataset` use case: similarly to what we did for NumPy arrays, the
`Dataset` should return a tuple of dicts.
"""

train_dataset = tf.data.Dataset.from_tensor_slices(
    (
        {"img_input": img_data, "ts_input": ts_data},
        {"score_output": score_targets, "class_output": class_targets},
    )
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)

model.fit(train_dataset, epochs=1)

"""
## Using callbacks
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/distributed_training_with_tensorflow.py
guides/distributed_training_with_tensorflow.py
""" Title: Multi-GPU distributed training with TensorFlow Author: [fchollet](https://twitter.com/fchollet) Date created: 2020/04/28 Last modified: 2023/06/29 Description: Guide to multi-GPU training for Keras models with TensorFlow. Accelerator: GPU """ """ ## Introduction There are generally two ways to distribute computation across multiple devices: **Data parallelism**, where a single model gets replicated on multiple devices or multiple machines. Each of them processes different batches of data, then they merge their results. There exist many variants of this setup, that differ in how the different model replicas merge results, in whether they stay in sync at every batch or whether they are more loosely coupled, etc. **Model parallelism**, where different parts of a single model run on different devices, processing a single batch of data together. This works best with models that have a naturally-parallel architecture, such as models that feature multiple branches. This guide focuses on data parallelism, in particular **synchronous data parallelism**, where the different replicas of the model stay in sync after each batch they process. Synchronicity keeps the model convergence behavior identical to what you would see for single-device training. Specifically, this guide teaches you how to use the `tf.distribute` API to train Keras models on multiple GPUs, with minimal changes to your code, on multiple GPUs (typically 2 to 16) installed on a single machine (single host, multi-device training). This is the most common setup for researchers and small-scale industry workflows. """ """ ## Setup """ import os os.environ["KERAS_BACKEND"] = "tensorflow" import tensorflow as tf import keras """ ## Single-host, multi-device synchronous training In this setup, you have one machine with several GPUs on it (typically 2 to 16). Each device will run a copy of your model (called a **replica**). 
For simplicity, in what follows, we'll assume we're dealing with 8 GPUs, at no loss of generality. **How it works** At each step of training: - The current batch of data (called **global batch**) is split into 8 different sub-batches (called **local batches**). For instance, if the global batch has 512 samples, each of the 8 local batches will have 64 samples. - Each of the 8 replicas independently processes a local batch: they run a forward pass, then a backward pass, outputting the gradient of the weights with respect to the loss of the model on the local batch. - The weight updates originating from local gradients are efficiently merged across the 8 replicas. Because this is done at the end of every step, the replicas always stay in sync. In practice, the process of synchronously updating the weights of the model replicas is handled at the level of each individual weight variable. This is done through a **mirrored variable** object. **How to use it** To do single-host, multi-device synchronous training with a Keras model, you would use the [`tf.distribute.MirroredStrategy` API]( https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy). Here's how it works: - Instantiate a `MirroredStrategy`, optionally configuring which specific devices you want to use (by default the strategy will use all GPUs available). - Use the strategy object to open a scope, and within this scope, create all the Keras objects you need that contain variables. Typically, that means **creating & compiling the model** inside the distribution scope. In some cases, the first call to `fit()` may also create variables, so it's a good idea to put your `fit()` call in the scope as well. - Train the model via `fit()` as usual. Importantly, we recommend that you use `tf.data.Dataset` objects to load data in a multi-device or distributed workflow. Schematically, it looks like this: ```python # Create a MirroredStrategy. 
strategy = tf.distribute.MirroredStrategy() print('Number of devices: {}'.format(strategy.num_replicas_in_sync)) # Open a strategy scope. with strategy.scope(): # Everything that creates variables should be under the strategy scope. # In general this is only model construction & `compile()`. model = Model(...) model.compile(...) # Train the model on all available devices. model.fit(train_dataset, validation_data=val_dataset, ...) # Test the model on all available devices. model.evaluate(test_dataset) ``` Here's a simple end-to-end runnable example: """ def get_compiled_model(): # Make a simple 2-layer densely-connected neural network. inputs = keras.Input(shape=(784,)) x = keras.layers.Dense(256, activation="relu")(inputs) x = keras.layers.Dense(256, activation="relu")(x) outputs = keras.layers.Dense(10)(x) model = keras.Model(inputs, outputs) model.compile( optimizer=keras.optimizers.Adam(), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy()], ) return model def get_dataset(): batch_size = 32 num_val_samples = 10000 # Return the MNIST dataset in the form of a `tf.data.Dataset`. (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Preprocess the data (these are Numpy arrays) x_train = x_train.reshape(-1, 784).astype("float32") / 255 x_test = x_test.reshape(-1, 784).astype("float32") / 255 y_train = y_train.astype("float32") y_test = y_test.astype("float32") # Reserve num_val_samples samples for validation x_val = x_train[-num_val_samples:] y_val = y_train[-num_val_samples:] x_train = x_train[:-num_val_samples] y_train = y_train[:-num_val_samples] return ( tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch( batch_size ), tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(batch_size), tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size), ) # Create a MirroredStrategy. 
strategy = tf.distribute.MirroredStrategy()
print("Number of devices: {}".format(strategy.num_replicas_in_sync))

# Open a strategy scope.
with strategy.scope():
    # Everything that creates variables should be under the strategy scope.
    # In general this is only model construction & `compile()`.
    model = get_compiled_model()

    # Train the model on all available devices.
    train_dataset, val_dataset, test_dataset = get_dataset()
    model.fit(train_dataset, epochs=2, validation_data=val_dataset)

    # Test the model on all available devices.
    model.evaluate(test_dataset)

"""
## Using callbacks to ensure fault tolerance

When using distributed training, you should always make sure you have a
strategy to recover from failure (fault tolerance). The simplest way to handle
this is to pass a `ModelCheckpoint` callback to `fit()`, to save your model at
regular intervals (e.g. every 100 batches or every epoch). You can then
restart training from your saved model.

Here's a simple example:
"""

# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
    os.makedirs(checkpoint_dir)


def make_or_restore_model():
    # Either restore the latest model, or create a fresh one
    # if there is no checkpoint available.
    checkpoints = [
        os.path.join(checkpoint_dir, name)
        for name in os.listdir(checkpoint_dir)
    ]
    if checkpoints:
        latest_checkpoint = max(checkpoints, key=os.path.getctime)
        print("Restoring from", latest_checkpoint)
        return keras.models.load_model(latest_checkpoint)
    print("Creating a new model")
    return get_compiled_model()


def run_training(epochs=1):
    # Create a MirroredStrategy.
    strategy = tf.distribute.MirroredStrategy()

    # Open a strategy scope and create/restore the model
    with strategy.scope():
        model = make_or_restore_model()

        callbacks = [
            # This callback saves a SavedModel every epoch
            # We include the current epoch in the folder name.
            keras.callbacks.ModelCheckpoint(
                filepath=os.path.join(checkpoint_dir, "ckpt-{epoch}.keras"),
                save_freq="epoch",
            )
        ]
        model.fit(
            train_dataset,
            epochs=epochs,
            callbacks=callbacks,
            validation_data=val_dataset,
            verbose=2,
        )


# Running the first time creates the model
run_training(epochs=1)

# Calling the same function again will resume from where we left off
run_training(epochs=1)

"""
## `tf.data` performance tips

When doing distributed training, the efficiency with which you load data can
often become critical. Here are a few tips to make sure your `tf.data`
pipelines run as fast as possible.

**Note about dataset batching**

When creating your dataset, make sure it is batched with the global batch
size. For instance, if each of your 8 GPUs is capable of running a batch of 64
samples, you can use a global batch size of 512.

**Calling `dataset.cache()`**

If you call `.cache()` on a dataset, its data will be cached after running
through the first iteration over the data. Every subsequent iteration will use
the cached data. The cache can be in memory (the default) or in a local file
you specify.

This can improve performance when:

- Your data is not expected to change from iteration to iteration
- You are reading data from a remote distributed filesystem
- You are reading data from local disk, but your data would fit in memory and
  your workflow is significantly IO-bound (e.g. reading & decoding image
  files).

**Calling `dataset.prefetch(buffer_size)`**

You should almost always call `.prefetch(buffer_size)` after creating a
dataset. It means your data pipeline will run asynchronously from your model,
with new samples being preprocessed and stored in a buffer while the current
batch samples are used to train the model. The next batch will be prefetched
in GPU memory by the time the current batch is over.
"""

"""
That's it!
"""
python
Apache-2.0
c67eddb4ff8b615886893ca996dc216bc923d598
2026-01-04T14:38:29.819962Z
false
keras-team/keras
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/writing_a_custom_training_loop_in_jax.py
guides/writing_a_custom_training_loop_in_jax.py
""" Title: Writing a training loop from scratch in JAX Author: [fchollet](https://twitter.com/fchollet) Date created: 2023/06/25 Last modified: 2023/06/25 Description: Writing low-level training & evaluation loops in JAX. Accelerator: None """ """ ## Setup """ import os # This guide can only be run with the jax backend. os.environ["KERAS_BACKEND"] = "jax" import jax # We import TF so we can use tf.data. import tensorflow as tf import keras import numpy as np """ ## Introduction Keras provides default training and evaluation loops, `fit()` and `evaluate()`. Their usage is covered in the guide [Training & evaluation with the built-in methods](https://keras.io/guides/training_with_built_in_methods/). If you want to customize the learning algorithm of your model while still leveraging the convenience of `fit()` (for instance, to train a GAN using `fit()`), you can subclass the `Model` class and implement your own `train_step()` method, which is called repeatedly during `fit()`. Now, if you want very low-level control over training & evaluation, you should write your own training & evaluation loops from scratch. This is what this guide is about. """ """ ## A first end-to-end example To write a custom training loop, we need the following ingredients: - A model to train, of course. - An optimizer. You could either use an optimizer from `keras.optimizers`, or one from the `optax` package. - A loss function. - A dataset. The standard in the JAX ecosystem is to load data via `tf.data`, so that's what we'll use. Let's line them up. First, let's get the model and the MNIST dataset: """ def get_model(): inputs = keras.Input(shape=(784,), name="digits") x1 = keras.layers.Dense(64, activation="relu")(inputs) x2 = keras.layers.Dense(64, activation="relu")(x1) outputs = keras.layers.Dense(10, name="predictions")(x2) model = keras.Model(inputs=inputs, outputs=outputs) return model model = get_model() # Prepare the training dataset. 
batch_size = 32
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784)).astype("float32")
x_test = np.reshape(x_test, (-1, 784)).astype("float32")
y_train = keras.utils.to_categorical(y_train)
y_test = keras.utils.to_categorical(y_test)

# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]

# Prepare the training dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)

# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(batch_size)

"""
Next, here's the loss function and the optimizer.
We'll use a Keras optimizer in this case.
"""

# Instantiate a loss function.
loss_fn = keras.losses.CategoricalCrossentropy(from_logits=True)

# Instantiate an optimizer.
optimizer = keras.optimizers.Adam(learning_rate=1e-3)

"""
### Getting gradients in JAX

Let's train our model using mini-batch gradient descent with a custom training loop.

In JAX, gradients are computed via *metaprogramming*: you call `jax.grad` (or
`jax.value_and_grad`) on a function in order to create a gradient-computing function
for that first function.

So the first thing we need is a function that returns the loss value.
That's the function we'll use to generate the gradient function. Something like this:

```python
def compute_loss(x, y):
    ...
    return loss
```

Once you have such a function, you can compute gradients via metaprogramming as such:

```python
grad_fn = jax.grad(compute_loss)
grads = grad_fn(x, y)
```

Typically, you don't just want to get the gradient values, you also want to get
the loss value.
You can do this by using `jax.value_and_grad` instead of `jax.grad`:

```python
grad_fn = jax.value_and_grad(compute_loss)
loss, grads = grad_fn(x, y)
```

### JAX computation is purely stateless

In JAX, everything must be a stateless function -- so our loss computation function
must be stateless as well. That means that all Keras variables (e.g. weight tensors)
must be passed as function inputs, and any variable that has been updated during the
forward pass must be returned as function output. The function must have no side
effects.

During the forward pass, the non-trainable variables of a Keras model might get
updated. These variables could be, for instance, RNG seed state variables or
BatchNormalization statistics. We're going to need to return those. So we need
something like this:

```python
def compute_loss_and_updates(trainable_variables, non_trainable_variables, x, y):
    ...
    return loss, non_trainable_variables
```

Once you have such a function, you can get the gradient function by specifying
`has_aux` in `value_and_grad`: it tells JAX that the loss computation function
returns more outputs than just the loss. Note that the loss should always be the
first output.

```python
grad_fn = jax.value_and_grad(compute_loss_and_updates, has_aux=True)
(loss, non_trainable_variables), grads = grad_fn(
    trainable_variables, non_trainable_variables, x, y
)
```

Now that we have established the basics, let's implement this `compute_loss_and_updates`
function. Keras models have a `stateless_call` method which will come in handy here.
It works just like `model.__call__`, but it requires you to explicitly pass the value of
all the variables in the model, and it returns not just the `__call__` outputs but also
the (potentially updated) non-trainable variables.
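To make the `has_aux` mechanics concrete before applying them to the model, here is a
tiny self-contained example. The function and names are toy stand-ins, not part of the
guide's model:

```python
import jax
import jax.numpy as jnp


# Toy stand-in for `compute_loss_and_updates`: returns a loss plus an
# auxiliary output (here, a dictionary holding the predictions).
def loss_and_aux(w, x, y):
    y_pred = x @ w
    loss = jnp.mean((y_pred - y) ** 2)
    return loss, {"y_pred": y_pred}


grad_fn = jax.value_and_grad(loss_and_aux, has_aux=True)

w = jnp.ones((3,))
x = jnp.eye(3)
y = jnp.zeros((3,))

# The loss and the auxiliary outputs come back together, grouped as a pair.
(loss, aux), grads = grad_fn(w, x, y)
print(float(loss))  # 1.0
print(grads.shape)  # (3,): gradients have the same shape as `w`
```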
""" def compute_loss_and_updates( trainable_variables, non_trainable_variables, x, y ): y_pred, non_trainable_variables = model.stateless_call( trainable_variables, non_trainable_variables, x ) loss = loss_fn(y, y_pred) return loss, non_trainable_variables """ Let's get the gradient function: """ grad_fn = jax.value_and_grad(compute_loss_and_updates, has_aux=True) """ ### The training step function Next, let's implement the end-to-end training step, the function that will both run the forward pass, compute the loss, compute the gradients, but also use the optimizer to update the trainable variables. This function also needs to be stateless, so it will get as input a `state` tuple that includes every state element we're going to use: - `trainable_variables` and `non_trainable_variables`: the model's variables. - `optimizer_variables`: the optimizer's state variables, such as momentum accumulators. To update the trainable variables, we use the optimizer's stateless method `stateless_apply`. It's equivalent to `optimizer.apply()`, but it requires always passing `trainable_variables` and `optimizer_variables`. It returns both the updated trainable variables and the updated optimizer_variables. """ def train_step(state, data): trainable_variables, non_trainable_variables, optimizer_variables = state x, y = data (loss, non_trainable_variables), grads = grad_fn( trainable_variables, non_trainable_variables, x, y ) trainable_variables, optimizer_variables = optimizer.stateless_apply( optimizer_variables, grads, trainable_variables ) # Return updated state return loss, ( trainable_variables, non_trainable_variables, optimizer_variables, ) """ ### Make it fast with `jax.jit` By default, JAX operations run eagerly, just like in TensorFlow eager mode and PyTorch eager mode. And just like TensorFlow eager mode and PyTorch eager mode, it's pretty slow -- eager mode is better used as a debugging environment, not as a way to do any actual work. 
So let's make our `train_step` fast by compiling it. When you have a stateless JAX
function, you can compile it to XLA via the `@jax.jit` decorator. It will get traced
during its first execution, and in subsequent executions you will be executing the
traced graph (this is just like `@tf.function(jit_compile=True)`). Let's try it:
"""


@jax.jit
def train_step(state, data):
    trainable_variables, non_trainable_variables, optimizer_variables = state
    x, y = data
    (loss, non_trainable_variables), grads = grad_fn(
        trainable_variables, non_trainable_variables, x, y
    )
    trainable_variables, optimizer_variables = optimizer.stateless_apply(
        optimizer_variables, grads, trainable_variables
    )
    # Return updated state
    return loss, (
        trainable_variables,
        non_trainable_variables,
        optimizer_variables,
    )


"""
We're now ready to train our model. The training loop itself is trivial: we just
repeatedly call `loss, state = train_step(state, data)`.

Note:

- We convert the TF tensors yielded by the `tf.data.Dataset` to NumPy before passing
them to our JAX function.
- All variables must be built beforehand: the model must be built and the optimizer
must be built. Since we're using a Functional API model, it's already built, but if it
were a subclassed model you'd need to call it on a batch of data to build it.
"""

# Build optimizer variables.
optimizer.build(model.trainable_variables)

trainable_variables = model.trainable_variables
non_trainable_variables = model.non_trainable_variables
optimizer_variables = optimizer.variables
state = trainable_variables, non_trainable_variables, optimizer_variables

# Training loop
for step, data in enumerate(train_dataset):
    data = (data[0].numpy(), data[1].numpy())
    loss, state = train_step(state, data)
    # Log every 100 batches.
    if step % 100 == 0:
        print(f"Training loss (for 1 batch) at step {step}: {float(loss):.4f}")
        print(f"Seen so far: {(step + 1) * batch_size} samples")

"""
A key thing to notice here is that the loop is entirely stateless -- the variables
attached to the model (`model.weights`) are never getting updated during the loop.
Their new values are only stored in the `state` tuple. That means that at some point,
before saving the model, you should be attaching the new variable values back to the
model.

Just call `variable.assign(new_value)` on each model variable you want to update:
"""

trainable_variables, non_trainable_variables, optimizer_variables = state
for variable, value in zip(model.trainable_variables, trainable_variables):
    variable.assign(value)
for variable, value in zip(
    model.non_trainable_variables, non_trainable_variables
):
    variable.assign(value)

"""
## Low-level handling of metrics

Let's add metrics monitoring to this basic training loop.

You can readily reuse built-in Keras metrics (or custom ones you wrote) in such
training loops written from scratch. Here's the flow:

- Instantiate the metric at the start of the loop
- Include `metric_variables` in the `train_step` arguments
and `compute_loss_and_updates` arguments.
- Call `metric.stateless_update_state()` in the `compute_loss_and_updates` function.
It's equivalent to `update_state()` -- only stateless.
- When you need to display the current value of the metric, outside the `train_step`
(in the eager scope), attach the new metric variable values to the metric object
and call `metric.result()`.
- Call `metric.reset_state()` when you need to clear the state of the metric
(typically at the end of an epoch)

Let's use this knowledge to compute `CategoricalAccuracy` on training and
validation data at the end of training:
"""

# Get a fresh model
model = get_model()

# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.CategoricalCrossentropy(from_logits=True)

# Prepare the metrics.
train_acc_metric = keras.metrics.CategoricalAccuracy()
val_acc_metric = keras.metrics.CategoricalAccuracy()


def compute_loss_and_updates(
    trainable_variables, non_trainable_variables, metric_variables, x, y
):
    y_pred, non_trainable_variables = model.stateless_call(
        trainable_variables, non_trainable_variables, x
    )
    loss = loss_fn(y, y_pred)
    metric_variables = train_acc_metric.stateless_update_state(
        metric_variables, y, y_pred
    )
    return loss, (non_trainable_variables, metric_variables)


grad_fn = jax.value_and_grad(compute_loss_and_updates, has_aux=True)


@jax.jit
def train_step(state, data):
    (
        trainable_variables,
        non_trainable_variables,
        optimizer_variables,
        metric_variables,
    ) = state
    x, y = data
    (loss, (non_trainable_variables, metric_variables)), grads = grad_fn(
        trainable_variables, non_trainable_variables, metric_variables, x, y
    )
    trainable_variables, optimizer_variables = optimizer.stateless_apply(
        optimizer_variables, grads, trainable_variables
    )
    # Return updated state
    return loss, (
        trainable_variables,
        non_trainable_variables,
        optimizer_variables,
        metric_variables,
    )


"""
We'll also prepare an evaluation step function:
"""


@jax.jit
def eval_step(state, data):
    trainable_variables, non_trainable_variables, metric_variables = state
    x, y = data
    y_pred, non_trainable_variables = model.stateless_call(
        trainable_variables, non_trainable_variables, x
    )
    loss = loss_fn(y, y_pred)
    metric_variables = val_acc_metric.stateless_update_state(
        metric_variables, y, y_pred
    )
    return loss, (
        trainable_variables,
        non_trainable_variables,
        metric_variables,
    )


"""
Here are our loops:
"""

# Build optimizer variables.
optimizer.build(model.trainable_variables)

trainable_variables = model.trainable_variables
non_trainable_variables = model.non_trainable_variables
optimizer_variables = optimizer.variables
metric_variables = train_acc_metric.variables
state = (
    trainable_variables,
    non_trainable_variables,
    optimizer_variables,
    metric_variables,
)

# Training loop
for step, data in enumerate(train_dataset):
    data = (data[0].numpy(), data[1].numpy())
    loss, state = train_step(state, data)
    # Log every 100 batches.
    if step % 100 == 0:
        print(f"Training loss (for 1 batch) at step {step}: {float(loss):.4f}")
        _, _, _, metric_variables = state
        for variable, value in zip(
            train_acc_metric.variables, metric_variables
        ):
            variable.assign(value)
        print(f"Training accuracy: {train_acc_metric.result()}")
        print(f"Seen so far: {(step + 1) * batch_size} samples")

(
    trainable_variables,
    non_trainable_variables,
    optimizer_variables,
    metric_variables,
) = state
# Switch to the validation metric's (freshly initialized) variables. We unpack
# `state` first, so the training metric's variables don't leak into the eval state.
metric_variables = val_acc_metric.variables
state = trainable_variables, non_trainable_variables, metric_variables

# Eval loop
for step, data in enumerate(val_dataset):
    data = (data[0].numpy(), data[1].numpy())
    loss, state = eval_step(state, data)
    # Log every 100 batches.
    if step % 100 == 0:
        print(
            f"Validation loss (for 1 batch) at step {step}: {float(loss):.4f}"
        )
        _, _, metric_variables = state
        for variable, value in zip(val_acc_metric.variables, metric_variables):
            variable.assign(value)
        print(f"Validation accuracy: {val_acc_metric.result()}")
        print(f"Seen so far: {(step + 1) * batch_size} samples")

"""
## Low-level handling of losses tracked by the model

Layers & models recursively track any losses created during the forward pass
by layers that call `self.add_loss(value)`. The resulting list of scalar loss
values is available via the property `model.losses`
at the end of the forward pass.

If you want to use these loss components, you should sum them
and add them to the main loss in your training step.
Consider this layer, that creates an activity regularization loss:
"""


class ActivityRegularizationLayer(keras.layers.Layer):
    def call(self, inputs):
        self.add_loss(1e-2 * jax.numpy.sum(inputs))
        return inputs


"""
Let's build a really simple model that uses it:
"""

inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = keras.layers.Dense(64, activation="relu")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)

"""
Here's what our `compute_loss_and_updates` function should look like now:

- Pass `return_losses=True` to `model.stateless_call()`.
- Sum the resulting `losses` and add them to the main loss.

Note that since `grad_fn` was created with `has_aux=True`, the auxiliary outputs must
be grouped into a single tuple, so that the function returns a `(loss, aux)` pair.
"""


def compute_loss_and_updates(
    trainable_variables, non_trainable_variables, metric_variables, x, y
):
    y_pred, non_trainable_variables, losses = model.stateless_call(
        trainable_variables, non_trainable_variables, x, return_losses=True
    )
    loss = loss_fn(y, y_pred)
    if losses:
        loss += jax.numpy.sum(losses)
    metric_variables = train_acc_metric.stateless_update_state(
        metric_variables, y, y_pred
    )
    return loss, (non_trainable_variables, metric_variables)


"""
That's it!
"""
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/making_new_layers_and_models_via_subclassing.py
guides/making_new_layers_and_models_via_subclassing.py
""" Title: Making new layers and models via subclassing Author: [fchollet](https://twitter.com/fchollet) Date created: 2019/03/01 Last modified: 2023/06/25 Description: Complete guide to writing `Layer` and `Model` objects from scratch. Accelerator: None """ """ ## Introduction This guide will cover everything you need to know to build your own subclassed layers and models. In particular, you'll learn about the following features: - The `Layer` class - The `add_weight()` method - Trainable and non-trainable weights - The `build()` method - Making sure your layers can be used with any backend - The `add_loss()` method - The `training` argument in `call()` - The `mask` argument in `call()` - Making sure your layers can be serialized Let's dive in. """ """ ## Setup """ import numpy as np import keras from keras import ops from keras import layers """ ## The `Layer` class: the combination of state (weights) and some computation One of the central abstractions in Keras is the `Layer` class. A layer encapsulates both a state (the layer's "weights") and a transformation from inputs to outputs (a "call", the layer's forward pass). Here's a densely-connected layer. It has two state variables: the variables `w` and `b`. """ class Linear(keras.layers.Layer): def __init__(self, units=32, input_dim=32): super().__init__() self.w = self.add_weight( shape=(input_dim, units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(units,), initializer="zeros", trainable=True ) def call(self, inputs): return ops.matmul(inputs, self.w) + self.b """ You would use a layer by calling it on some tensor input(s), much like a Python function. 
""" x = ops.ones((2, 2)) linear_layer = Linear(4, 2) y = linear_layer(x) print(y) """ Note that the weights `w` and `b` are automatically tracked by the layer upon being set as layer attributes: """ assert linear_layer.weights == [linear_layer.w, linear_layer.b] """ ## Layers can have non-trainable weights Besides trainable weights, you can add non-trainable weights to a layer as well. Such weights are meant not to be taken into account during backpropagation, when you are training the layer. Here's how to add and use a non-trainable weight: """ class ComputeSum(keras.layers.Layer): def __init__(self, input_dim): super().__init__() self.total = self.add_weight( initializer="zeros", shape=(input_dim,), trainable=False ) def call(self, inputs): self.total.assign_add(ops.sum(inputs, axis=0)) return self.total x = ops.ones((2, 2)) my_sum = ComputeSum(2) y = my_sum(x) print(y.numpy()) y = my_sum(x) print(y.numpy()) """ It's part of `layer.weights`, but it gets categorized as a non-trainable weight: """ print("weights:", len(my_sum.weights)) print("non-trainable weights:", len(my_sum.non_trainable_weights)) # It's not included in the trainable weights: print("trainable_weights:", my_sum.trainable_weights) """ ## Best practice: deferring weight creation until the shape of the inputs is known Our `Linear` layer above took an `input_dim` argument that was used to compute the shape of the weights `w` and `b` in `__init__()`: """ class Linear(keras.layers.Layer): def __init__(self, units=32, input_dim=32): super().__init__() self.w = self.add_weight( shape=(input_dim, units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(units,), initializer="zeros", trainable=True ) def call(self, inputs): return ops.matmul(inputs, self.w) + self.b """ In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer. 
In the Keras API, we recommend creating layer weights in the
`build(self, input_shape)` method of your layer. Like this:
"""


class Linear(keras.layers.Layer):
    def __init__(self, units=32):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return ops.matmul(inputs, self.w) + self.b


"""
The `__call__()` method of your layer will automatically run `build()` the first
time it is called. You now have a layer that's lazy and thus easier to use:
"""

# At instantiation, we don't know on what inputs this is going to get called
linear_layer = Linear(32)

# The layer's weights are created dynamically the first time the layer is called
y = linear_layer(x)

"""
Implementing `build()` separately as shown above nicely separates creating weights
only once from using weights in every call.
"""

"""
## Layers are recursively composable

If you assign a Layer instance as an attribute of another Layer, the outer layer
will start tracking the weights created by the inner layer.

We recommend creating such sublayers in the `__init__()` method and leaving it to
the first `__call__()` to trigger building their weights.
""" class MLPBlock(keras.layers.Layer): def __init__(self): super().__init__() self.linear_1 = Linear(32) self.linear_2 = Linear(32) self.linear_3 = Linear(1) def call(self, inputs): x = self.linear_1(inputs) x = keras.activations.relu(x) x = self.linear_2(x) x = keras.activations.relu(x) return self.linear_3(x) mlp = MLPBlock() y = mlp( ops.ones(shape=(3, 64)) ) # The first call to the `mlp` will create the weights print("weights:", len(mlp.weights)) print("trainable weights:", len(mlp.trainable_weights)) """ ## Backend-agnostic layers and backend-specific layers As long as a layer only uses APIs from the `keras.ops` namespace (or other Keras namespaces such as `keras.activations`, `keras.random`, or `keras.layers`), then it can be used with any backend -- TensorFlow, JAX, or PyTorch. All layers you've seen so far in this guide work with all Keras backends. The `keras.ops` namespace gives you access to: - The NumPy API, e.g. `ops.matmul`, `ops.sum`, `ops.reshape`, `ops.stack`, etc. - Neural networks-specific APIs such as `ops.softmax`, `ops.conv`, `ops.binary_crossentropy`, `ops.relu`, etc. You can also use backend-native APIs in your layers (such as `tf.nn` functions), but if you do this, then your layer will only be usable with the backend in question. For instance, you could write the following JAX-specific layer using `jax.numpy`: ```python import jax class Linear(keras.layers.Layer): ... def call(self, inputs): return jax.numpy.matmul(inputs, self.w) + self.b ``` This would be the equivalent TensorFlow-specific layer: ```python import tensorflow as tf class Linear(keras.layers.Layer): ... def call(self, inputs): return tf.matmul(inputs, self.w) + self.b ``` And this would be the equivalent PyTorch-specific layer: ```python import torch class Linear(keras.layers.Layer): ... 
    def call(self, inputs):
        return torch.matmul(inputs, self.w) + self.b
```

Because cross-backend compatibility is a tremendously useful property, we strongly
recommend that you seek to always make your layers backend-agnostic by leveraging
only Keras APIs.
"""

"""
## The `add_loss()` method

When writing the `call()` method of a layer, you can create loss tensors that
you will want to use later, when writing your training loop. This is doable by
calling `self.add_loss(value)`:
"""


# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
    def __init__(self, rate=1e-2):
        super().__init__()
        self.rate = rate

    def call(self, inputs):
        self.add_loss(self.rate * ops.mean(inputs))
        return inputs


"""
These losses (including those created by any inner layer) can be retrieved via
`layer.losses`. This property is reset at the start of every `__call__()` to
the top-level layer, so that `layer.losses` always contains the loss values
created during the last forward pass.
""" class OuterLayer(keras.layers.Layer): def __init__(self): super().__init__() self.activity_reg = ActivityRegularizationLayer(1e-2) def call(self, inputs): return self.activity_reg(inputs) layer = OuterLayer() assert ( len(layer.losses) == 0 ) # No losses yet since the layer has never been called _ = layer(ops.zeros((1, 1))) assert len(layer.losses) == 1 # We created one loss value # `layer.losses` gets reset at the start of each __call__ _ = layer(ops.zeros((1, 1))) assert len(layer.losses) == 1 # This is the loss created during the call above """ In addition, the `loss` property also contains regularization losses created for the weights of any inner layer: """ class OuterLayerWithKernelRegularizer(keras.layers.Layer): def __init__(self): super().__init__() self.dense = keras.layers.Dense( 32, kernel_regularizer=keras.regularizers.l2(1e-3) ) def call(self, inputs): return self.dense(inputs) layer = OuterLayerWithKernelRegularizer() _ = layer(ops.zeros((1, 1))) # This is `1e-3 * sum(layer.dense.kernel ** 2)`, # created by the `kernel_regularizer` above. print(layer.losses) """ These losses are meant to be taken into account when writing custom training loops. They also work seamlessly with `fit()` (they get automatically summed and added to the main loss, if any): """ inputs = keras.Input(shape=(3,)) outputs = ActivityRegularizationLayer()(inputs) model = keras.Model(inputs, outputs) # If there is a loss passed in `compile`, the regularization # losses get added to it model.compile(optimizer="adam", loss="mse") model.fit(np.random.random((2, 3)), np.random.random((2, 3))) # It's also possible not to pass any loss in `compile`, # since the model already has a loss to minimize, via the `add_loss` # call during the forward pass! 
model.compile(optimizer="adam") model.fit(np.random.random((2, 3)), np.random.random((2, 3))) """ ## You can optionally enable serialization on your layers If you need your custom layers to be serializable as part of a [Functional model](/guides/functional_api/), you can optionally implement a `get_config()` method: """ class Linear(keras.layers.Layer): def __init__(self, units=32): super().__init__() self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(self.units,), initializer="random_normal", trainable=True ) def call(self, inputs): return ops.matmul(inputs, self.w) + self.b def get_config(self): return {"units": self.units} # Now you can recreate the layer from its config: layer = Linear(64) config = layer.get_config() print(config) new_layer = Linear.from_config(config) """ Note that the `__init__()` method of the base `Layer` class takes some keyword arguments, in particular a `name` and a `dtype`. It's good practice to pass these arguments to the parent class in `__init__()` and to include them in the layer config: """ class Linear(keras.layers.Layer): def __init__(self, units=32, **kwargs): super().__init__(**kwargs) self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer="random_normal", trainable=True, ) self.b = self.add_weight( shape=(self.units,), initializer="random_normal", trainable=True ) def call(self, inputs): return ops.matmul(inputs, self.w) + self.b def get_config(self): config = super().get_config() config.update({"units": self.units}) return config layer = Linear(64) config = layer.get_config() print(config) new_layer = Linear.from_config(config) """ If you need more flexibility when deserializing the layer from its config, you can also override the `from_config()` class method. 
This is the base implementation of `from_config()`:

```python
def from_config(cls, config):
    return cls(**config)
```

To learn more about serialization and saving, see the complete
[guide to saving and serializing models](/guides/serialization_and_saving/).
"""

"""
## Privileged `training` argument in the `call()` method

Some layers, in particular the `BatchNormalization` layer and the `Dropout`
layer, have different behaviors during training and inference. For such
layers, it is standard practice to expose a `training` (boolean) argument in
the `call()` method.

By exposing this argument in `call()`, you enable the built-in training and
evaluation loops (e.g. `fit()`) to correctly use the layer in training and
inference.
"""


class CustomDropout(keras.layers.Layer):
    def __init__(self, rate, **kwargs):
        super().__init__(**kwargs)
        self.rate = rate

    def call(self, inputs, training=None):
        if training:
            return keras.random.dropout(inputs, rate=self.rate)
        return inputs


"""
## Privileged `mask` argument in the `call()` method

The other privileged argument supported by `call()` is the `mask` argument.

You will find it in all Keras RNN layers. A mask is a boolean tensor (one
boolean value per timestep in the input) used to skip certain input timesteps
when processing timeseries data.

Keras will automatically pass the correct `mask` argument to `__call__()` for
layers that support it, when a mask is generated by a prior layer.
Mask-generating layers are the `Embedding` layer configured with
`mask_zero=True`, and the `Masking` layer.
"""

"""
## The `Model` class

In general, you will use the `Layer` class to define inner computation blocks,
and will use the `Model` class to define the outer model -- the object you
will train.

For instance, in a ResNet50 model, you would have several ResNet blocks
subclassing `Layer`, and a single `Model` encompassing the entire ResNet50
network.
The `Model` class has the same API as `Layer`, with the following differences:

- It exposes built-in training, evaluation, and prediction loops
(`model.fit()`, `model.evaluate()`, `model.predict()`).
- It exposes the list of its inner layers, via the `model.layers` property.
- It exposes saving and serialization APIs (`save()`, `save_weights()`...)

Effectively, the `Layer` class corresponds to what we refer to in the
literature as a "layer" (as in "convolution layer" or "recurrent layer") or as
a "block" (as in "ResNet block" or "Inception block").

Meanwhile, the `Model` class corresponds to what is referred to in the
literature as a "model" (as in "deep learning model") or as a "network" (as in
"deep neural network").

So if you're wondering, "should I use the `Layer` class or the `Model` class?",
ask yourself: will I need to call `fit()` on it? Will I need to call `save()`
on it? If so, go with `Model`. If not (either because your class is just a block
in a bigger system, or because you are writing training & saving code yourself),
use `Layer`.

For instance, we could take our mini-resnet example above, and use it to build
a `Model` that we could train with `fit()`, and that we could save with
`save_weights()`:
"""

"""
```python
class ResNet(keras.Model):

    def __init__(self, num_classes=1000):
        super().__init__()
        self.block_1 = ResNetBlock()
        self.block_2 = ResNetBlock()
        self.global_pool = layers.GlobalAveragePooling2D()
        self.classifier = layers.Dense(num_classes)

    def call(self, inputs):
        x = self.block_1(inputs)
        x = self.block_2(x)
        x = self.global_pool(x)
        return self.classifier(x)


resnet = ResNet()
dataset = ...
resnet.fit(dataset, epochs=10)
resnet.save("filepath.keras")
```
"""

"""
## Putting it all together: an end-to-end example

Here's what you've learned so far:

- A `Layer` encapsulates a state (created in `__init__()` or `build()`) and some
computation (defined in `call()`).
- Layers can be recursively nested to create new, bigger computation blocks.
- Layers are backend-agnostic as long as they only use Keras APIs. You can use backend-native APIs (such as `jax.numpy`, `torch.nn` or `tf.nn`), but then your layer will only be usable with that specific backend. - Layers can create and track losses (typically regularization losses) via `add_loss()`. - The outer container, the thing you want to train, is a `Model`. A `Model` is just like a `Layer`, but with added training and serialization utilities. Let's put all of these things together into an end-to-end example: we're going to implement a Variational AutoEncoder (VAE) in a backend-agnostic fashion -- so that it runs the same with TensorFlow, JAX, and PyTorch. We'll train it on MNIST digits. Our VAE will be a subclass of `Model`, built as a nested composition of layers that subclass `Layer`. It will feature a regularization loss (KL divergence). """ class Sampling(layers.Layer): """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.""" def call(self, inputs): z_mean, z_log_var = inputs batch = ops.shape(z_mean)[0] dim = ops.shape(z_mean)[1] epsilon = keras.random.normal(shape=(batch, dim)) return z_mean + ops.exp(0.5 * z_log_var) * epsilon class Encoder(layers.Layer): """Maps MNIST digits to a triplet (z_mean, z_log_var, z).""" def __init__( self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs ): super().__init__(name=name, **kwargs) self.dense_proj = layers.Dense(intermediate_dim, activation="relu") self.dense_mean = layers.Dense(latent_dim) self.dense_log_var = layers.Dense(latent_dim) self.sampling = Sampling() def call(self, inputs): x = self.dense_proj(inputs) z_mean = self.dense_mean(x) z_log_var = self.dense_log_var(x) z = self.sampling((z_mean, z_log_var)) return z_mean, z_log_var, z class Decoder(layers.Layer): """Converts z, the encoded digit vector, back into a readable digit.""" def __init__( self, original_dim, intermediate_dim=64, name="decoder", **kwargs ): super().__init__(name=name, **kwargs) self.dense_proj = 
layers.Dense(intermediate_dim, activation="relu") self.dense_output = layers.Dense(original_dim, activation="sigmoid") def call(self, inputs): x = self.dense_proj(inputs) return self.dense_output(x) class VariationalAutoEncoder(keras.Model): """Combines the encoder and decoder into an end-to-end model for training.""" def __init__( self, original_dim, intermediate_dim=64, latent_dim=32, name="autoencoder", **kwargs, ): super().__init__(name=name, **kwargs) self.original_dim = original_dim self.encoder = Encoder( latent_dim=latent_dim, intermediate_dim=intermediate_dim ) self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim) def call(self, inputs): z_mean, z_log_var, z = self.encoder(inputs) reconstructed = self.decoder(z) # Add KL divergence regularization loss. kl_loss = -0.5 * ops.mean( z_log_var - ops.square(z_mean) - ops.exp(z_log_var) + 1 ) self.add_loss(kl_loss) return reconstructed """ Let's train it on MNIST using the `fit()` API: """ (x_train, _), _ = keras.datasets.mnist.load_data() x_train = x_train.reshape(60000, 784).astype("float32") / 255 original_dim = 784 vae = VariationalAutoEncoder(784, 64, 32) optimizer = keras.optimizers.Adam(learning_rate=1e-3) vae.compile(optimizer, loss=keras.losses.MeanSquaredError()) vae.fit(x_train, x_train, epochs=2, batch_size=64)
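As a sanity check on the regularization term, here is the KL expression from `VariationalAutoEncoder.call()` restated in plain numpy; it vanishes exactly when the approximate posterior matches the standard-normal prior:

```python
import numpy as np

def kl_divergence(z_mean, z_log_var):
    # Same expression as in VariationalAutoEncoder.call(): mean KL between
    # N(z_mean, exp(z_log_var)) and the N(0, 1) prior, per latent dimension.
    return -0.5 * np.mean(z_log_var - np.square(z_mean) - np.exp(z_log_var) + 1)

print(kl_divergence(np.zeros(4), np.zeros(4)))  # vanishes: posterior equals the prior
print(kl_divergence(np.ones(4), np.zeros(4)))   # positive: a nonzero mean is penalized
```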
python
Apache-2.0
c67eddb4ff8b615886893ca996dc216bc923d598
2026-01-04T14:38:29.819962Z
false
keras-team/keras
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/custom_train_step_in_jax.py
guides/custom_train_step_in_jax.py
""" Title: Customizing what happens in `fit()` with JAX Author: [fchollet](https://twitter.com/fchollet) Date created: 2023/06/27 Last modified: 2023/06/27 Description: Overriding the training step of the Model class with JAX. Accelerator: GPU """ """ ## Introduction When you're doing supervised learning, you can use `fit()` and everything works smoothly. When you need to take control of every little detail, you can write your own training loop entirely from scratch. But what if you need a custom training algorithm, but you still want to benefit from the convenient features of `fit()`, such as callbacks, built-in distribution support, or step fusing? A core principle of Keras is **progressive disclosure of complexity**. You should always be able to get into lower-level workflows in a gradual way. You shouldn't fall off a cliff if the high-level functionality doesn't exactly match your use case. You should be able to gain more control over the small details while retaining a commensurate amount of high-level convenience. When you need to customize what `fit()` does, you should **override the training step function of the `Model` class**. This is the function that is called by `fit()` for every batch of data. You will then be able to call `fit()` as usual -- and it will be running your own learning algorithm. Note that this pattern does not prevent you from building models with the Functional API. You can do this whether you're building `Sequential` models, Functional API models, or subclassed models. Let's see how that works. """ """ ## Setup """ import os # This guide can only be run with the JAX backend. os.environ["KERAS_BACKEND"] = "jax" import jax import keras import numpy as np """ ## A first simple example Let's start from a simple example: - We create a new class that subclasses `keras.Model`. - We implement a fully-stateless `compute_loss_and_updates()` method to compute the loss as well as the updated values for the non-trainable variables of the model. 
Internally, it calls `stateless_call()` and the built-in `compute_loss()`. - We implement a fully-stateless `train_step()` method to compute current metric values (including the loss) as well as updated values for the trainable variables, the optimizer variables, and the metric variables. Note that you can also take into account the `sample_weight` argument by: - Unpacking the data as `x, y, sample_weight = data` - Passing `sample_weight` to `compute_loss()` - Passing `sample_weight` alongside `y` and `y_pred` to metrics in `stateless_update_state()` """ class CustomModel(keras.Model): def compute_loss_and_updates( self, trainable_variables, non_trainable_variables, x, y, training=False, ): y_pred, non_trainable_variables = self.stateless_call( trainable_variables, non_trainable_variables, x, training=training, ) loss = self.compute_loss(x, y, y_pred) return loss, (y_pred, non_trainable_variables) def train_step(self, state, data): ( trainable_variables, non_trainable_variables, optimizer_variables, metrics_variables, ) = state x, y = data # Get the gradient function. grad_fn = jax.value_and_grad( self.compute_loss_and_updates, has_aux=True ) # Compute the gradients. (loss, (y_pred, non_trainable_variables)), grads = grad_fn( trainable_variables, non_trainable_variables, x, y, training=True, ) # Update trainable variables and optimizer variables. ( trainable_variables, optimizer_variables, ) = self.optimizer.stateless_apply( optimizer_variables, grads, trainable_variables ) # Update metrics. 
new_metrics_vars, logs = [], {} for metric in self.metrics: this_metric_vars = metrics_variables[ len(new_metrics_vars) : len(new_metrics_vars) + len(metric.variables) ] if metric.name == "loss": this_metric_vars = metric.stateless_update_state( this_metric_vars, loss ) else: this_metric_vars = metric.stateless_update_state( this_metric_vars, y, y_pred ) logs[metric.name] = metric.stateless_result(this_metric_vars) new_metrics_vars += this_metric_vars # Return metric logs and updated state variables. state = ( trainable_variables, non_trainable_variables, optimizer_variables, new_metrics_vars, ) return logs, state """ Let's try this out: """ # Construct and compile an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(optimizer="adam", loss="mse", metrics=["mae"]) # Just use `fit` as usual x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.fit(x, y, epochs=3) """ ## Going lower-level Naturally, you could just skip passing a loss function in `compile()`, and instead do everything *manually* in `train_step`. Likewise for metrics. Here's a lower-level example that only uses `compile()` to configure the optimizer: """ class CustomModel(keras.Model): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.loss_tracker = keras.metrics.Mean(name="loss") self.mae_metric = keras.metrics.MeanAbsoluteError(name="mae") self.loss_fn = keras.losses.MeanSquaredError() def compute_loss_and_updates( self, trainable_variables, non_trainable_variables, x, y, training=False, ): y_pred, non_trainable_variables = self.stateless_call( trainable_variables, non_trainable_variables, x, training=training, ) loss = self.loss_fn(y, y_pred) return loss, (y_pred, non_trainable_variables) def train_step(self, state, data): ( trainable_variables, non_trainable_variables, optimizer_variables, metrics_variables, ) = state x, y = data # Get the gradient function.
grad_fn = jax.value_and_grad( self.compute_loss_and_updates, has_aux=True ) # Compute the gradients. (loss, (y_pred, non_trainable_variables)), grads = grad_fn( trainable_variables, non_trainable_variables, x, y, training=True, ) # Update trainable variables and optimizer variables. ( trainable_variables, optimizer_variables, ) = self.optimizer.stateless_apply( optimizer_variables, grads, trainable_variables ) # Update metrics. loss_tracker_vars = metrics_variables[ : len(self.loss_tracker.variables) ] mae_metric_vars = metrics_variables[len(self.loss_tracker.variables) :] loss_tracker_vars = self.loss_tracker.stateless_update_state( loss_tracker_vars, loss ) mae_metric_vars = self.mae_metric.stateless_update_state( mae_metric_vars, y, y_pred ) logs = {} logs[self.loss_tracker.name] = self.loss_tracker.stateless_result( loss_tracker_vars ) logs[self.mae_metric.name] = self.mae_metric.stateless_result( mae_metric_vars ) new_metrics_vars = loss_tracker_vars + mae_metric_vars # Return metric logs and updated state variables. state = ( trainable_variables, non_trainable_variables, optimizer_variables, new_metrics_vars, ) return logs, state @property def metrics(self): # We list our `Metric` objects here so that `reset_states()` can be # called automatically at the start of each epoch # or at the start of `evaluate()`. return [self.loss_tracker, self.mae_metric] # Construct an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) # We don't pass a loss or metrics here. model.compile(optimizer="adam") # Just use `fit` as usual -- you can use callbacks, etc. x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.fit(x, y, epochs=5) """ ## Providing your own evaluation step What if you want to do the same for calls to `model.evaluate()`? Then you would override `test_step` in exactly the same way. 
Here's what it looks like: """ class CustomModel(keras.Model): def test_step(self, state, data): # Unpack the data. x, y = data ( trainable_variables, non_trainable_variables, metrics_variables, ) = state # Compute predictions and loss. y_pred, non_trainable_variables = self.stateless_call( trainable_variables, non_trainable_variables, x, training=False, ) loss = self.compute_loss(x, y, y_pred) # Update metrics. new_metrics_vars, logs = [], {} for metric in self.metrics: this_metric_vars = metrics_variables[ len(new_metrics_vars) : len(new_metrics_vars) + len(metric.variables) ] if metric.name == "loss": this_metric_vars = metric.stateless_update_state( this_metric_vars, loss ) else: this_metric_vars = metric.stateless_update_state( this_metric_vars, y, y_pred ) logs[metric.name] = metric.stateless_result(this_metric_vars) new_metrics_vars += this_metric_vars # Return metric logs and updated state variables. state = ( trainable_variables, non_trainable_variables, new_metrics_vars, ) return logs, state # Construct an instance of CustomModel inputs = keras.Input(shape=(32,)) outputs = keras.layers.Dense(1)(inputs) model = CustomModel(inputs, outputs) model.compile(loss="mse", metrics=["mae"]) # Evaluate with our custom test_step x = np.random.random((1000, 32)) y = np.random.random((1000, 1)) model.evaluate(x, y) """ That's it! """
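One subtle pattern in the `train_step()`/`test_step()` implementations above is how the flat `metrics_variables` list is partitioned: each metric owns a contiguous slice of length `len(metric.variables)`, located by how many variables have been consumed so far. A plain-Python sketch of that bookkeeping, using stand-in metric objects rather than real Keras metrics:

```python
class FakeMetric:
    # Stand-in for a Keras metric: only the `name` and `variables` attributes
    # matter for the slicing logic.
    def __init__(self, name, n_vars):
        self.name = name
        self.variables = list(range(n_vars))

metrics = [FakeMetric("loss", 2), FakeMetric("mae", 3)]
metrics_variables = ["l0", "l1", "m0", "m1", "m2"]  # flat, in metric order

new_metrics_vars = []
slices = {}
for metric in metrics:
    start = len(new_metrics_vars)  # number of variables consumed so far
    this_metric_vars = metrics_variables[start : start + len(metric.variables)]
    slices[metric.name] = this_metric_vars
    new_metrics_vars += this_metric_vars

print(slices)  # {'loss': ['l0', 'l1'], 'mae': ['m0', 'm1', 'm2']}
```

This is why the loop appends to `new_metrics_vars` as it goes: the running length doubles as the offset into the flat list.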
python
Apache-2.0
c67eddb4ff8b615886893ca996dc216bc923d598
2026-01-04T14:38:29.819962Z
false
keras-team/keras
https://github.com/keras-team/keras/blob/c67eddb4ff8b615886893ca996dc216bc923d598/guides/sequential_model.py
guides/sequential_model.py
""" Title: The Sequential model Author: [fchollet](https://twitter.com/fchollet) Date created: 2020/04/12 Last modified: 2023/06/25 Description: Complete guide to the Sequential model. Accelerator: GPU """ """ ## Setup """ import keras from keras import layers from keras import ops """ ## When to use a Sequential model A `Sequential` model is appropriate for **a plain stack of layers** where each layer has **exactly one input tensor and one output tensor**. Schematically, the following `Sequential` model: """ # Define Sequential model with 3 layers model = keras.Sequential( [ layers.Dense(2, activation="relu", name="layer1"), layers.Dense(3, activation="relu", name="layer2"), layers.Dense(4, name="layer3"), ] ) # Call model on a test input x = ops.ones((3, 3)) y = model(x) """ is equivalent to this function: """ # Create 3 layers layer1 = layers.Dense(2, activation="relu", name="layer1") layer2 = layers.Dense(3, activation="relu", name="layer2") layer3 = layers.Dense(4, name="layer3") # Call layers on a test input x = ops.ones((3, 3)) y = layer3(layer2(layer1(x))) """ A Sequential model is **not appropriate** when: - Your model has multiple inputs or multiple outputs - Any of your layers has multiple inputs or multiple outputs - You need to do layer sharing - You want non-linear topology (e.g. 
a residual connection, a multi-branch model) """ """ ## Creating a Sequential model You can create a Sequential model by passing a list of layers to the Sequential constructor: """ model = keras.Sequential( [ layers.Dense(2, activation="relu"), layers.Dense(3, activation="relu"), layers.Dense(4), ] ) """ Its layers are accessible via the `layers` attribute: """ model.layers """ You can also create a Sequential model incrementally via the `add()` method: """ model = keras.Sequential() model.add(layers.Dense(2, activation="relu")) model.add(layers.Dense(3, activation="relu")) model.add(layers.Dense(4)) """ Note that there's also a corresponding `pop()` method to remove layers: a Sequential model behaves very much like a list of layers. """ model.pop() print(len(model.layers)) # 2 """ Also note that the Sequential constructor accepts a `name` argument, just like any layer or model in Keras. This is useful to annotate TensorBoard graphs with semantically meaningful names. """ model = keras.Sequential(name="my_sequential") model.add(layers.Dense(2, activation="relu", name="layer1")) model.add(layers.Dense(3, activation="relu", name="layer2")) model.add(layers.Dense(4, name="layer3")) """ ## Specifying the input shape in advance Generally, all layers in Keras need to know the shape of their inputs in order to be able to create their weights. So when you create a layer like this, initially, it has no weights: """ layer = layers.Dense(3) layer.weights # Empty """ It creates its weights the first time it is called on an input, since the shape of the weights depends on the shape of the inputs: """ # Call layer on a test input x = ops.ones((1, 4)) y = layer(x) layer.weights # Now it has weights, of shape (4, 3) and (3,) """ Naturally, this also applies to Sequential models. When you instantiate a Sequential model without an input shape, it isn't "built": it has no weights (and calling `model.weights` results in an error stating just this). 
The weights are created when the model first sees some input data: """ model = keras.Sequential( [ layers.Dense(2, activation="relu"), layers.Dense(3, activation="relu"), layers.Dense(4), ] ) # No weights at this stage! # At this point, you can't do this: # model.weights # You also can't do this: # model.summary() # Call the model on a test input x = ops.ones((1, 4)) y = model(x) print("Number of weights after calling the model:", len(model.weights)) # 6 """ Once a model is "built", you can call its `summary()` method to display its contents: """ model.summary() """ However, it can be very useful when building a Sequential model incrementally to be able to display the summary of the model so far, including the current output shape. In this case, you should start your model by passing an `Input` object to your model, so that it knows its input shape from the start: """ model = keras.Sequential() model.add(keras.Input(shape=(4,))) model.add(layers.Dense(2, activation="relu")) model.summary() """ Note that the `Input` object is not displayed as part of `model.layers`, since it isn't a layer: """ model.layers """ Models built with a predefined input shape like this always have weights (even before seeing any data) and always have a defined output shape. In general, it's a recommended best practice to always specify the input shape of a Sequential model in advance if you know what it is. """ """ ## A common debugging workflow: `add()` + `summary()` When building a new Sequential architecture, it's useful to incrementally stack layers with `add()` and frequently print model summaries. 
For instance, this enables you to monitor how a stack of `Conv2D` and `MaxPooling2D` layers is downsampling image feature maps: """ model = keras.Sequential() model.add(keras.Input(shape=(250, 250, 3))) # 250x250 RGB images model.add(layers.Conv2D(32, 5, strides=2, activation="relu")) model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.MaxPooling2D(3)) # Can you guess what the current output shape is at this point? Probably not. # Let's just print it: model.summary() # The answer was: (40, 40, 32), so we can keep downsampling... model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.MaxPooling2D(3)) model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.Conv2D(32, 3, activation="relu")) model.add(layers.MaxPooling2D(2)) # And now? model.summary() # Now that we have 4x4 feature maps, time to apply global max pooling. model.add(layers.GlobalMaxPooling2D()) # Finally, we add a classification layer. model.add(layers.Dense(10)) """ Very practical, right? """ """ ## What to do once you have a model Once your model architecture is ready, you will want to: - Train your model, evaluate it, and run inference. See our [guide to training & evaluation with the built-in loops]( /guides/training_with_built_in_methods/) - Save your model to disk and restore it. See our [guide to serialization & saving](/guides/serialization_and_saving/). - Speed up model training by leveraging multiple GPUs. See our [guide to multi-GPU and distributed training](https://keras.io/guides/distributed_training/). """ """ ## Feature extraction with a Sequential model Once a Sequential model has been built, it behaves like a [Functional API model](/guides/functional_api/). This means that every layer has an `input` and `output` attribute. 
These attributes can be used to do neat things, like quickly creating a model that extracts the outputs of all intermediate layers in a Sequential model: """ initial_model = keras.Sequential( [ keras.Input(shape=(250, 250, 3)), layers.Conv2D(32, 5, strides=2, activation="relu"), layers.Conv2D(32, 3, activation="relu"), layers.Conv2D(32, 3, activation="relu"), ] ) feature_extractor = keras.Model( inputs=initial_model.inputs, outputs=[layer.output for layer in initial_model.layers], ) # Call feature extractor on test input. x = ops.ones((1, 250, 250, 3)) features = feature_extractor(x) """ Here's a similar example that only extracts features from one layer: """ initial_model = keras.Sequential( [ keras.Input(shape=(250, 250, 3)), layers.Conv2D(32, 5, strides=2, activation="relu"), layers.Conv2D(32, 3, activation="relu", name="my_intermediate_layer"), layers.Conv2D(32, 3, activation="relu"), ] ) feature_extractor = keras.Model( inputs=initial_model.inputs, outputs=initial_model.get_layer(name="my_intermediate_layer").output, ) # Call feature extractor on test input. x = ops.ones((1, 250, 250, 3)) features = feature_extractor(x) """ ## Transfer learning with a Sequential model Transfer learning consists of freezing the bottom layers in a model and only training the top layers. If you aren't familiar with it, make sure to read our [guide to transfer learning](/guides/transfer_learning/). Here are two common transfer learning blueprints involving Sequential models. First, let's say that you have a Sequential model, and you want to freeze all layers except the last one. In this case, you would simply iterate over `model.layers` and set `layer.trainable = False` on each layer, except the last one. Like this: ```python model = keras.Sequential([ keras.Input(shape=(784,)), layers.Dense(32, activation='relu'), layers.Dense(32, activation='relu'), layers.Dense(32, activation='relu'), layers.Dense(10), ]) # Presumably you would want to first load pre-trained weights.
model.load_weights(...) # Freeze all layers except the last one. for layer in model.layers[:-1]: layer.trainable = False # Recompile and train (this will only update the weights of the last layer). model.compile(...) model.fit(...) ``` Another common blueprint is to use a Sequential model to stack a pre-trained model and some freshly initialized classification layers. Like this: ```python # Load a convolutional base with pre-trained weights base_model = keras.applications.Xception( weights='imagenet', include_top=False, pooling='avg') # Freeze the base model base_model.trainable = False # Use a Sequential model to add a trainable classifier on top model = keras.Sequential([ base_model, layers.Dense(1000), ]) # Compile & train model.compile(...) model.fit(...) ``` If you do transfer learning, you will probably find yourself frequently using these two patterns. """ """ That's about all you need to know about Sequential models! To find out more about building models in Keras, see: - [Guide to the Functional API](/guides/functional_api/) - [Guide to making new Layers & Models via subclassing]( /guides/making_new_layers_and_models_via_subclassing/) """
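The `add()` + `summary()` debugging workflow above can be cross-checked with back-of-the-envelope shape arithmetic. A plain-Python sketch, assuming the Keras defaults used in the example (`padding="valid"` convolutions, pooling strides equal to the pool size):

```python
def conv(n, k, s=1):
    # Conv2D output length with padding="valid": floor((n - k) / s) + 1
    return (n - k) // s + 1

def pool(n, p):
    # MaxPooling2D with default strides=pool_size and padding="valid"
    return (n - p) // p + 1

n = 250              # input height/width
n = conv(n, 5, s=2)  # Conv2D(32, 5, strides=2) -> 123
n = conv(n, 3)       # Conv2D(32, 3)            -> 121
n = pool(n, 3)       # MaxPooling2D(3)          -> 40, the (40, 40, 32) summary
n = conv(n, 3)       # -> 38
n = conv(n, 3)       # -> 36
n = pool(n, 3)       # -> 12
n = conv(n, 3)       # -> 10
n = conv(n, 3)       # -> 8
n = pool(n, 2)       # MaxPooling2D(2)          -> 4, the 4x4 feature maps
print(n)  # 4
```

The same arithmetic reproduces both checkpoints the guide prints with `summary()`.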
python
Apache-2.0
c67eddb4ff8b615886893ca996dc216bc923d598
2026-01-04T14:38:29.819962Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/conftest.py
conftest.py
from __future__ import annotations import importlib from pathlib import Path from typing import TYPE_CHECKING import pytest from twisted.web.http import H2_ENABLED from scrapy.utils.reactor import set_asyncio_event_loop_policy from tests.keys import generate_keys from tests.mockserver.http import MockServer if TYPE_CHECKING: from collections.abc import Generator def _py_files(folder): return (str(p) for p in Path(folder).rglob("*.py")) collect_ignore = [ # may need extra deps "docs/_ext", # not a test, but looks like a test "scrapy/utils/testproc.py", "scrapy/utils/testsite.py", "tests/ftpserver.py", "tests/mockserver.py", "tests/pipelines.py", "tests/spiders.py", # contains scripts to be run by tests/test_crawler.py::AsyncCrawlerProcessSubprocess *_py_files("tests/AsyncCrawlerProcess"), # contains scripts to be run by tests/test_crawler.py::AsyncCrawlerRunnerSubprocess *_py_files("tests/AsyncCrawlerRunner"), # contains scripts to be run by tests/test_crawler.py::CrawlerProcessSubprocess *_py_files("tests/CrawlerProcess"), # contains scripts to be run by tests/test_crawler.py::CrawlerRunnerSubprocess *_py_files("tests/CrawlerRunner"), ] base_dir = Path(__file__).parent ignore_file_path = base_dir / "tests" / "ignores.txt" with ignore_file_path.open(encoding="utf-8") as reader: for line in reader: file_path = line.strip() if file_path and file_path[0] != "#": collect_ignore.append(file_path) if not H2_ENABLED: collect_ignore.extend( ( "scrapy/core/downloader/handlers/http2.py", *_py_files("scrapy/core/http2"), ) ) @pytest.fixture(scope="session") def mockserver() -> Generator[MockServer]: with MockServer() as mockserver: yield mockserver @pytest.fixture(scope="session") def reactor_pytest(request) -> str: return request.config.getoption("--reactor") def pytest_configure(config): if config.getoption("--reactor") == "asyncio": # Needed on Windows to switch from proactor to selector for Twisted reactor compatibility. 
# If we decide to run tests with both, we will need to add a new option and check it here. set_asyncio_event_loop_policy() def pytest_runtest_setup(item): # Skip tests based on reactor markers reactor = item.config.getoption("--reactor") if item.get_closest_marker("only_asyncio") and reactor != "asyncio": pytest.skip("This test is only run with --reactor=asyncio") if item.get_closest_marker("only_not_asyncio") and reactor == "asyncio": pytest.skip("This test is only run without --reactor=asyncio") # Skip tests requiring optional dependencies optional_deps = [ "uvloop", "botocore", "boto3", "mitmproxy", ] for module in optional_deps: if item.get_closest_marker(f"requires_{module}"): try: importlib.import_module(module) except ImportError: pytest.skip(f"{module} is not installed") # Generate localhost certificate files, needed by some tests generate_keys()
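The optional-dependency skipping in `pytest_runtest_setup()` boils down to an import probe: attempt the import, and skip the test on `ImportError`. A standalone sketch of that check (the module names below are illustrative):

```python
import importlib

def should_skip(module_name: str) -> bool:
    # Mirror of the loop body above: a test marked requires_<module>
    # is skipped exactly when the module cannot be imported.
    try:
        importlib.import_module(module_name)
    except ImportError:
        return True
    return False

print(should_skip("json"))                      # False: stdlib module imports fine
print(should_skip("surely_not_installed_xyz"))  # True: would trigger pytest.skip()
```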
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/robotstxt.py
scrapy/robotstxt.py
from __future__ import annotations import logging import sys from abc import ABCMeta, abstractmethod from typing import TYPE_CHECKING from urllib.robotparser import RobotFileParser from protego import Protego from scrapy.utils.python import to_unicode if TYPE_CHECKING: # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy import Spider from scrapy.crawler import Crawler logger = logging.getLogger(__name__) def decode_robotstxt( robotstxt_body: bytes, spider: Spider | None, to_native_str_type: bool = False ) -> str: try: if to_native_str_type: body_decoded = to_unicode(robotstxt_body) else: body_decoded = robotstxt_body.decode("utf-8-sig", errors="ignore") except UnicodeDecodeError: # If we found garbage or robots.txt in an encoding other than UTF-8, disregard it. # Switch to 'allow all' state. logger.warning( "Failure while parsing robots.txt. File either contains garbage or " "is in an encoding other than UTF-8, treating it as an empty file.", exc_info=sys.exc_info(), extra={"spider": spider}, ) body_decoded = "" return body_decoded class RobotParser(metaclass=ABCMeta): @classmethod @abstractmethod def from_crawler(cls, crawler: Crawler, robotstxt_body: bytes) -> Self: """Parse the content of a robots.txt_ file as bytes. This must be a class method. It must return a new instance of the parser backend. :param crawler: crawler which made the request :type crawler: :class:`~scrapy.crawler.Crawler` instance :param robotstxt_body: content of a robots.txt_ file. :type robotstxt_body: bytes """ @abstractmethod def allowed(self, url: str | bytes, user_agent: str | bytes) -> bool: """Return ``True`` if ``user_agent`` is allowed to crawl ``url``, otherwise return ``False``. 
:param url: Absolute URL :type url: str or bytes :param user_agent: User agent :type user_agent: str or bytes """ class PythonRobotParser(RobotParser): def __init__(self, robotstxt_body: bytes, spider: Spider | None): self.spider: Spider | None = spider body_decoded = decode_robotstxt(robotstxt_body, spider, to_native_str_type=True) self.rp: RobotFileParser = RobotFileParser() self.rp.parse(body_decoded.splitlines()) @classmethod def from_crawler(cls, crawler: Crawler, robotstxt_body: bytes) -> Self: spider = None if not crawler else crawler.spider return cls(robotstxt_body, spider) def allowed(self, url: str | bytes, user_agent: str | bytes) -> bool: user_agent = to_unicode(user_agent) url = to_unicode(url) return self.rp.can_fetch(user_agent, url) class RerpRobotParser(RobotParser): def __init__(self, robotstxt_body: bytes, spider: Spider | None): from robotexclusionrulesparser import RobotExclusionRulesParser # noqa: PLC0415 self.spider: Spider | None = spider self.rp: RobotExclusionRulesParser = RobotExclusionRulesParser() body_decoded = decode_robotstxt(robotstxt_body, spider) self.rp.parse(body_decoded) @classmethod def from_crawler(cls, crawler: Crawler, robotstxt_body: bytes) -> Self: spider = None if not crawler else crawler.spider return cls(robotstxt_body, spider) def allowed(self, url: str | bytes, user_agent: str | bytes) -> bool: user_agent = to_unicode(user_agent) url = to_unicode(url) return self.rp.is_allowed(user_agent, url) class ProtegoRobotParser(RobotParser): def __init__(self, robotstxt_body: bytes, spider: Spider | None): self.spider: Spider | None = spider body_decoded = decode_robotstxt(robotstxt_body, spider) self.rp = Protego.parse(body_decoded) @classmethod def from_crawler(cls, crawler: Crawler, robotstxt_body: bytes) -> Self: spider = None if not crawler else crawler.spider return cls(robotstxt_body, spider) def allowed(self, url: str | bytes, user_agent: str | bytes) -> bool: user_agent = to_unicode(user_agent) url = to_unicode(url) 
return self.rp.can_fetch(url, user_agent)
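The decoding contract `decode_robotstxt()` relies on is worth making explicit: `"utf-8-sig"` strips a UTF-8 byte-order mark if one is present, and `errors="ignore"` drops undecodable bytes rather than raising. A standalone sketch with fabricated robots.txt bytes:

```python
# A BOM-prefixed robots.txt body with one invalid trailing byte (\xff).
body = b"\xef\xbb\xbfUser-agent: *\nDisallow: /private\xff"

# Same decode call as the non-native-str branch of decode_robotstxt().
decoded = body.decode("utf-8-sig", errors="ignore")

print(decoded.splitlines())  # ['User-agent: *', 'Disallow: /private']
```

Only a `UnicodeDecodeError` raised despite `errors="ignore"` (e.g. via the `to_unicode()` path) triggers the "treat as empty file" fallback.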
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/interfaces.py
scrapy/interfaces.py
# pylint: disable=no-method-argument,no-self-argument from zope.interface import Interface class ISpiderLoader(Interface): def from_settings(settings): """Return an instance of the class for the given settings""" def load(spider_name): """Return the Spider class for the given spider name. If the spider name is not found, it must raise a KeyError.""" def list(): """Return a list with the names of all spiders available in the project""" def find_by_request(request): """Return the list of spiders names that can handle the given request"""
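A minimal in-memory sketch of the `ISpiderLoader` contract above; `DictSpiderLoader` and `QuotesSpider` are hypothetical stand-ins, and the behavioral requirement shown is the one the interface spells out: `load()` must raise `KeyError` for unknown spider names.

```python
class DictSpiderLoader:
    # Hypothetical loader backed by a plain dict instead of project settings.
    def __init__(self, spiders):
        self._spiders = dict(spiders)

    def load(self, spider_name):
        # KeyError propagates for unknown names, as ISpiderLoader requires.
        return self._spiders[spider_name]

    def list(self):
        return list(self._spiders)

class QuotesSpider:  # stand-in spider class
    name = "quotes"

loader = DictSpiderLoader({"quotes": QuotesSpider})
print(loader.list())  # ['quotes']
```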
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/resolver.py
scrapy/resolver.py
from __future__ import annotations

from typing import TYPE_CHECKING, Any

from twisted.internet import defer
from twisted.internet.base import ReactorBase, ThreadedResolver
from twisted.internet.interfaces import (
    IAddress,
    IHostnameResolver,
    IHostResolution,
    IResolutionReceiver,
    IResolverSimple,
)
from zope.interface.declarations import implementer, provider

from scrapy.utils.datatypes import LocalCache

if TYPE_CHECKING:
    from collections.abc import Sequence

    from twisted.internet.defer import Deferred

    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler


# TODO: cache misses
dnscache: LocalCache[str, Any] = LocalCache(10000)


@implementer(IResolverSimple)
class CachingThreadedResolver(ThreadedResolver):
    """
    Default caching resolver. IPv4 only, supports setting a timeout value
    for DNS requests.
    """

    def __init__(self, reactor: ReactorBase, cache_size: int, timeout: float):
        super().__init__(reactor)
        dnscache.limit = cache_size
        self.timeout = timeout

    @classmethod
    def from_crawler(cls, crawler: Crawler, reactor: ReactorBase) -> Self:
        if crawler.settings.getbool("DNSCACHE_ENABLED"):
            cache_size = crawler.settings.getint("DNSCACHE_SIZE")
        else:
            cache_size = 0
        return cls(reactor, cache_size, crawler.settings.getfloat("DNS_TIMEOUT"))

    def install_on_reactor(self) -> None:
        self.reactor.installResolver(self)

    def getHostByName(self, name: str, timeout: Sequence[int] = ()) -> Deferred[str]:
        if name in dnscache:
            return defer.succeed(dnscache[name])
        # in Twisted<=16.6, getHostByName() is always called with
        # a default timeout of 60s (actually passed as (1, 3, 11, 45) tuple),
        # so the input argument above is simply overridden
        # to enforce Scrapy's DNS_TIMEOUT setting's value
        # The timeout arg is typed as Sequence[int] but supports floats.
        timeout = (self.timeout,)  # type: ignore[assignment]
        d = super().getHostByName(name, timeout)
        if dnscache.limit:
            d.addCallback(self._cache_result, name)
        return d

    def _cache_result(self, result: Any, name: str) -> Any:
        dnscache[name] = result
        return result


@implementer(IHostResolution)
class HostResolution:
    def __init__(self, name: str):
        self.name: str = name

    def cancel(self) -> None:
        raise NotImplementedError


@provider(IResolutionReceiver)
class _CachingResolutionReceiver:
    def __init__(self, resolutionReceiver: IResolutionReceiver, hostName: str):
        self.resolutionReceiver: IResolutionReceiver = resolutionReceiver
        self.hostName: str = hostName
        self.addresses: list[IAddress] = []

    def resolutionBegan(self, resolution: IHostResolution) -> None:
        self.resolutionReceiver.resolutionBegan(resolution)
        self.resolution = resolution

    def addressResolved(self, address: IAddress) -> None:
        self.resolutionReceiver.addressResolved(address)
        self.addresses.append(address)

    def resolutionComplete(self) -> None:
        self.resolutionReceiver.resolutionComplete()
        if self.addresses:
            dnscache[self.hostName] = self.addresses


@implementer(IHostnameResolver)
class CachingHostnameResolver:
    """
    Experimental caching resolver. Resolves IPv4 and IPv6 addresses,
    does not support setting a timeout value for DNS requests.
    """

    def __init__(self, reactor: ReactorBase, cache_size: int):
        self.reactor: ReactorBase = reactor
        self.original_resolver: IHostnameResolver = reactor.nameResolver
        dnscache.limit = cache_size

    @classmethod
    def from_crawler(cls, crawler: Crawler, reactor: ReactorBase) -> Self:
        if crawler.settings.getbool("DNSCACHE_ENABLED"):
            cache_size = crawler.settings.getint("DNSCACHE_SIZE")
        else:
            cache_size = 0
        return cls(reactor, cache_size)

    def install_on_reactor(self) -> None:
        self.reactor.installNameResolver(self)

    def resolveHostName(
        self,
        resolutionReceiver: IResolutionReceiver,
        hostName: str,
        portNumber: int = 0,
        addressTypes: Sequence[type[IAddress]] | None = None,
        transportSemantics: str = "TCP",
    ) -> IHostResolution:
        try:
            addresses = dnscache[hostName]
        except KeyError:
            return self.original_resolver.resolveHostName(
                _CachingResolutionReceiver(resolutionReceiver, hostName),
                hostName,
                portNumber,
                addressTypes,
                transportSemantics,
            )
        resolutionReceiver.resolutionBegan(HostResolution(hostName))
        for addr in addresses:
            resolutionReceiver.addressResolved(addr)
        resolutionReceiver.resolutionComplete()
        return resolutionReceiver
python
BSD-3-Clause
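Note on the record above: both resolvers in `scrapy/resolver.py` share a module-level `LocalCache` (a bounded dict) keyed by hostname. The sketch below illustrates that caching idea with only the standard library; `LocalCacheSketch` and `get_host` are hypothetical stand-ins, not Scrapy's actual `scrapy.utils.datatypes.LocalCache` or resolver API, and the `resolve` stub replaces a real DNS lookup.

```python
from collections import OrderedDict


class LocalCacheSketch(OrderedDict):
    """Bounded dict: once `limit` entries exist, the oldest is evicted on
    each new insertion. A limit of 0 disables eviction, mirroring how a
    disabled DNS cache is configured with cache size 0 in the code above."""

    def __init__(self, limit=0):
        super().__init__()
        self.limit = limit

    def __setitem__(self, key, value):
        if self.limit:
            while len(self) >= self.limit:
                self.popitem(last=False)  # drop the oldest entry
        super().__setitem__(key, value)


# Resolver-style lookup: return the cached address if present, otherwise
# "resolve" (stubbed here) and cache the result when caching is enabled.
dnscache = LocalCacheSketch(limit=2)


def get_host(name, resolve=lambda n: f"ip-of-{n}"):
    if name in dnscache:
        return dnscache[name]
    result = resolve(name)
    if dnscache.limit:
        dnscache[name] = result
    return result
```

With `limit=2`, resolving a third hostname evicts the first, which is the behavior that keeps the DNS cache bounded across a long crawl.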
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/shell.py
scrapy/shell.py
"""Scrapy Shell See documentation in docs/topics/shell.rst """ from __future__ import annotations import contextlib import os import signal from typing import TYPE_CHECKING, Any from itemadapter import is_item from twisted.internet import defer, threads from twisted.python import threadable from w3lib.url import any_to_uri import scrapy from scrapy.crawler import Crawler from scrapy.exceptions import IgnoreRequest from scrapy.http import Request, Response from scrapy.settings import Settings from scrapy.spiders import Spider from scrapy.utils.conf import get_config from scrapy.utils.console import DEFAULT_PYTHON_SHELLS, start_python_console from scrapy.utils.datatypes import SequenceExclude from scrapy.utils.defer import _schedule_coro, deferred_f_from_coro_f from scrapy.utils.misc import load_object from scrapy.utils.reactor import is_asyncio_reactor_installed, set_asyncio_event_loop from scrapy.utils.response import open_in_browser if TYPE_CHECKING: from collections.abc import Callable class Shell: relevant_classes: tuple[type, ...] 
= (Crawler, Spider, Request, Response, Settings) def __init__( self, crawler: Crawler, update_vars: Callable[[dict[str, Any]], None] | None = None, code: str | None = None, ): self.crawler: Crawler = crawler self.update_vars: Callable[[dict[str, Any]], None] = update_vars or ( lambda x: None ) self.item_class: type = load_object(crawler.settings["DEFAULT_ITEM_CLASS"]) self.spider: Spider | None = None self.inthread: bool = not threadable.isInIOThread() self.code: str | None = code self.vars: dict[str, Any] = {} def start( self, url: str | None = None, request: Request | None = None, response: Response | None = None, spider: Spider | None = None, redirect: bool = True, ) -> None: # disable accidental Ctrl-C key press from shutting down the engine signal.signal(signal.SIGINT, signal.SIG_IGN) if url: self.fetch(url, spider, redirect=redirect) elif request: self.fetch(request, spider) elif response: request = response.request self.populate_vars(response, request, spider) else: self.populate_vars() if self.code: # pylint: disable-next=eval-used print(eval(self.code, globals(), self.vars)) # noqa: S307 else: # Detect interactive shell setting in scrapy.cfg # e.g.: ~/.config/scrapy.cfg or ~/.scrapy.cfg # [settings] # # shell can be one of ipython, bpython or python; # # to be used as the interactive python console, if available. 
# # (default is ipython, fallbacks in the order listed above) # shell = python cfg = get_config() section, option = "settings", "shell" env = os.environ.get("SCRAPY_PYTHON_SHELL") shells = [] if env: shells += env.strip().lower().split(",") elif cfg.has_option(section, option): shells += [cfg.get(section, option).strip().lower()] else: # try all by default shells += DEFAULT_PYTHON_SHELLS.keys() # always add standard shell as fallback shells += ["python"] start_python_console( self.vars, shells=shells, banner=self.vars.pop("banner", "") ) def _schedule(self, request: Request, spider: Spider | None) -> defer.Deferred[Any]: if is_asyncio_reactor_installed(): # set the asyncio event loop for the current thread event_loop_path = self.crawler.settings["ASYNCIO_EVENT_LOOP"] set_asyncio_event_loop(event_loop_path) def crawl_request(_): assert self.crawler.engine is not None self.crawler.engine.crawl(request) d2 = self._open_spider(request, spider) d2.addCallback(crawl_request) d = _request_deferred(request) d.addCallback(lambda x: (x, spider)) return d @deferred_f_from_coro_f async def _open_spider(self, request: Request, spider: Spider | None) -> None: if self.spider: return if spider is None: spider = self.crawler.spider or self.crawler._create_spider() self.crawler.spider = spider assert self.crawler.engine await self.crawler.engine.open_spider_async(close_if_idle=False) _schedule_coro(self.crawler.engine._start_request_processing()) self.spider = spider def fetch( self, request_or_url: Request | str, spider: Spider | None = None, redirect: bool = True, **kwargs: Any, ) -> None: from twisted.internet import reactor if isinstance(request_or_url, Request): request = request_or_url else: url = any_to_uri(request_or_url) request = Request(url, dont_filter=True, **kwargs) if redirect: request.meta["handle_httpstatus_list"] = SequenceExclude( range(300, 400) ) else: request.meta["handle_httpstatus_all"] = True response = None with contextlib.suppress(IgnoreRequest): response, 
spider = threads.blockingCallFromThread( reactor, self._schedule, request, spider ) self.populate_vars(response, request, spider) def populate_vars( self, response: Response | None = None, request: Request | None = None, spider: Spider | None = None, ) -> None: self.vars["scrapy"] = scrapy self.vars["crawler"] = self.crawler self.vars["item"] = self.item_class() self.vars["settings"] = self.crawler.settings self.vars["spider"] = spider self.vars["request"] = request self.vars["response"] = response if self.inthread: self.vars["fetch"] = self.fetch self.vars["view"] = open_in_browser self.vars["shelp"] = self.print_help self.update_vars(self.vars) if not self.code: self.vars["banner"] = self.get_help() def print_help(self) -> None: print(self.get_help()) def get_help(self) -> str: b = [] b.append("Available Scrapy objects:") b.append( " scrapy scrapy module (contains scrapy.Request, scrapy.Selector, etc)" ) for k, v in sorted(self.vars.items()): if self._is_relevant(v): b.append(f" {k:<10} {v}") b.append("Useful shortcuts:") if self.inthread: b.append( " fetch(url[, redirect=True]) " "Fetch URL and update local objects (by default, redirects are followed)" ) b.append( " fetch(req) " "Fetch a scrapy.Request and update local objects " ) b.append(" shelp() Shell help (print this help)") b.append(" view(response) View response in a browser") return "\n".join(f"[s] {line}" for line in b) + "\n" def _is_relevant(self, value: Any) -> bool: return isinstance(value, self.relevant_classes) or is_item(value) def inspect_response(response: Response, spider: Spider) -> None: """Open a shell to inspect the given response""" # Shell.start removes the SIGINT handler, so save it and re-add it after # the shell has closed sigint_handler = signal.getsignal(signal.SIGINT) Shell(spider.crawler).start(response=response, spider=spider) signal.signal(signal.SIGINT, sigint_handler) def _request_deferred(request: Request) -> defer.Deferred[Any]: """Wrap a request inside a Deferred. 
This function is harmful, do not use it until you know what you are doing. This returns a Deferred whose first pair of callbacks are the request callback and errback. The Deferred also triggers when the request callback/errback is executed (i.e. when the request is downloaded) WARNING: Do not call request.replace() until after the deferred is called. """ request_callback = request.callback request_errback = request.errback def _restore_callbacks(result: Any) -> Any: request.callback = request_callback request.errback = request_errback return result d: defer.Deferred[Any] = defer.Deferred() d.addBoth(_restore_callbacks) if request.callback: d.addCallback(request.callback) if request.errback: d.addErrback(request.errback) request.callback, request.errback = d.callback, d.errback return d
python
BSD-3-Clause
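Note on the record above: `Shell.start()` picks the interactive console by checking the `SCRAPY_PYTHON_SHELL` environment variable first, then the `[settings] shell` option in `scrapy.cfg`, then all known shells, always appending plain `python` as a final fallback. The sketch below replicates just that selection logic in isolation; `pick_shells` is a hypothetical helper and the `defaults` tuple assumes the ipython/bpython/python ordering described in the file's own comment.

```python
def pick_shells(env_value, cfg_value, defaults=("ipython", "bpython", "python")):
    """Mirror Shell.start()'s lookup order: the SCRAPY_PYTHON_SHELL
    environment variable wins, then the scrapy.cfg [settings] shell option,
    then all known shells; plain 'python' is always appended as a fallback."""
    shells = []
    if env_value:
        # the env var may list several shells, comma-separated
        shells += env_value.strip().lower().split(",")
    elif cfg_value:
        shells += [cfg_value.strip().lower()]
    else:  # try all by default
        shells += list(defaults)
    # always add standard shell as fallback
    shells += ["python"]
    return shells
```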
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/mail.py
scrapy/mail.py
""" Mail sending helpers See documentation in docs/topics/email.rst """ from __future__ import annotations import logging from email import encoders as Encoders from email.mime.base import MIMEBase from email.mime.multipart import MIMEMultipart from email.mime.nonmultipart import MIMENonMultipart from email.mime.text import MIMEText from email.utils import formatdate from io import BytesIO from typing import IO, TYPE_CHECKING, Any from twisted.internet import ssl from twisted.internet.defer import Deferred from scrapy.utils.misc import arg_to_iter from scrapy.utils.python import to_bytes if TYPE_CHECKING: from collections.abc import Callable, Sequence # imports twisted.internet.reactor from twisted.mail.smtp import ESMTPSenderFactory from twisted.python.failure import Failure # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy.crawler import Crawler logger = logging.getLogger(__name__) # Defined in the email.utils module, but undocumented: # https://github.com/python/cpython/blob/v3.9.0/Lib/email/utils.py#L42 COMMASPACE = ", " def _to_bytes_or_none(text: str | bytes | None) -> bytes | None: if text is None: return None return to_bytes(text) class MailSender: def __init__( self, smtphost: str = "localhost", mailfrom: str = "scrapy@localhost", smtpuser: str | None = None, smtppass: str | None = None, smtpport: int = 25, smtptls: bool = False, smtpssl: bool = False, debug: bool = False, ): self.smtphost: str = smtphost self.smtpport: int = smtpport self.smtpuser: bytes | None = _to_bytes_or_none(smtpuser) self.smtppass: bytes | None = _to_bytes_or_none(smtppass) self.smtptls: bool = smtptls self.smtpssl: bool = smtpssl self.mailfrom: str = mailfrom self.debug: bool = debug @classmethod def from_crawler(cls, crawler: Crawler) -> Self: settings = crawler.settings return cls( smtphost=settings["MAIL_HOST"], mailfrom=settings["MAIL_FROM"], smtpuser=settings["MAIL_USER"], smtppass=settings["MAIL_PASS"], smtpport=settings.getint("MAIL_PORT"), 
smtptls=settings.getbool("MAIL_TLS"), smtpssl=settings.getbool("MAIL_SSL"), ) def send( self, to: str | list[str], subject: str, body: str, cc: str | list[str] | None = None, attachs: Sequence[tuple[str, str, IO[Any]]] = (), mimetype: str = "text/plain", charset: str | None = None, _callback: Callable[..., None] | None = None, ) -> Deferred[None] | None: from twisted.internet import reactor msg: MIMEBase = ( MIMEMultipart() if attachs else MIMENonMultipart(*mimetype.split("/", 1)) ) to = list(arg_to_iter(to)) cc = list(arg_to_iter(cc)) msg["From"] = self.mailfrom msg["To"] = COMMASPACE.join(to) msg["Date"] = formatdate(localtime=True) msg["Subject"] = subject rcpts = to[:] if cc: rcpts.extend(cc) msg["Cc"] = COMMASPACE.join(cc) if attachs: if charset: msg.set_charset(charset) msg.attach(MIMEText(body, "plain", charset or "us-ascii")) for attach_name, attach_mimetype, f in attachs: part = MIMEBase(*attach_mimetype.split("/")) part.set_payload(f.read()) Encoders.encode_base64(part) part.add_header( "Content-Disposition", "attachment", filename=attach_name ) msg.attach(part) else: msg.set_payload(body, charset) if _callback: _callback(to=to, subject=subject, body=body, cc=cc, attach=attachs, msg=msg) if self.debug: logger.debug( "Debug mail sent OK: To=%(mailto)s Cc=%(mailcc)s " 'Subject="%(mailsubject)s" Attachs=%(mailattachs)d', { "mailto": to, "mailcc": cc, "mailsubject": subject, "mailattachs": len(attachs), }, ) return None dfd: Deferred[Any] = self._sendmail( rcpts, msg.as_string().encode(charset or "utf-8") ) dfd.addCallback(self._sent_ok, to, cc, subject, len(attachs)) dfd.addErrback(self._sent_failed, to, cc, subject, len(attachs)) reactor.addSystemEventTrigger("before", "shutdown", lambda: dfd) return dfd def _sent_ok( self, result: Any, to: list[str], cc: list[str], subject: str, nattachs: int ) -> None: logger.info( "Mail sent OK: To=%(mailto)s Cc=%(mailcc)s " 'Subject="%(mailsubject)s" Attachs=%(mailattachs)d', { "mailto": to, "mailcc": cc, "mailsubject": 
subject, "mailattachs": nattachs, }, ) def _sent_failed( self, failure: Failure, to: list[str], cc: list[str], subject: str, nattachs: int, ) -> Failure: errstr = str(failure.value) logger.error( "Unable to send mail: To=%(mailto)s Cc=%(mailcc)s " 'Subject="%(mailsubject)s" Attachs=%(mailattachs)d' "- %(mailerr)s", { "mailto": to, "mailcc": cc, "mailsubject": subject, "mailattachs": nattachs, "mailerr": errstr, }, ) return failure def _sendmail(self, to_addrs: list[str], msg: bytes) -> Deferred[Any]: from twisted.internet import reactor msg_io = BytesIO(msg) d: Deferred[Any] = Deferred() factory = self._create_sender_factory(to_addrs, msg_io, d) if self.smtpssl: reactor.connectSSL( self.smtphost, self.smtpport, factory, ssl.ClientContextFactory() ) else: reactor.connectTCP(self.smtphost, self.smtpport, factory) return d def _create_sender_factory( self, to_addrs: list[str], msg: IO[bytes], d: Deferred[Any] ) -> ESMTPSenderFactory: # imports twisted.internet.reactor from twisted.mail.smtp import ESMTPSenderFactory # noqa: PLC0415 factory_keywords: dict[str, Any] = { "heloFallback": True, "requireAuthentication": False, "requireTransportSecurity": self.smtptls, "hostname": self.smtphost, } factory = ESMTPSenderFactory( self.smtpuser, self.smtppass, self.mailfrom, to_addrs, msg, d, **factory_keywords, ) factory.noisy = False return factory
python
BSD-3-Clause
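Note on the record above: `MailSender.send()` switches to a multipart message only when attachments are present, base64-encodes each attachment, and sets a `Content-Disposition` header on it. The sketch below reproduces just that message-building step with the standard library, without any SMTP delivery; `build_message` is a hypothetical helper, and it simplifies the no-attachment branch to `MIMEText` rather than the `MIMENonMultipart` used above.

```python
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from io import BytesIO

COMMASPACE = ", "


def build_message(mailfrom, to, subject, body, attachs=()):
    # Multipart only when there are attachments, as in MailSender.send()
    msg = MIMEMultipart() if attachs else MIMEText(body, "plain", "us-ascii")
    msg["From"] = mailfrom
    msg["To"] = COMMASPACE.join(to)
    msg["Subject"] = subject
    if attachs:
        # the body becomes the first part, each attachment a further part
        msg.attach(MIMEText(body, "plain", "us-ascii"))
        for name, mimetype, f in attachs:
            part = MIMEBase(*mimetype.split("/"))
            part.set_payload(f.read())
            encoders.encode_base64(part)
            part.add_header("Content-Disposition", "attachment", filename=name)
            msg.attach(part)
    return msg


msg = build_message(
    "scrapy@localhost",
    ["dev@example.com"],
    "report",
    "see attachment",
    attachs=[("data.csv", "text/csv", BytesIO(b"a,b\n1,2\n"))],
)
```

Serializing `msg.as_string()` yields the exact bytes a sender factory would hand to the SMTP transport.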
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/spiderloader.py
scrapy/spiderloader.py
from __future__ import annotations

import traceback
import warnings
from collections import defaultdict
from typing import TYPE_CHECKING, Protocol, cast

from zope.interface import implementer
from zope.interface.verify import verifyClass

from scrapy.interfaces import ISpiderLoader
from scrapy.utils.misc import load_object, walk_modules
from scrapy.utils.spider import iter_spider_classes

if TYPE_CHECKING:
    from types import ModuleType

    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy import Request, Spider
    from scrapy.settings import BaseSettings


def get_spider_loader(settings: BaseSettings) -> SpiderLoaderProtocol:
    """Get SpiderLoader instance from settings"""
    cls_path = settings.get("SPIDER_LOADER_CLASS")
    loader_cls = load_object(cls_path)
    verifyClass(ISpiderLoader, loader_cls)
    return cast("SpiderLoaderProtocol", loader_cls.from_settings(settings.frozencopy()))


class SpiderLoaderProtocol(Protocol):
    @classmethod
    def from_settings(cls, settings: BaseSettings) -> Self:
        """Return an instance of the class for the given settings"""

    def load(self, spider_name: str) -> type[Spider]:
        """Return the Spider class for the given spider name.

        If the spider name is not found, it must raise a KeyError."""

    def list(self) -> list[str]:
        """Return a list with the names of all spiders available in the project"""

    def find_by_request(self, request: Request) -> __builtins__.list[str]:
        """Return the list of spider names that can handle the given request"""


@implementer(ISpiderLoader)
class SpiderLoader:
    """
    SpiderLoader is a class which locates and loads spiders
    in a Scrapy project.
    """

    def __init__(self, settings: BaseSettings):
        self.spider_modules: list[str] = settings.getlist("SPIDER_MODULES")
        self.warn_only: bool = settings.getbool("SPIDER_LOADER_WARN_ONLY")
        self._spiders: dict[str, type[Spider]] = {}
        self._found: defaultdict[str, list[tuple[str, str]]] = defaultdict(list)
        self._load_all_spiders()

    def _check_name_duplicates(self) -> None:
        dupes = []
        for name, locations in self._found.items():
            dupes.extend(
                [
                    f"  {cls} named {name!r} (in {mod})"
                    for mod, cls in locations
                    if len(locations) > 1
                ]
            )
        if dupes:
            dupes_string = "\n\n".join(dupes)
            warnings.warn(
                "There are several spiders with the same name:\n\n"
                f"{dupes_string}\n\n  This can cause unexpected behavior.",
                category=UserWarning,
            )

    def _load_spiders(self, module: ModuleType) -> None:
        for spcls in iter_spider_classes(module):
            self._found[spcls.name].append((module.__name__, spcls.__name__))
            self._spiders[spcls.name] = spcls

    def _load_all_spiders(self) -> None:
        for name in self.spider_modules:
            try:
                for module in walk_modules(name):
                    self._load_spiders(module)
            except (ImportError, SyntaxError):
                if self.warn_only:
                    warnings.warn(
                        f"\n{traceback.format_exc()}Could not load spiders "
                        f"from module '{name}'. "
                        "See above traceback for details.",
                        category=RuntimeWarning,
                    )
                else:
                    raise
        self._check_name_duplicates()

    @classmethod
    def from_settings(cls, settings: BaseSettings) -> Self:
        return cls(settings)

    def load(self, spider_name: str) -> type[Spider]:
        """
        Return the Spider class for the given spider name. If the spider
        name is not found, raise a KeyError.
        """
        try:
            return self._spiders[spider_name]
        except KeyError:
            raise KeyError(f"Spider not found: {spider_name}")

    def find_by_request(self, request: Request) -> list[str]:
        """
        Return the list of spider names that can handle the given request.
        """
        return [
            name for name, cls in self._spiders.items() if cls.handles_request(request)
        ]

    def list(self) -> list[str]:
        """
        Return a list with the names of all spiders available in the project.
        """
        return list(self._spiders.keys())


@implementer(ISpiderLoader)
class DummySpiderLoader:
    """A dummy spider loader that does not load any spiders."""

    @classmethod
    def from_settings(cls, settings: BaseSettings) -> Self:
        return cls()

    def load(self, spider_name: str) -> type[Spider]:
        raise KeyError("DummySpiderLoader doesn't load any spiders")

    def list(self) -> list[str]:
        return []

    def find_by_request(self, request: Request) -> __builtins__.list[str]:
        return []
python
BSD-3-Clause
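Note on the record above: `SpiderLoader` records every `(module, class)` location per spider name in a `defaultdict(list)`, then flags any name that appears in more than one location. The snippet below isolates that duplicate-detection logic on hypothetical module/class names; the message format mirrors `_check_name_duplicates()` above.

```python
from collections import defaultdict

# Map each spider name to every (module, class) location where it was found,
# as SpiderLoader._load_spiders() does; names seen in more than one place
# are duplicates.
found = defaultdict(list)
for module, clsname, name in [
    ("myproject.spiders.a", "ASpider", "shop"),
    ("myproject.spiders.b", "BSpider", "shop"),
    ("myproject.spiders.c", "CSpider", "news"),
]:
    found[name].append((module, clsname))

dupes = [
    f"  {cls} named {name!r} (in {mod})"
    for name, locations in found.items()
    for mod, cls in locations
    if len(locations) > 1
]
```

With two spiders both named `shop`, both locations are reported, while the uniquely named `news` spider is not.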
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/addons.py
scrapy/addons.py
from __future__ import annotations

import logging
from typing import TYPE_CHECKING, Any

from scrapy.exceptions import NotConfigured
from scrapy.utils.conf import build_component_list
from scrapy.utils.misc import build_from_crawler, load_object

if TYPE_CHECKING:
    from scrapy.crawler import Crawler
    from scrapy.settings import BaseSettings, Settings

logger = logging.getLogger(__name__)


class AddonManager:
    """This class facilitates loading and storing :ref:`topics-addons`."""

    def __init__(self, crawler: Crawler) -> None:
        self.crawler: Crawler = crawler
        self.addons: list[Any] = []

    def load_settings(self, settings: Settings) -> None:
        """Load add-ons and configurations from a settings object and apply them.

        This will load the add-on for every add-on path in the
        ``ADDONS`` setting and execute their ``update_settings`` methods.

        :param settings: The :class:`~scrapy.settings.Settings` object from \
            which to read the add-on configuration
        :type settings: :class:`~scrapy.settings.Settings`
        """
        for clspath in build_component_list(settings["ADDONS"]):
            try:
                addoncls = load_object(clspath)
                addon = build_from_crawler(addoncls, self.crawler)
                if hasattr(addon, "update_settings"):
                    addon.update_settings(settings)
                self.addons.append(addon)
            except NotConfigured as e:
                if e.args:
                    logger.warning(
                        "Disabled %(clspath)s: %(eargs)s",
                        {"clspath": clspath, "eargs": e.args[0]},
                        extra={"crawler": self.crawler},
                    )
        logger.info(
            "Enabled addons:\n%(addons)s",
            {
                "addons": self.addons,
            },
            extra={"crawler": self.crawler},
        )

    @classmethod
    def load_pre_crawler_settings(cls, settings: BaseSettings):
        """Update early settings that do not require a crawler instance,
        such as SPIDER_MODULES.

        Similar to the load_settings method, this loads each add-on configured
        in the ``ADDONS`` setting and calls their
        'update_pre_crawler_settings' class method if present. This method
        doesn't have access to the crawler instance or the addons list.

        :param settings: The :class:`~scrapy.settings.BaseSettings` object from \
            which to read the early add-on configuration
        :type settings: :class:`~scrapy.settings.Settings`
        """
        for clspath in build_component_list(settings["ADDONS"]):
            addoncls = load_object(clspath)
            if hasattr(addoncls, "update_pre_crawler_settings"):
                addoncls.update_pre_crawler_settings(settings)
python
BSD-3-Clause
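Note on the record above: `AddonManager` iterates over `build_component_list(settings["ADDONS"])`, where `ADDONS` maps component paths to numeric priorities. The stand-in below sketches the assumed ordering behavior (components sorted by ascending priority); `component_list` is a hypothetical simplification, since the real `build_component_list` helper also handles `None` values and setting merges not shown here.

```python
def component_list(addons: dict) -> list:
    """Order component paths by ascending priority value, a simplified
    stand-in for scrapy.utils.conf.build_component_list()."""
    return [path for path, prio in sorted(addons.items(), key=lambda kv: kv[1])]


# hypothetical add-on paths with priorities
order = component_list({"myaddons.B": 200, "myaddons.A": 100, "myaddons.C": 150})
```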
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/link.py
scrapy/link.py
""" This module defines the Link object used in Link extractors. For actual link extractors implementation see scrapy.linkextractors, or its documentation in: docs/topics/link-extractors.rst """ class Link: """Link objects represent an extracted link by the LinkExtractor. Using the anchor tag sample below to illustrate the parameters:: <a href="https://example.com/nofollow.html#foo" rel="nofollow">Dont follow this one</a> :param url: the absolute url being linked to in the anchor tag. From the sample, this is ``https://example.com/nofollow.html``. :param text: the text in the anchor tag. From the sample, this is ``Dont follow this one``. :param fragment: the part of the url after the hash symbol. From the sample, this is ``foo``. :param nofollow: an indication of the presence or absence of a nofollow value in the ``rel`` attribute of the anchor tag. """ __slots__ = ["fragment", "nofollow", "text", "url"] def __init__( self, url: str, text: str = "", fragment: str = "", nofollow: bool = False ): if not isinstance(url, str): got = url.__class__.__name__ raise TypeError(f"Link urls must be str objects, got {got}") self.url: str = url self.text: str = text self.fragment: str = fragment self.nofollow: bool = nofollow def __eq__(self, other: object) -> bool: if not isinstance(other, Link): raise NotImplementedError return ( self.url == other.url and self.text == other.text and self.fragment == other.fragment and self.nofollow == other.nofollow ) def __hash__(self) -> int: return ( hash(self.url) ^ hash(self.text) ^ hash(self.fragment) ^ hash(self.nofollow) ) def __repr__(self) -> str: return ( f"Link(url={self.url!r}, text={self.text!r}, " f"fragment={self.fragment!r}, nofollow={self.nofollow!r})" )
python
BSD-3-Clause
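Note on the record above: `Link` is a value object whose equality and hash consider all four fields, so it deduplicates correctly inside sets. The sketch below shows the same pattern in isolation; `LinkSketch` is a hypothetical stand-in, and it uses a tuple hash rather than the XOR-of-hashes used above (either satisfies the rule that equal objects hash equally).

```python
class LinkSketch:
    """Value object: equality and hashing consider all four fields, so two
    links to the same URL with different anchor text remain distinct."""

    __slots__ = ["url", "text", "fragment", "nofollow"]

    def __init__(self, url, text="", fragment="", nofollow=False):
        if not isinstance(url, str):
            raise TypeError(
                f"Link urls must be str objects, got {url.__class__.__name__}"
            )
        self.url, self.text = url, text
        self.fragment, self.nofollow = fragment, nofollow

    def _key(self):
        return (self.url, self.text, self.fragment, self.nofollow)

    def __eq__(self, other):
        return self._key() == other._key()

    def __hash__(self):
        # tuple hash over all fields keeps __eq__ and __hash__ consistent
        return hash(self._key())


a = LinkSketch("https://example.com/", text="home")
b = LinkSketch("https://example.com/", text="home")
c = LinkSketch("https://example.com/", text="start")
```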
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/responsetypes.py
scrapy/responsetypes.py
""" This module implements a class which returns the appropriate Response class based on different criteria. """ from __future__ import annotations from io import StringIO from mimetypes import MimeTypes from pkgutil import get_data from typing import TYPE_CHECKING from scrapy.http import Response from scrapy.utils.misc import load_object from scrapy.utils.python import binary_is_text, to_bytes, to_unicode if TYPE_CHECKING: from collections.abc import Mapping class ResponseTypes: CLASSES = { "text/html": "scrapy.http.HtmlResponse", "application/atom+xml": "scrapy.http.XmlResponse", "application/rdf+xml": "scrapy.http.XmlResponse", "application/rss+xml": "scrapy.http.XmlResponse", "application/xhtml+xml": "scrapy.http.HtmlResponse", "application/vnd.wap.xhtml+xml": "scrapy.http.HtmlResponse", "application/xml": "scrapy.http.XmlResponse", "application/json": "scrapy.http.JsonResponse", "application/x-json": "scrapy.http.JsonResponse", "application/json-amazonui-streaming": "scrapy.http.JsonResponse", "application/javascript": "scrapy.http.TextResponse", "application/x-javascript": "scrapy.http.TextResponse", "text/xml": "scrapy.http.XmlResponse", "text/*": "scrapy.http.TextResponse", } def __init__(self) -> None: self.classes: dict[str, type[Response]] = {} self.mimetypes: MimeTypes = MimeTypes() mimedata = get_data("scrapy", "mime.types") if not mimedata: raise ValueError( "The mime.types file is not found in the Scrapy installation" ) self.mimetypes.readfp(StringIO(mimedata.decode("utf8"))) for mimetype, cls in self.CLASSES.items(): self.classes[mimetype] = load_object(cls) def from_mimetype(self, mimetype: str) -> type[Response]: """Return the most appropriate Response class for the given mimetype""" if mimetype is None: return Response if mimetype in self.classes: return self.classes[mimetype] basetype = f"{mimetype.split('/')[0]}/*" return self.classes.get(basetype, Response) def from_content_type( self, content_type: str | bytes, content_encoding: bytes | None 
= None ) -> type[Response]: """Return the most appropriate Response class from an HTTP Content-Type header""" if content_encoding: return Response mimetype = ( to_unicode(content_type, encoding="latin-1").split(";")[0].strip().lower() ) return self.from_mimetype(mimetype) def from_content_disposition( self, content_disposition: str | bytes ) -> type[Response]: try: filename = ( to_unicode(content_disposition, encoding="latin-1", errors="replace") .split(";")[1] .split("=")[1] .strip("\"'") ) return self.from_filename(filename) except IndexError: return Response def from_headers(self, headers: Mapping[bytes, bytes]) -> type[Response]: """Return the most appropriate Response class by looking at the HTTP headers""" cls = Response if b"Content-Type" in headers: cls = self.from_content_type( content_type=headers[b"Content-Type"], content_encoding=headers.get(b"Content-Encoding"), ) if cls is Response and b"Content-Disposition" in headers: cls = self.from_content_disposition(headers[b"Content-Disposition"]) return cls def from_filename(self, filename: str) -> type[Response]: """Return the most appropriate Response class from a file name""" mimetype, encoding = self.mimetypes.guess_type(filename) if mimetype and not encoding: return self.from_mimetype(mimetype) return Response def from_body(self, body: bytes) -> type[Response]: """Try to guess the appropriate response based on the body content. 
This method is a bit magic and could be improved in the future, but it's not meant to be used except for special cases where response types cannot be guess using more straightforward methods.""" chunk = body[:5000] chunk = to_bytes(chunk) if not binary_is_text(chunk): return self.from_mimetype("application/octet-stream") lowercase_chunk = chunk.lower() if b"<html>" in lowercase_chunk: return self.from_mimetype("text/html") if b"<?xml" in lowercase_chunk: return self.from_mimetype("text/xml") if b"<!doctype html>" in lowercase_chunk: return self.from_mimetype("text/html") return self.from_mimetype("text") def from_args( self, headers: Mapping[bytes, bytes] | None = None, url: str | None = None, filename: str | None = None, body: bytes | None = None, ) -> type[Response]: """Guess the most appropriate Response class based on the given arguments.""" cls = Response if headers is not None: cls = self.from_headers(headers) if cls is Response and url is not None: cls = self.from_filename(url) if cls is Response and filename is not None: cls = self.from_filename(filename) if cls is Response and body is not None: cls = self.from_body(body) return cls responsetypes = ResponseTypes()
python
BSD-3-Clause
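Note on the record above: `ResponseTypes.from_body()` sniffs the first 5000 bytes — binary check first, then HTML and XML markers in the lowercased chunk. The self-contained sketch below mirrors that ordering; `sniff_body` is a hypothetical helper that returns mimetype strings instead of Response classes, and it substitutes a crude NUL-byte check for Scrapy's `binary_is_text()`.

```python
def sniff_body(body: bytes) -> str:
    """Mirror the order of checks in ResponseTypes.from_body(): binary
    content first, then HTML/XML markers in the lowercased leading chunk."""
    chunk = body[:5000].lower()
    # crude stand-in for binary_is_text(): NUL bytes never appear in text
    if b"\x00" in chunk:
        return "application/octet-stream"
    if b"<html>" in chunk or b"<!doctype html>" in chunk:
        return "text/html"
    if b"<?xml" in chunk:
        return "text/xml"
    return "text"
```

Checking the lowercased chunk is what makes `<!DOCTYPE HTML>` and `<!doctype html>` match the same branch.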
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/middleware.py
scrapy/middleware.py
from __future__ import annotations

import logging
import pprint
import warnings
from abc import ABC, abstractmethod
from collections import defaultdict, deque
from typing import TYPE_CHECKING, Any, Concatenate, ParamSpec, TypeVar, cast

from scrapy.exceptions import NotConfigured, ScrapyDeprecationWarning
from scrapy.utils.defer import ensure_awaitable
from scrapy.utils.deprecate import argument_is_required
from scrapy.utils.misc import build_from_crawler, load_object
from scrapy.utils.python import global_object_name

if TYPE_CHECKING:
    from collections.abc import Callable, Iterable

    from twisted.internet.defer import Deferred

    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy import Spider
    from scrapy.crawler import Crawler
    from scrapy.settings import Settings

logger = logging.getLogger(__name__)

_T = TypeVar("_T")
_P = ParamSpec("_P")


class MiddlewareManager(ABC):
    """Base class for implementing middleware managers"""

    component_name: str
    _compat_spider: Spider | None = None

    def __init__(self, *middlewares: Any, crawler: Crawler | None = None) -> None:
        self.crawler: Crawler | None = crawler
        if crawler is None:
            warnings.warn(
                f"MiddlewareManager.__init__() was called without the crawler argument"
                f" when creating {global_object_name(self.__class__)}."
                f" This is deprecated and the argument will be required in future Scrapy versions.",
                category=ScrapyDeprecationWarning,
                stacklevel=2,
            )
        self.middlewares: tuple[Any, ...] = middlewares
        # Only process_spider_output and process_spider_exception can be None.
        # Only process_spider_output can be a tuple, and only until _async compatibility methods are removed.
        self.methods: dict[str, deque[Callable | tuple[Callable, Callable] | None]] = (
            defaultdict(deque)
        )
        self._mw_methods_requiring_spider: set[Callable] = set()
        for mw in middlewares:
            self._add_middleware(mw)

    @property
    def _spider(self) -> Spider:
        if self.crawler is not None:
            if self.crawler.spider is None:
                raise ValueError(
                    f"{type(self).__name__} needs to access self.crawler.spider but it is None."
                )
            return self.crawler.spider
        if self._compat_spider is not None:
            return self._compat_spider
        raise ValueError(f"{type(self).__name__} has no known Spider instance.")

    def _set_compat_spider(self, spider: Spider | None) -> None:
        if spider is None or self.crawler is not None:
            return
        # printing a deprecation warning is the caller's responsibility
        if self._compat_spider is None:
            self._compat_spider = spider
        elif self._compat_spider is not spider:
            raise RuntimeError(
                f"Different instances of Spider were passed to {type(self).__name__}:"
                f" {self._compat_spider} and {spider}"
            )

    @classmethod
    @abstractmethod
    def _get_mwlist_from_settings(cls, settings: Settings) -> list[Any]:
        raise NotImplementedError

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        mwlist = cls._get_mwlist_from_settings(crawler.settings)
        middlewares = []
        enabled = []
        for clspath in mwlist:
            try:
                mwcls = load_object(clspath)
                mw = build_from_crawler(mwcls, crawler)
                middlewares.append(mw)
                enabled.append(clspath)
            except NotConfigured as e:
                if e.args:
                    logger.warning(
                        "Disabled %(clspath)s: %(eargs)s",
                        {"clspath": clspath, "eargs": e.args[0]},
                        extra={"crawler": crawler},
                    )
        logger.info(
            "Enabled %(componentname)ss:\n%(enabledlist)s",
            {
                "componentname": cls.component_name,
                "enabledlist": pprint.pformat(enabled),
            },
            extra={"crawler": crawler},
        )
        return cls(*middlewares, crawler=crawler)

    def _add_middleware(self, mw: Any) -> None:  # noqa: B027
        pass

    def _check_mw_method_spider_arg(self, method: Callable) -> None:
        if argument_is_required(method, "spider"):
            warnings.warn(
                f"{method.__qualname__}() requires a spider argument,"
                f" this is deprecated and the argument will not be passed in future Scrapy versions."
                f" If you need to access the spider instance you can save the crawler instance"
                f" passed to from_crawler() and use its spider attribute.",
                category=ScrapyDeprecationWarning,
                stacklevel=2,
            )
            self._mw_methods_requiring_spider.add(method)

    async def _process_chain(
        self,
        methodname: str,
        obj: _T,
        *args: Any,
        add_spider: bool = False,
        always_add_spider: bool = False,
        warn_deferred: bool = False,
    ) -> _T:
        methods = cast(
            "Iterable[Callable[Concatenate[_T, _P], _T]]", self.methods[methodname]
        )
        for method in methods:
            warn = global_object_name(method) if warn_deferred else None
            if always_add_spider or (
                add_spider and method in self._mw_methods_requiring_spider
            ):
                obj = await ensure_awaitable(
                    method(obj, *(*args, self._spider)), _warn=warn
                )
            else:
                obj = await ensure_awaitable(method(obj, *args), _warn=warn)
        return obj

    def open_spider(self, spider: Spider) -> Deferred[list[None]]:  # pragma: no cover
        raise NotImplementedError(
            "MiddlewareManager.open_spider() is no longer implemented"
            " and will be removed in a future Scrapy version."
        )

    def close_spider(self, spider: Spider) -> Deferred[list[None]]:  # pragma: no cover
        raise NotImplementedError(
            "MiddlewareManager.close_spider() is no longer implemented"
            " and will be removed in a future Scrapy version."
        )
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
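The `_process_chain` helper in the middleware manager above threads a single object through each middleware method in turn, awaiting each result before passing it on. A minimal standalone sketch of that chaining pattern (with hypothetical middleware functions, not Scrapy's actual middlewares) might look like:

```python
import asyncio

# Sketch of the chained-processing pattern behind
# MiddlewareManager._process_chain: each method receives the current
# object plus extra args and returns (or awaits to) the next object.
async def process_chain(methods, obj, *args):
    for method in methods:
        result = method(obj, *args)
        # Accept both plain and async middleware methods.
        if asyncio.iscoroutine(result):
            result = await result
        obj = result
    return obj

# Hypothetical middleware methods, for illustration only.
def add_header(request, tag):
    request["headers"].append(tag)
    return request

async def log_request(request, tag):
    request["log"] = f"processed with {tag}"
    return request

request = {"headers": [], "log": None}
result = asyncio.run(process_chain([add_header, log_request], request, "demo"))
```

The real implementation additionally decides per-method whether to append the deprecated `spider` argument; the sketch keeps only the core chain.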
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/pqueues.py
scrapy/pqueues.py
from __future__ import annotations import hashlib import logging from typing import TYPE_CHECKING, Protocol, cast from scrapy.utils.misc import build_from_crawler if TYPE_CHECKING: from collections.abc import Iterable # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy import Request from scrapy.core.downloader import Downloader from scrapy.crawler import Crawler logger = logging.getLogger(__name__) def _path_safe(text: str) -> str: """ Return a filesystem-safe version of a string ``text`` >>> _path_safe('simple.org').startswith('simple.org') True >>> _path_safe('dash-underscore_.org').startswith('dash-underscore_.org') True >>> _path_safe('some@symbol?').startswith('some_symbol_') True """ pathable_slot = "".join([c if c.isalnum() or c in "-._" else "_" for c in text]) # as we replace some letters we can get collision for different slots # add we add unique part unique_slot = hashlib.md5(text.encode("utf8")).hexdigest() # noqa: S324 return f"{pathable_slot}-{unique_slot}" class QueueProtocol(Protocol): """Protocol for downstream queues of ``ScrapyPriorityQueue``.""" def push(self, request: Request) -> None: ... def pop(self) -> Request | None: ... def close(self) -> None: ... def __len__(self) -> int: ... class ScrapyPriorityQueue: """A priority queue implemented using multiple internal queues (typically, FIFO queues). It uses one internal queue for each priority value. The internal queue must implement the following methods: * push(obj) * pop() * close() * __len__() Optionally, the queue could provide a ``peek`` method, that should return the next object to be returned by ``pop``, but without removing it from the queue. ``__init__`` method of ScrapyPriorityQueue receives a downstream_queue_cls argument, which is a class used to instantiate a new (internal) queue when a new priority is allocated. Only integer priorities should be used. Lower numbers are higher priorities. startprios is a sequence of priorities to start with. 
If the queue was previously closed leaving some priority buckets non-empty, those priorities should be passed in startprios. """ @classmethod def from_crawler( cls, crawler: Crawler, downstream_queue_cls: type[QueueProtocol], key: str, startprios: Iterable[int] = (), *, start_queue_cls: type[QueueProtocol] | None = None, ) -> Self: return cls( crawler, downstream_queue_cls, key, startprios, start_queue_cls=start_queue_cls, ) def __init__( self, crawler: Crawler, downstream_queue_cls: type[QueueProtocol], key: str, startprios: Iterable[int] = (), *, start_queue_cls: type[QueueProtocol] | None = None, ): self.crawler: Crawler = crawler self.downstream_queue_cls: type[QueueProtocol] = downstream_queue_cls self._start_queue_cls: type[QueueProtocol] | None = start_queue_cls self.key: str = key self.queues: dict[int, QueueProtocol] = {} self._start_queues: dict[int, QueueProtocol] = {} self.curprio: int | None = None self.init_prios(startprios) def init_prios(self, startprios: Iterable[int]) -> None: if not startprios: return for priority in startprios: q = self.qfactory(priority) if q: self.queues[priority] = q if self._start_queue_cls: q = self._sqfactory(priority) if q: self._start_queues[priority] = q self.curprio = min(startprios) def qfactory(self, key: int) -> QueueProtocol: return build_from_crawler( self.downstream_queue_cls, self.crawler, self.key + "/" + str(key), ) def _sqfactory(self, key: int) -> QueueProtocol: assert self._start_queue_cls is not None return build_from_crawler( self._start_queue_cls, self.crawler, f"{self.key}/{key}s", ) def priority(self, request: Request) -> int: return -request.priority def push(self, request: Request) -> None: priority = self.priority(request) is_start_request = request.meta.get("is_start_request", False) if is_start_request and self._start_queue_cls: if priority not in self._start_queues: self._start_queues[priority] = self._sqfactory(priority) q = self._start_queues[priority] else: if priority not in self.queues: 
self.queues[priority] = self.qfactory(priority) q = self.queues[priority] q.push(request) # this may fail (eg. serialization error) if self.curprio is None or priority < self.curprio: self.curprio = priority def pop(self) -> Request | None: while self.curprio is not None: try: q = self.queues[self.curprio] except KeyError: pass else: m = q.pop() if not q: del self.queues[self.curprio] q.close() if not self._start_queues: self._update_curprio() return m if self._start_queues: try: q = self._start_queues[self.curprio] except KeyError: self._update_curprio() else: m = q.pop() if not q: del self._start_queues[self.curprio] q.close() self._update_curprio() return m else: self._update_curprio() return None def _update_curprio(self) -> None: prios = { p for queues in (self.queues, self._start_queues) for p, q in queues.items() if q } self.curprio = min(prios) if prios else None def peek(self) -> Request | None: """Returns the next object to be returned by :meth:`pop`, but without removing it from the queue. Raises :exc:`NotImplementedError` if the underlying queue class does not implement a ``peek`` method, which is optional for queues. 
""" if self.curprio is None: return None try: queue = self._start_queues[self.curprio] except KeyError: queue = self.queues[self.curprio] # Protocols can't declare optional members return cast("Request", queue.peek()) # type: ignore[attr-defined] def close(self) -> list[int]: active: set[int] = set() for queues in (self.queues, self._start_queues): for p, q in queues.items(): active.add(p) q.close() return list(active) def __len__(self) -> int: return ( sum( len(x) for queues in (self.queues, self._start_queues) for x in queues.values() ) if self.queues or self._start_queues else 0 ) class DownloaderInterface: def __init__(self, crawler: Crawler): assert crawler.engine self.downloader: Downloader = crawler.engine.downloader def stats(self, possible_slots: Iterable[str]) -> list[tuple[int, str]]: return [(self._active_downloads(slot), slot) for slot in possible_slots] def get_slot_key(self, request: Request) -> str: return self.downloader.get_slot_key(request) def _active_downloads(self, slot: str) -> int: """Return a number of requests in a Downloader for a given slot""" if slot not in self.downloader.slots: return 0 return len(self.downloader.slots[slot].active) class DownloaderAwarePriorityQueue: """PriorityQueue which takes Downloader activity into account: domains (slots) with the least amount of active downloads are dequeued first. 
""" @classmethod def from_crawler( cls, crawler: Crawler, downstream_queue_cls: type[QueueProtocol], key: str, startprios: dict[str, Iterable[int]] | None = None, *, start_queue_cls: type[QueueProtocol] | None = None, ) -> Self: return cls( crawler, downstream_queue_cls, key, startprios, start_queue_cls=start_queue_cls, ) def __init__( self, crawler: Crawler, downstream_queue_cls: type[QueueProtocol], key: str, slot_startprios: dict[str, Iterable[int]] | None = None, *, start_queue_cls: type[QueueProtocol] | None = None, ): if crawler.settings.getint("CONCURRENT_REQUESTS_PER_IP") != 0: raise ValueError( f'"{self.__class__}" does not support CONCURRENT_REQUESTS_PER_IP' ) if slot_startprios and not isinstance(slot_startprios, dict): raise ValueError( "DownloaderAwarePriorityQueue accepts " "``slot_startprios`` as a dict; " f"{slot_startprios.__class__!r} instance " "is passed. Most likely, it means the state is " "created by an incompatible priority queue. " "Only a crawl started with the same priority " "queue class can be resumed." 
) self._downloader_interface: DownloaderInterface = DownloaderInterface(crawler) self.downstream_queue_cls: type[QueueProtocol] = downstream_queue_cls self._start_queue_cls: type[QueueProtocol] | None = start_queue_cls self.key: str = key self.crawler: Crawler = crawler self.pqueues: dict[str, ScrapyPriorityQueue] = {} # slot -> priority queue for slot, startprios in (slot_startprios or {}).items(): self.pqueues[slot] = self.pqfactory(slot, startprios) def pqfactory( self, slot: str, startprios: Iterable[int] = () ) -> ScrapyPriorityQueue: return ScrapyPriorityQueue( self.crawler, self.downstream_queue_cls, self.key + "/" + _path_safe(slot), startprios, start_queue_cls=self._start_queue_cls, ) def pop(self) -> Request | None: stats = self._downloader_interface.stats(self.pqueues) if not stats: return None slot = min(stats)[1] queue = self.pqueues[slot] request = queue.pop() if len(queue) == 0: del self.pqueues[slot] return request def push(self, request: Request) -> None: slot = self._downloader_interface.get_slot_key(request) if slot not in self.pqueues: self.pqueues[slot] = self.pqfactory(slot) queue = self.pqueues[slot] queue.push(request) def peek(self) -> Request | None: """Returns the next object to be returned by :meth:`pop`, but without removing it from the queue. Raises :exc:`NotImplementedError` if the underlying queue class does not implement a ``peek`` method, which is optional for queues. """ stats = self._downloader_interface.stats(self.pqueues) if not stats: return None slot = min(stats)[1] queue = self.pqueues[slot] return queue.peek() def close(self) -> dict[str, list[int]]: active = {slot: queue.close() for slot, queue in self.pqueues.items()} self.pqueues.clear() return active def __len__(self) -> int: return sum(len(x) for x in self.pqueues.values()) if self.pqueues else 0 def __contains__(self, slot: str) -> bool: return slot in self.pqueues
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
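`ScrapyPriorityQueue` above keeps one downstream FIFO queue per integer priority and tracks `curprio`, the lowest (i.e. highest-priority) non-empty bucket. A minimal in-memory sketch of that bucket-per-priority design, which is not Scrapy's implementation, could be:

```python
from collections import deque

# Sketch of the bucket-per-priority design: one FIFO deque per integer
# priority; lower numbers are dequeued first, FIFO within a bucket.
class BucketPriorityQueue:
    def __init__(self):
        self.queues = {}   # priority -> deque
        self.curprio = None

    def push(self, obj, priority):
        self.queues.setdefault(priority, deque()).append(obj)
        if self.curprio is None or priority < self.curprio:
            self.curprio = priority

    def pop(self):
        if self.curprio is None:
            return None
        q = self.queues[self.curprio]
        obj = q.popleft()
        if not q:
            # Drop the empty bucket and recompute the current priority,
            # mirroring _update_curprio in the real class.
            del self.queues[self.curprio]
            self.curprio = min(self.queues) if self.queues else None
        return obj

    def __len__(self):
        return sum(len(q) for q in self.queues.values())

pq = BucketPriorityQueue()
pq.push("low", 10)
pq.push("high", 0)
order = [pq.pop(), pq.pop()]
```

The real class adds crawler-backed queue construction, optional separate start-request queues, and disk persistence via `startprios`.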
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/squeues.py
scrapy/squeues.py
""" Scheduler queues """ from __future__ import annotations import marshal import pickle from pathlib import Path from typing import TYPE_CHECKING, Any from queuelib import queue from scrapy.utils.request import request_from_dict if TYPE_CHECKING: from collections.abc import Callable from os import PathLike # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy import Request from scrapy.crawler import Crawler def _with_mkdir(queue_class: type[queue.BaseQueue]) -> type[queue.BaseQueue]: class DirectoriesCreated(queue_class): # type: ignore[valid-type,misc] def __init__(self, path: str | PathLike, *args: Any, **kwargs: Any): dirname = Path(path).parent if not dirname.exists(): dirname.mkdir(parents=True, exist_ok=True) super().__init__(path, *args, **kwargs) return DirectoriesCreated def _serializable_queue( queue_class: type[queue.BaseQueue], serialize: Callable[[Any], bytes], deserialize: Callable[[bytes], Any], ) -> type[queue.BaseQueue]: class SerializableQueue(queue_class): # type: ignore[valid-type,misc] def push(self, obj: Any) -> None: s = serialize(obj) super().push(s) def pop(self) -> Any | None: s = super().pop() if s: return deserialize(s) return None def peek(self) -> Any | None: """Returns the next object to be returned by :meth:`pop`, but without removing it from the queue. Raises :exc:`NotImplementedError` if the underlying queue class does not implement a ``peek`` method, which is optional for queues. 
""" try: s = super().peek() except AttributeError as ex: raise NotImplementedError( "The underlying queue class does not implement 'peek'" ) from ex if s: return deserialize(s) return None return SerializableQueue def _scrapy_serialization_queue( queue_class: type[queue.BaseQueue], ) -> type[queue.BaseQueue]: class ScrapyRequestQueue(queue_class): # type: ignore[valid-type,misc] def __init__(self, crawler: Crawler, key: str): self.spider = crawler.spider super().__init__(key) @classmethod def from_crawler( cls, crawler: Crawler, key: str, *args: Any, **kwargs: Any ) -> Self: return cls(crawler, key) def push(self, request: Request) -> None: request_dict = request.to_dict(spider=self.spider) super().push(request_dict) def pop(self) -> Request | None: request = super().pop() if not request: return None return request_from_dict(request, spider=self.spider) def peek(self) -> Request | None: """Returns the next object to be returned by :meth:`pop`, but without removing it from the queue. Raises :exc:`NotImplementedError` if the underlying queue class does not implement a ``peek`` method, which is optional for queues. """ request = super().peek() if not request: return None return request_from_dict(request, spider=self.spider) return ScrapyRequestQueue def _scrapy_non_serialization_queue( queue_class: type[queue.BaseQueue], ) -> type[queue.BaseQueue]: class ScrapyRequestQueue(queue_class): # type: ignore[valid-type,misc] @classmethod def from_crawler(cls, crawler: Crawler, *args: Any, **kwargs: Any) -> Self: return cls() def peek(self) -> Any | None: """Returns the next object to be returned by :meth:`pop`, but without removing it from the queue. Raises :exc:`NotImplementedError` if the underlying queue class does not implement a ``peek`` method, which is optional for queues. 
""" try: s = super().peek() except AttributeError as ex: raise NotImplementedError( "The underlying queue class does not implement 'peek'" ) from ex return s return ScrapyRequestQueue def _pickle_serialize(obj: Any) -> bytes: try: return pickle.dumps(obj, protocol=4) # Both pickle.PicklingError and AttributeError can be raised by pickle.dump(s) # TypeError is raised from parsel.Selector except (pickle.PicklingError, AttributeError, TypeError) as e: raise ValueError(str(e)) from e # queue.*Queue aren't subclasses of queue.BaseQueue _PickleFifoSerializationDiskQueue = _serializable_queue( _with_mkdir(queue.FifoDiskQueue), # type: ignore[arg-type] _pickle_serialize, pickle.loads, ) _PickleLifoSerializationDiskQueue = _serializable_queue( _with_mkdir(queue.LifoDiskQueue), # type: ignore[arg-type] _pickle_serialize, pickle.loads, ) _MarshalFifoSerializationDiskQueue = _serializable_queue( _with_mkdir(queue.FifoDiskQueue), # type: ignore[arg-type] marshal.dumps, marshal.loads, ) _MarshalLifoSerializationDiskQueue = _serializable_queue( _with_mkdir(queue.LifoDiskQueue), # type: ignore[arg-type] marshal.dumps, marshal.loads, ) # public queue classes PickleFifoDiskQueue = _scrapy_serialization_queue(_PickleFifoSerializationDiskQueue) PickleLifoDiskQueue = _scrapy_serialization_queue(_PickleLifoSerializationDiskQueue) MarshalFifoDiskQueue = _scrapy_serialization_queue(_MarshalFifoSerializationDiskQueue) MarshalLifoDiskQueue = _scrapy_serialization_queue(_MarshalLifoSerializationDiskQueue) FifoMemoryQueue = _scrapy_non_serialization_queue(queue.FifoMemoryQueue) # type: ignore[arg-type] LifoMemoryQueue = _scrapy_non_serialization_queue(queue.LifoMemoryQueue) # type: ignore[arg-type]
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
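The `_serializable_queue` factory above builds a new queue class that serializes on `push` and deserializes on `pop`, which is how the pickle and marshal disk queues are produced. A self-contained sketch of that class-factory pattern, using a trivial in-memory FIFO in place of queuelib's disk queues, might be:

```python
import pickle
from collections import deque

# Sketch of the class-factory pattern in _serializable_queue: wrap a
# queue class so objects are serialized on push, deserialized on pop.
def serializable_queue(queue_class, serialize, deserialize):
    class SerializableQueue(queue_class):
        def push(self, obj):
            super().push(serialize(obj))

        def pop(self):
            s = super().pop()
            return deserialize(s) if s is not None else None

    return SerializableQueue

# A trivial in-memory FIFO standing in for queuelib's FifoDiskQueue.
class MemoryQueue:
    def __init__(self):
        self._d = deque()

    def push(self, s):
        self._d.append(s)

    def pop(self):
        return self._d.popleft() if self._d else None

    def __len__(self):
        return len(self._d)

PickleQueue = serializable_queue(MemoryQueue, pickle.dumps, pickle.loads)
q = PickleQueue()
q.push({"url": "https://example.com", "priority": 0})
item = q.pop()
```

The real code layers two more wrappers on top: `_with_mkdir` to create parent directories for disk queues, and `_scrapy_serialization_queue` to convert `Request` objects to and from dicts.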
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/exporters.py
scrapy/exporters.py
""" Item Exporters are used to export/serialize items into different formats. """ from __future__ import annotations import csv import marshal import pickle import pprint from abc import ABC, abstractmethod from collections.abc import Callable, Iterable, Mapping from io import BytesIO, TextIOWrapper from typing import TYPE_CHECKING, Any from xml.sax.saxutils import XMLGenerator from xml.sax.xmlreader import AttributesImpl from itemadapter import ItemAdapter, is_item from scrapy.item import Field, Item from scrapy.utils.python import is_listlike, to_bytes, to_unicode from scrapy.utils.serialize import ScrapyJSONEncoder if TYPE_CHECKING: from json import JSONEncoder __all__ = [ "BaseItemExporter", "CsvItemExporter", "JsonItemExporter", "JsonLinesItemExporter", "MarshalItemExporter", "PickleItemExporter", "PprintItemExporter", "XmlItemExporter", ] class BaseItemExporter(ABC): def __init__(self, *, dont_fail: bool = False, **kwargs: Any): self._kwargs: dict[str, Any] = kwargs self._configure(kwargs, dont_fail=dont_fail) def _configure(self, options: dict[str, Any], dont_fail: bool = False) -> None: """Configure the exporter by popping options from the ``options`` dict. 
If dont_fail is set, it won't raise an exception on unexpected options (useful for using with keyword arguments in subclasses ``__init__`` methods) """ self.encoding: str | None = options.pop("encoding", None) self.fields_to_export: Mapping[str, str] | Iterable[str] | None = options.pop( "fields_to_export", None ) self.export_empty_fields: bool = options.pop("export_empty_fields", False) self.indent: int | None = options.pop("indent", None) if not dont_fail and options: raise TypeError(f"Unexpected options: {', '.join(options.keys())}") @abstractmethod def export_item(self, item: Any) -> None: raise NotImplementedError def serialize_field( self, field: Mapping[str, Any] | Field, name: str, value: Any ) -> Any: serializer: Callable[[Any], Any] = field.get("serializer", lambda x: x) return serializer(value) def start_exporting(self) -> None: # noqa: B027 pass def finish_exporting(self) -> None: # noqa: B027 pass def _get_serialized_fields( self, item: Any, default_value: Any = None, include_empty: bool | None = None ) -> Iterable[tuple[str, Any]]: """Return the fields to export as an iterable of tuples (name, serialized_value) """ item = ItemAdapter(item) if include_empty is None: include_empty = self.export_empty_fields if self.fields_to_export is None: field_iter = item.field_names() if include_empty else item.keys() elif isinstance(self.fields_to_export, Mapping): if include_empty: field_iter = self.fields_to_export.items() else: field_iter = ( (x, y) for x, y in self.fields_to_export.items() if x in item ) elif include_empty: field_iter = self.fields_to_export else: field_iter = (x for x in self.fields_to_export if x in item) for field_name in field_iter: if isinstance(field_name, str): item_field, output_field = field_name, field_name else: item_field, output_field = field_name if item_field in item: field_meta = item.get_field_meta(item_field) value = self.serialize_field(field_meta, output_field, item[item_field]) else: value = default_value yield 
output_field, value class JsonLinesItemExporter(BaseItemExporter): def __init__(self, file: BytesIO, **kwargs: Any): super().__init__(dont_fail=True, **kwargs) self.file: BytesIO = file self._kwargs.setdefault("ensure_ascii", not self.encoding) self.encoder: JSONEncoder = ScrapyJSONEncoder(**self._kwargs) def export_item(self, item: Any) -> None: itemdict = dict(self._get_serialized_fields(item)) data = self.encoder.encode(itemdict) + "\n" self.file.write(to_bytes(data, self.encoding)) class JsonItemExporter(BaseItemExporter): def __init__(self, file: BytesIO, **kwargs: Any): super().__init__(dont_fail=True, **kwargs) self.file: BytesIO = file # there is a small difference between the behaviour or JsonItemExporter.indent # and ScrapyJSONEncoder.indent. ScrapyJSONEncoder.indent=None is needed to prevent # the addition of newlines everywhere json_indent = ( self.indent if self.indent is not None and self.indent > 0 else None ) self._kwargs.setdefault("indent", json_indent) self._kwargs.setdefault("ensure_ascii", not self.encoding) self.encoder = ScrapyJSONEncoder(**self._kwargs) self.first_item = True def _beautify_newline(self) -> None: if self.indent is not None: self.file.write(b"\n") def _add_comma_after_first(self) -> None: if self.first_item: self.first_item = False else: self.file.write(b",") self._beautify_newline() def start_exporting(self) -> None: self.file.write(b"[") self._beautify_newline() def finish_exporting(self) -> None: self._beautify_newline() self.file.write(b"]") def export_item(self, item: Any) -> None: itemdict = dict(self._get_serialized_fields(item)) data = to_bytes(self.encoder.encode(itemdict), self.encoding) self._add_comma_after_first() self.file.write(data) class XmlItemExporter(BaseItemExporter): def __init__(self, file: BytesIO, **kwargs: Any): self.item_element = kwargs.pop("item_element", "item") self.root_element = kwargs.pop("root_element", "items") super().__init__(**kwargs) if not self.encoding: self.encoding = "utf-8" self.xg 
= XMLGenerator(file, encoding=self.encoding) def _beautify_newline(self, new_item: bool = False) -> None: if self.indent is not None and (self.indent > 0 or new_item): self.xg.characters("\n") def _beautify_indent(self, depth: int = 1) -> None: if self.indent: self.xg.characters(" " * self.indent * depth) def start_exporting(self) -> None: self.xg.startDocument() self.xg.startElement(self.root_element, AttributesImpl({})) self._beautify_newline(new_item=True) def export_item(self, item: Any) -> None: self._beautify_indent(depth=1) self.xg.startElement(self.item_element, AttributesImpl({})) self._beautify_newline() for name, value in self._get_serialized_fields(item, default_value=""): self._export_xml_field(name, value, depth=2) self._beautify_indent(depth=1) self.xg.endElement(self.item_element) self._beautify_newline(new_item=True) def finish_exporting(self) -> None: self.xg.endElement(self.root_element) self.xg.endDocument() def _export_xml_field(self, name: str, serialized_value: Any, depth: int) -> None: self._beautify_indent(depth=depth) self.xg.startElement(name, AttributesImpl({})) if hasattr(serialized_value, "items"): self._beautify_newline() for subname, value in serialized_value.items(): self._export_xml_field(subname, value, depth=depth + 1) self._beautify_indent(depth=depth) elif is_listlike(serialized_value): self._beautify_newline() for value in serialized_value: self._export_xml_field("value", value, depth=depth + 1) self._beautify_indent(depth=depth) elif isinstance(serialized_value, str): self.xg.characters(serialized_value) else: self.xg.characters(str(serialized_value)) self.xg.endElement(name) self._beautify_newline() class CsvItemExporter(BaseItemExporter): def __init__( self, file: BytesIO, include_headers_line: bool = True, join_multivalued: str = ",", errors: str | None = None, **kwargs: Any, ): super().__init__(dont_fail=True, **kwargs) if not self.encoding: self.encoding = "utf-8" self.include_headers_line = include_headers_line 
self.stream = TextIOWrapper( file, line_buffering=False, write_through=True, encoding=self.encoding, newline="", # Windows needs this https://github.com/scrapy/scrapy/issues/3034 errors=errors, ) self.csv_writer = csv.writer(self.stream, **self._kwargs) self._headers_not_written = True self._join_multivalued = join_multivalued def serialize_field( self, field: Mapping[str, Any] | Field, name: str, value: Any ) -> Any: serializer: Callable[[Any], Any] = field.get("serializer", self._join_if_needed) return serializer(value) def _join_if_needed(self, value: Any) -> Any: if isinstance(value, (list, tuple)): try: return self._join_multivalued.join(value) except TypeError: # list in value may not contain strings pass return value def export_item(self, item: Any) -> None: if self._headers_not_written: self._headers_not_written = False self._write_headers_and_set_fields_to_export(item) fields = self._get_serialized_fields(item, default_value="", include_empty=True) values = list(self._build_row(x for _, x in fields)) self.csv_writer.writerow(values) def finish_exporting(self) -> None: self.stream.detach() # Avoid closing the wrapped file. 
def _build_row(self, values: Iterable[Any]) -> Iterable[Any]: for s in values: try: yield to_unicode(s, self.encoding) except TypeError: yield s def _write_headers_and_set_fields_to_export(self, item: Any) -> None: if self.include_headers_line: if not self.fields_to_export: # use declared field names, or keys if the item is a dict self.fields_to_export = ItemAdapter(item).field_names() fields: Iterable[str] if isinstance(self.fields_to_export, Mapping): fields = self.fields_to_export.values() else: assert self.fields_to_export fields = self.fields_to_export row = list(self._build_row(fields)) self.csv_writer.writerow(row) class PickleItemExporter(BaseItemExporter): def __init__(self, file: BytesIO, protocol: int = 4, **kwargs: Any): super().__init__(**kwargs) self.file: BytesIO = file self.protocol: int = protocol def export_item(self, item: Any) -> None: d = dict(self._get_serialized_fields(item)) pickle.dump(d, self.file, self.protocol) class MarshalItemExporter(BaseItemExporter): """Exports items in a Python-specific binary format (see :mod:`marshal`). :param file: The file-like object to use for exporting the data. Its ``write`` method should accept :class:`bytes` (a disk file opened in binary mode, a :class:`~io.BytesIO` object, etc) """ def __init__(self, file: BytesIO, **kwargs: Any): super().__init__(**kwargs) self.file: BytesIO = file def export_item(self, item: Any) -> None: marshal.dump(dict(self._get_serialized_fields(item)), self.file) class PprintItemExporter(BaseItemExporter): def __init__(self, file: BytesIO, **kwargs: Any): super().__init__(**kwargs) self.file: BytesIO = file def export_item(self, item: Any) -> None: itemdict = dict(self._get_serialized_fields(item)) self.file.write(to_bytes(pprint.pformat(itemdict) + "\n")) class PythonItemExporter(BaseItemExporter): """This is a base class for item exporters that extends :class:`BaseItemExporter` with support for nested items. 
It serializes items to built-in Python types, so that any serialization library (e.g. :mod:`json` or msgpack_) can be used on top of it. .. _msgpack: https://pypi.org/project/msgpack/ """ def _configure(self, options: dict[str, Any], dont_fail: bool = False) -> None: super()._configure(options, dont_fail) if not self.encoding: self.encoding = "utf-8" def serialize_field( self, field: Mapping[str, Any] | Field, name: str, value: Any ) -> Any: serializer: Callable[[Any], Any] = field.get( "serializer", self._serialize_value ) return serializer(value) def _serialize_value(self, value: Any) -> Any: if isinstance(value, Item): return self.export_item(value) if isinstance(value, (str, bytes)): return to_unicode(value, encoding=self.encoding) if is_item(value): return dict(self._serialize_item(value)) if is_listlike(value): return [self._serialize_value(v) for v in value] return value def _serialize_item(self, item: Any) -> Iterable[tuple[str | bytes, Any]]: for key, value in ItemAdapter(item).items(): yield key, self._serialize_value(value) def export_item(self, item: Any) -> dict[str | bytes, Any]: # type: ignore[override] result: dict[str | bytes, Any] = dict(self._get_serialized_fields(item)) return result
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
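`JsonLinesItemExporter` above writes one JSON object per line, encoded to bytes and appended to a binary stream. A minimal sketch of that pattern using only the standard library (not Scrapy's `ScrapyJSONEncoder` or field-serialization machinery) could be:

```python
import json
from io import BytesIO

# Sketch of the JSON-lines export pattern: one JSON object per line,
# encoded to bytes for a binary file-like object.
class JsonLinesExporter:
    def __init__(self, file, encoding="utf-8"):
        self.file = file
        self.encoding = encoding

    def export_item(self, item):
        self.file.write(json.dumps(item).encode(self.encoding) + b"\n")

buf = BytesIO()
exporter = JsonLinesExporter(buf)
exporter.export_item({"name": "first"})
exporter.export_item({"name": "second"})
lines = buf.getvalue().splitlines()
```

The JSON-lines format needs no `start_exporting`/`finish_exporting` bracketing, which is why the real `JsonLinesItemExporter` leaves those as no-ops while `JsonItemExporter` must emit `[`, commas, and `]`.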
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/statscollectors.py
scrapy/statscollectors.py
""" Scrapy extension for collecting scraping stats """ from __future__ import annotations import logging import pprint from typing import TYPE_CHECKING, Any from scrapy.utils.decorators import _warn_spider_arg if TYPE_CHECKING: from scrapy import Spider from scrapy.crawler import Crawler logger = logging.getLogger(__name__) StatsT = dict[str, Any] class StatsCollector: def __init__(self, crawler: Crawler): self._dump: bool = crawler.settings.getbool("STATS_DUMP") self._stats: StatsT = {} self._crawler: Crawler = crawler def __getattribute__(self, name): cached_name = f"_cached_{name}" try: return super().__getattribute__(cached_name) except AttributeError: pass original_attr = super().__getattribute__(name) if name in { "get_value", "get_stats", "set_value", "set_stats", "inc_value", "max_value", "min_value", "clear_stats", "open_spider", "close_spider", } and callable(original_attr): wrapped = _warn_spider_arg(original_attr) setattr(self, cached_name, wrapped) return wrapped return original_attr def get_value( self, key: str, default: Any = None, spider: Spider | None = None ) -> Any: return self._stats.get(key, default) def get_stats(self, spider: Spider | None = None) -> StatsT: return self._stats def set_value(self, key: str, value: Any, spider: Spider | None = None) -> None: self._stats[key] = value def set_stats(self, stats: StatsT, spider: Spider | None = None) -> None: self._stats = stats def inc_value( self, key: str, count: int = 1, start: int = 0, spider: Spider | None = None ) -> None: d = self._stats d[key] = d.setdefault(key, start) + count def max_value(self, key: str, value: Any, spider: Spider | None = None) -> None: self._stats[key] = max(self._stats.setdefault(key, value), value) def min_value(self, key: str, value: Any, spider: Spider | None = None) -> None: self._stats[key] = min(self._stats.setdefault(key, value), value) def clear_stats(self, spider: Spider | None = None) -> None: self._stats.clear() def open_spider(self, spider: Spider | None 
= None) -> None: pass def close_spider( self, spider: Spider | None = None, reason: str | None = None ) -> None: if self._dump: logger.info( "Dumping Scrapy stats:\n" + pprint.pformat(self._stats), extra={"spider": self._crawler.spider}, ) self._persist_stats(self._stats) def _persist_stats(self, stats: StatsT) -> None: pass class MemoryStatsCollector(StatsCollector): def __init__(self, crawler: Crawler): super().__init__(crawler) self.spider_stats: dict[str, StatsT] = {} def _persist_stats(self, stats: StatsT) -> None: if self._crawler.spider: self.spider_stats[self._crawler.spider.name] = stats class DummyStatsCollector(StatsCollector): def get_value( self, key: str, default: Any = None, spider: Spider | None = None ) -> Any: return default def set_value(self, key: str, value: Any, spider: Spider | None = None) -> None: pass def set_stats(self, stats: StatsT, spider: Spider | None = None) -> None: pass def inc_value( self, key: str, count: int = 1, start: int = 0, spider: Spider | None = None ) -> None: pass def max_value(self, key: str, value: Any, spider: Spider | None = None) -> None: pass def min_value(self, key: str, value: Any, spider: Spider | None = None) -> None: pass
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
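The `StatsCollector` above is essentially a dict with counter semantics: `inc_value` starts from a default, and `max_value`/`min_value` seed the key with the first value seen. A standalone sketch of those semantics, without the crawler wiring or deprecation shims, might be:

```python
# Sketch of StatsCollector's counter semantics over a plain dict.
class SimpleStats:
    def __init__(self):
        self._stats = {}

    def inc_value(self, key, count=1, start=0):
        # Seed with `start` on first use, then add `count`.
        self._stats[key] = self._stats.setdefault(key, start) + count

    def max_value(self, key, value):
        self._stats[key] = max(self._stats.setdefault(key, value), value)

    def min_value(self, key, value):
        self._stats[key] = min(self._stats.setdefault(key, value), value)

    def get_value(self, key, default=None):
        return self._stats.get(key, default)

stats = SimpleStats()
stats.inc_value("item_scraped_count")
stats.inc_value("item_scraped_count")
stats.max_value("max_depth", 3)
stats.max_value("max_depth", 1)
```

`DummyStatsCollector` in the real module shows the complementary design choice: the same interface with every mutator a no-op, so stats collection can be disabled without touching callers.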
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/signals.py
scrapy/signals.py
""" Scrapy signals These signals are documented in docs/topics/signals.rst. Please don't add new signals here without documenting them there. """ engine_started = object() engine_stopped = object() scheduler_empty = object() spider_opened = object() spider_idle = object() spider_closed = object() spider_error = object() request_scheduled = object() request_dropped = object() request_reached_downloader = object() request_left_downloader = object() response_received = object() response_downloaded = object() headers_received = object() bytes_received = object() item_scraped = object() item_dropped = object() item_error = object() feed_slot_closed = object() feed_exporter_closed = object()
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
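Each signal in scrapy/signals.py is a bare `object()` used purely as an identity token: handlers are keyed on the object itself, so signals can never collide even though they carry no data. A minimal sketch of that sentinel-dispatch pattern (the `connect`/`send` helpers here are hypothetical simplifications, not Scrapy's pydispatch-based `SignalManager`):

```python
# Sentinel objects: identity is the only thing that matters.
spider_opened = object()
spider_closed = object()

_handlers: dict[object, list] = {}


def connect(receiver, signal):
    # Register a receiver under the signal's identity.
    _handlers.setdefault(signal, []).append(receiver)


def send(signal, **kwargs):
    # Call every receiver connected to this exact signal object.
    return [receiver(**kwargs) for receiver in _handlers.get(signal, [])]


received = []
connect(lambda spider: received.append(spider), spider_opened)
send(spider_opened, spider="quotes")
send(spider_closed)  # no handlers connected: returns an empty list
```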
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/exceptions.py
scrapy/exceptions.py
""" Scrapy core exceptions These exceptions are documented in docs/topics/exceptions.rst. Please don't add new exceptions here without documenting them there. """ from __future__ import annotations from typing import Any # Internal class NotConfigured(Exception): """Indicates a missing configuration situation""" class _InvalidOutput(TypeError): """ Indicates an invalid value has been returned by a middleware's processing method. Internal and undocumented, it should not be raised or caught by user code. """ # HTTP and crawling class IgnoreRequest(Exception): """Indicates a decision was made not to process a request""" class DontCloseSpider(Exception): """Request the spider not to be closed yet""" class CloseSpider(Exception): """Raise this from callbacks to request the spider to be closed""" def __init__(self, reason: str = "cancelled"): super().__init__() self.reason = reason class StopDownload(Exception): """ Stop the download of the body for a given response. The 'fail' boolean parameter indicates whether or not the resulting partial response should be handled by the request errback. Note that 'fail' is a keyword-only argument. """ def __init__(self, *, fail: bool = True): super().__init__() self.fail = fail # Items class DropItem(Exception): """Drop item from the item pipeline""" def __init__(self, message: str, log_level: str | None = None): super().__init__(message) self.log_level = log_level class NotSupported(Exception): """Indicates a feature or method is not supported""" # Commands class UsageError(Exception): """To indicate a command-line usage error""" def __init__(self, *a: Any, **kw: Any): self.print_help = kw.pop("print_help", True) super().__init__(*a, **kw) class ScrapyDeprecationWarning(Warning): """Warning category for deprecated features, since the default DeprecationWarning is silenced on Python 2.7+ """ class ContractFail(AssertionError): """Error raised in case of a failing contract"""
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
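`CloseSpider` and `DropItem` both carry extra state on the exception instance (`reason`, `log_level`) so the code that catches them can act on it. A self-contained sketch of the `DropItem` flow in an item pipeline (`DropItem` is re-declared here so the example runs without Scrapy installed; `process_item` is a hypothetical pipeline method):

```python
class DropItem(Exception):
    # Mirror of scrapy.exceptions.DropItem, re-declared for a
    # self-contained example.
    def __init__(self, message, log_level=None):
        super().__init__(message)
        self.log_level = log_level


def process_item(item):
    # Hypothetical pipeline step: reject items without a price,
    # asking for the drop to be logged at INFO rather than the default.
    if not item.get("price"):
        raise DropItem(f"Missing price in {item}", log_level="INFO")
    return item


try:
    process_item({"name": "book"})
except DropItem as exc:
    dropped = (str(exc), exc.log_level)
```

Attaching `log_level` to the exception is what lets `LogFormatter.dropped()` (further down in this dump) pick a per-item severity instead of a single global one.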
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/signalmanager.py
scrapy/signalmanager.py
from __future__ import annotations

import warnings
from typing import Any

from pydispatch import dispatcher
from twisted.internet.defer import Deferred

from scrapy.exceptions import ScrapyDeprecationWarning
from scrapy.utils import signal as _signal
from scrapy.utils.defer import maybe_deferred_to_future


class SignalManager:
    def __init__(self, sender: Any = dispatcher.Anonymous):
        self.sender: Any = sender

    def connect(self, receiver: Any, signal: Any, **kwargs: Any) -> None:
        """
        Connect a receiver function to a signal.

        The signal can be any object, although Scrapy comes with some
        predefined signals that are documented in the :ref:`topics-signals`
        section.

        :param receiver: the function to be connected
        :type receiver: collections.abc.Callable

        :param signal: the signal to connect to
        :type signal: object
        """
        kwargs.setdefault("sender", self.sender)
        dispatcher.connect(receiver, signal, **kwargs)

    def disconnect(self, receiver: Any, signal: Any, **kwargs: Any) -> None:
        """
        Disconnect a receiver function from a signal. This has the opposite
        effect of the :meth:`connect` method, and the arguments are the same.
        """
        kwargs.setdefault("sender", self.sender)
        dispatcher.disconnect(receiver, signal, **kwargs)

    def send_catch_log(self, signal: Any, **kwargs: Any) -> list[tuple[Any, Any]]:
        """
        Send a signal, catch exceptions and log them.

        The keyword arguments are passed to the signal handlers (connected
        through the :meth:`connect` method).
        """
        kwargs.setdefault("sender", self.sender)
        return _signal.send_catch_log(signal, **kwargs)

    def send_catch_log_deferred(
        self, signal: Any, **kwargs: Any
    ) -> Deferred[list[tuple[Any, Any]]]:  # pragma: no cover
        """
        Like :meth:`send_catch_log` but supports :ref:`asynchronous signal
        handlers <signal-deferred>`.

        Returns a Deferred that gets fired once all signal handlers have
        finished.

        Send a signal, catch exceptions and log them.

        The keyword arguments are passed to the signal handlers (connected
        through the :meth:`connect` method).
        """
        kwargs.setdefault("sender", self.sender)
        warnings.warn(
            "send_catch_log_deferred() is deprecated, use send_catch_log_async() instead",
            ScrapyDeprecationWarning,
            stacklevel=2,
        )
        return _signal._send_catch_log_deferred(signal, **kwargs)

    async def send_catch_log_async(
        self, signal: Any, **kwargs: Any
    ) -> list[tuple[Any, Any]]:
        """
        Like :meth:`send_catch_log` but supports :ref:`asynchronous signal
        handlers <signal-deferred>`.

        Returns a coroutine that completes once all signal handlers have
        finished.

        Send a signal, catch exceptions and log them.

        The keyword arguments are passed to the signal handlers (connected
        through the :meth:`connect` method).

        .. versionadded:: VERSION
        """
        # note that this returns exceptions instead of Failures in the second tuple member
        kwargs.setdefault("sender", self.sender)
        return await _signal.send_catch_log_async(signal, **kwargs)

    def disconnect_all(self, signal: Any, **kwargs: Any) -> None:
        """
        Disconnect all receivers from the given signal.

        :param signal: the signal to disconnect from
        :type signal: object
        """
        kwargs.setdefault("sender", self.sender)
        _signal.disconnect_all(signal, **kwargs)

    async def wait_for(self, signal):
        """Await the next *signal*.

        See :ref:`start-requests-lazy` for an example.
        """
        d = Deferred()

        def handle():
            self.disconnect(handle, signal)
            d.callback(None)

        self.connect(handle, signal)
        await maybe_deferred_to_future(d)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/__main__.py
scrapy/__main__.py
from scrapy.cmdline import execute

if __name__ == "__main__":
    execute()
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/dupefilters.py
scrapy/dupefilters.py
from __future__ import annotations

import logging
from pathlib import Path
from typing import TYPE_CHECKING
from warnings import warn

from scrapy.exceptions import ScrapyDeprecationWarning
from scrapy.utils.job import job_dir
from scrapy.utils.request import (
    RequestFingerprinter,
    RequestFingerprinterProtocol,
    referer_str,
)

if TYPE_CHECKING:
    from twisted.internet.defer import Deferred

    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler
    from scrapy.http.request import Request
    from scrapy.spiders import Spider


class BaseDupeFilter:
    """Dummy duplicate request filtering class (:setting:`DUPEFILTER_CLASS`)
    that does not filter out any request."""

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        return cls()

    def request_seen(self, request: Request) -> bool:
        return False

    def open(self) -> Deferred[None] | None:
        pass

    def close(self, reason: str) -> Deferred[None] | None:
        pass

    def log(self, request: Request, spider: Spider) -> None:
        """Log that a request has been filtered"""
        warn(
            "Calling BaseDupeFilter.log() is deprecated.",
            ScrapyDeprecationWarning,
            stacklevel=2,
        )


class RFPDupeFilter(BaseDupeFilter):
    """Duplicate request filtering class (:setting:`DUPEFILTER_CLASS`) that
    filters out requests with the canonical
    (:func:`w3lib.url.canonicalize_url`) :attr:`~scrapy.http.Request.url`,
    :attr:`~scrapy.http.Request.method` and :attr:`~scrapy.http.Request.body`.
    """

    def __init__(
        self,
        path: str | None = None,
        debug: bool = False,
        *,
        fingerprinter: RequestFingerprinterProtocol | None = None,
    ) -> None:
        self.file = None
        self.fingerprinter: RequestFingerprinterProtocol = (
            fingerprinter or RequestFingerprinter()
        )
        self.fingerprints: set[str] = set()
        self.logdupes = True
        self.debug = debug
        self.logger = logging.getLogger(__name__)
        if path:
            # line-by-line writing, see: https://github.com/scrapy/scrapy/issues/6019
            self.file = Path(path, "requests.seen").open(
                "a+", buffering=1, encoding="utf-8"
            )
            self.file.reconfigure(write_through=True)
            self.file.seek(0)
            self.fingerprints.update(x.rstrip() for x in self.file)

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        assert crawler.request_fingerprinter
        debug = crawler.settings.getbool("DUPEFILTER_DEBUG")
        return cls(
            job_dir(crawler.settings),
            debug,
            fingerprinter=crawler.request_fingerprinter,
        )

    def request_seen(self, request: Request) -> bool:
        fp = self.request_fingerprint(request)
        if fp in self.fingerprints:
            return True
        self.fingerprints.add(fp)
        if self.file:
            self.file.write(fp + "\n")
        return False

    def request_fingerprint(self, request: Request) -> str:
        """Returns a string that uniquely identifies the specified request."""
        return self.fingerprinter.fingerprint(request).hex()

    def close(self, reason: str) -> None:
        if self.file:
            self.file.close()

    def log(self, request: Request, spider: Spider) -> None:
        if self.debug:
            msg = "Filtered duplicate request: %(request)s (referer: %(referer)s)"
            args = {"request": request, "referer": referer_str(request)}
            self.logger.debug(msg, args, extra={"spider": spider})
        elif self.logdupes:
            msg = (
                "Filtered duplicate request: %(request)s"
                " - no more duplicates will be shown"
                " (see DUPEFILTER_DEBUG to show all duplicates)"
            )
            self.logger.debug(msg, {"request": request}, extra={"spider": spider})
            self.logdupes = False

        assert spider.crawler.stats
        spider.crawler.stats.inc_value("dupefilter/filtered")
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
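`RFPDupeFilter.request_seen` above is a classic seen-set: hash the request into a fingerprint, report `True` if the fingerprint was already recorded, otherwise record it. A minimal self-contained sketch of the same pattern (the `fingerprint` function is a simplified stand-in, not Scrapy's `RequestFingerprinter`, which also canonicalizes the URL via w3lib):

```python
import hashlib


def fingerprint(method: str, url: str, body: bytes = b"") -> str:
    # Hash the request essentials into a stable hex digest.
    h = hashlib.sha1()
    for part in (method.encode(), url.encode(), body):
        h.update(part)
    return h.hexdigest()


seen: set[str] = set()


def request_seen(method: str, url: str) -> bool:
    # True if an equivalent request was already processed.
    fp = fingerprint(method, url)
    if fp in seen:
        return True
    seen.add(fp)
    return False


first = request_seen("GET", "https://example.com/")  # False: first sighting
dup = request_seen("GET", "https://example.com/")    # True: same fingerprint
```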
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extension.py
scrapy/extension.py
""" The Extension Manager See documentation in docs/topics/extensions.rst """ from __future__ import annotations from typing import TYPE_CHECKING, Any from scrapy.middleware import MiddlewareManager from scrapy.utils.conf import build_component_list if TYPE_CHECKING: from scrapy.settings import Settings class ExtensionManager(MiddlewareManager): component_name = "extension" @classmethod def _get_mwlist_from_settings(cls, settings: Settings) -> list[Any]: return build_component_list(settings.getwithbase("EXTENSIONS"))
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/__init__.py
scrapy/__init__.py
""" Scrapy - a web crawling and web scraping framework written for Python """ import pkgutil import sys import warnings # Declare top-level shortcuts from scrapy.http import FormRequest, Request from scrapy.item import Field, Item from scrapy.selector import Selector from scrapy.spiders import Spider __all__ = [ "Field", "FormRequest", "Item", "Request", "Selector", "Spider", "__version__", "version_info", ] # Scrapy and Twisted versions __version__ = (pkgutil.get_data(__package__, "VERSION") or b"").decode("ascii").strip() version_info = tuple(int(v) if v.isdigit() else v for v in __version__.split(".")) # Ignore noisy twisted deprecation warnings warnings.filterwarnings("ignore", category=DeprecationWarning, module="twisted") del pkgutil del sys del warnings
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
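The `version_info` tuple in scrapy/__init__.py converts each dot-separated component to `int` when it is purely numeric and keeps it as a string otherwise, so numeric parts compare numerically while pre-release suffixes survive. The same expression, isolated into a helper:

```python
def parse_version(version: str) -> tuple:
    # Same logic as scrapy's version_info: int for digit-only parts,
    # raw string for anything else (e.g. "dev1").
    return tuple(int(v) if v.isdigit() else v for v in version.split("."))


stable = parse_version("2.11.2")        # (2, 11, 2)
prerelease = parse_version("2.12.0.dev1")  # (2, 12, 0, "dev1")
```

Keeping the numeric parts as ints means `parse_version("2.10.0") < parse_version("2.9.0")` is correctly `False`, which a plain string comparison would get wrong.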
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/logformatter.py
scrapy/logformatter.py
from __future__ import annotations

import logging
import os
from typing import TYPE_CHECKING, Any, TypedDict

from twisted.python.failure import Failure

# working around https://github.com/sphinx-doc/sphinx/issues/10400
from scrapy import Request, Spider  # noqa: TC001
from scrapy.http import Response  # noqa: TC001
from scrapy.utils.python import global_object_name
from scrapy.utils.request import referer_str

if TYPE_CHECKING:
    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler

SCRAPEDMSG = "Scraped from %(src)s" + os.linesep + "%(item)s"
DROPPEDMSG = "Dropped: %(exception)s" + os.linesep + "%(item)s"
CRAWLEDMSG = "Crawled (%(status)s) %(request)s%(request_flags)s (referer: %(referer)s)%(response_flags)s"
ITEMERRORMSG = "Error processing %(item)s"
SPIDERERRORMSG = "Spider error processing %(request)s (referer: %(referer)s)"
DOWNLOADERRORMSG_SHORT = "Error downloading %(request)s"
DOWNLOADERRORMSG_LONG = "Error downloading %(request)s: %(errmsg)s"


class LogFormatterResult(TypedDict):
    level: int
    msg: str
    args: dict[str, Any] | tuple[Any, ...]


class LogFormatter:
    """Class for generating log messages for different actions.

    All methods must return a dictionary listing the parameters ``level``,
    ``msg`` and ``args`` which are going to be used for constructing the log
    message when calling ``logging.log``.

    Dictionary keys for the method outputs:

    * ``level`` is the log level for that action, you can use those from the
      `python logging library <https://docs.python.org/3/library/logging.html>`_ :
      ``logging.DEBUG``, ``logging.INFO``, ``logging.WARNING``, ``logging.ERROR``
      and ``logging.CRITICAL``.
    * ``msg`` should be a string that can contain different formatting
      placeholders. This string, formatted with the provided ``args``, is going
      to be the long message for that action.
    * ``args`` should be a tuple or dict with the formatting placeholders for
      ``msg``. The final log message is computed as ``msg % args``.

    Users can define their own ``LogFormatter`` class if they want to customize
    how each action is logged or if they want to omit it entirely. In order to
    omit logging an action the method must return ``None``.

    Here is an example on how to create a custom log formatter to lower the
    severity level of the log message when an item is dropped from the
    pipeline::

        class PoliteLogFormatter(logformatter.LogFormatter):
            def dropped(self, item, exception, response, spider):
                return {
                    'level': logging.INFO,  # lowering the level from logging.WARNING
                    'msg': "Dropped: %(exception)s" + os.linesep + "%(item)s",
                    'args': {
                        'exception': exception,
                        'item': item,
                    }
                }
    """

    def crawled(
        self, request: Request, response: Response, spider: Spider
    ) -> LogFormatterResult:
        """Logs a message when the crawler finds a webpage."""
        request_flags = f" {request.flags!s}" if request.flags else ""
        response_flags = f" {response.flags!s}" if response.flags else ""
        return {
            "level": logging.DEBUG,
            "msg": CRAWLEDMSG,
            "args": {
                "status": response.status,
                "request": request,
                "request_flags": request_flags,
                "referer": referer_str(request),
                "response_flags": response_flags,
                # backward compatibility with Scrapy logformatter below 1.4 version
                "flags": response_flags,
            },
        }

    def scraped(
        self, item: Any, response: Response | Failure | None, spider: Spider
    ) -> LogFormatterResult:
        """Logs a message when an item is scraped by a spider."""
        src: Any
        if response is None:
            src = f"{global_object_name(spider.__class__)}.start"
        elif isinstance(response, Failure):
            src = response.getErrorMessage()
        else:
            src = response
        return {
            "level": logging.DEBUG,
            "msg": SCRAPEDMSG,
            "args": {
                "src": src,
                "item": item,
            },
        }

    def dropped(
        self,
        item: Any,
        exception: BaseException,
        response: Response | Failure | None,
        spider: Spider,
    ) -> LogFormatterResult:
        """Logs a message when an item is dropped while it is passing through
        the item pipeline."""
        if (level := getattr(exception, "log_level", None)) is None:
            level = spider.crawler.settings["DEFAULT_DROPITEM_LOG_LEVEL"]
        if isinstance(level, str):
            level = getattr(logging, level)
        return {
            "level": level,
            "msg": DROPPEDMSG,
            "args": {
                "exception": exception,
                "item": item,
            },
        }

    def item_error(
        self,
        item: Any,
        exception: BaseException,
        response: Response | Failure | None,
        spider: Spider,
    ) -> LogFormatterResult:
        """Logs a message when an item causes an error while it is passing
        through the item pipeline.

        .. versionadded:: 2.0
        """
        return {
            "level": logging.ERROR,
            "msg": ITEMERRORMSG,
            "args": {
                "item": item,
            },
        }

    def spider_error(
        self,
        failure: Failure,
        request: Request,
        response: Response | Failure,
        spider: Spider,
    ) -> LogFormatterResult:
        """Logs an error message from a spider.

        .. versionadded:: 2.0
        """
        return {
            "level": logging.ERROR,
            "msg": SPIDERERRORMSG,
            "args": {
                "request": request,
                "referer": referer_str(request),
            },
        }

    def download_error(
        self,
        failure: Failure,
        request: Request,
        spider: Spider,
        errmsg: str | None = None,
    ) -> LogFormatterResult:
        """Logs a download error message from a spider (typically coming from
        the engine).

        .. versionadded:: 2.0
        """
        args: dict[str, Any] = {"request": request}
        if errmsg:
            msg = DOWNLOADERRORMSG_LONG
            args["errmsg"] = errmsg
        else:
            msg = DOWNLOADERRORMSG_SHORT
        return {
            "level": logging.ERROR,
            "msg": msg,
            "args": args,
        }

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        return cls()
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
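Each `LogFormatter` method above returns a `{level, msg, args}` dict, and the final log line is computed as `msg % args`, as the class docstring states. Isolating just that computation with the `DROPPEDMSG` template (the `entry` values here are made-up sample data):

```python
import logging
import os

# Same template as scrapy's logformatter module.
DROPPEDMSG = "Dropped: %(exception)s" + os.linesep + "%(item)s"

# Shape of a LogFormatter method's return value, with sample args.
entry = {
    "level": logging.INFO,
    "msg": DROPPEDMSG,
    "args": {"exception": "missing price", "item": {"name": "book"}},
}

# The log machinery renders the message with %-formatting.
message = entry["msg"] % entry["args"]
```

Because formatting is deferred to `msg % args`, a subclass can swap the template or the level without touching how arguments are gathered.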
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/item.py
scrapy/item.py
""" Scrapy Item See documentation in docs/topics/item.rst """ from __future__ import annotations from abc import ABCMeta from collections.abc import MutableMapping from copy import deepcopy from pprint import pformat from typing import TYPE_CHECKING, Any, NoReturn from scrapy.utils.trackref import object_ref if TYPE_CHECKING: from collections.abc import Iterator, KeysView # typing.Self requires Python 3.11 from typing_extensions import Self class Field(dict[str, Any]): """Container of field metadata""" class ItemMeta(ABCMeta): """Metaclass_ of :class:`Item` that handles field definitions. .. _metaclass: https://realpython.com/python-metaclasses """ def __new__( mcs, class_name: str, bases: tuple[type, ...], attrs: dict[str, Any] ) -> ItemMeta: classcell = attrs.pop("__classcell__", None) new_bases = tuple(base._class for base in bases if hasattr(base, "_class")) _class = super().__new__(mcs, "x_" + class_name, new_bases, attrs) fields = getattr(_class, "fields", {}) new_attrs = {} for n in dir(_class): v = getattr(_class, n) if isinstance(v, Field): fields[n] = v elif n in attrs: new_attrs[n] = attrs[n] new_attrs["fields"] = fields new_attrs["_class"] = _class if classcell is not None: new_attrs["__classcell__"] = classcell return super().__new__(mcs, class_name, bases, new_attrs) class Item(MutableMapping[str, Any], object_ref, metaclass=ItemMeta): """Base class for scraped items. In Scrapy, an object is considered an ``item`` if it's supported by the `itemadapter`_ library. For example, when the output of a spider callback is evaluated, only such objects are passed to :ref:`item pipelines <topics-item-pipeline>`. :class:`Item` is one of the classes supported by `itemadapter`_ by default. Items must declare :class:`Field` attributes, which are processed and stored in the ``fields`` attribute. This restricts the set of allowed field names and prevents typos, raising ``KeyError`` when referring to undefined fields. 
Additionally, fields can be used to define metadata and control the way data is processed internally. Please refer to the :ref:`documentation about fields <topics-items-fields>` for additional information. Unlike instances of :class:`dict`, instances of :class:`Item` may be :ref:`tracked <topics-leaks-trackrefs>` to debug memory leaks. .. _itemadapter: https://github.com/scrapy/itemadapter """ #: A dictionary containing *all declared fields* for this Item, not only #: those populated. The keys are the field names and the values are the #: :class:`Field` objects used in the :ref:`Item declaration #: <topics-items-declaring>`. fields: dict[str, Field] def __init__(self, *args: Any, **kwargs: Any): self._values: dict[str, Any] = {} if args or kwargs: # avoid creating dict for most common case for k, v in dict(*args, **kwargs).items(): self[k] = v def __getitem__(self, key: str) -> Any: return self._values[key] def __setitem__(self, key: str, value: Any) -> None: if key in self.fields: self._values[key] = value else: raise KeyError(f"{self.__class__.__name__} does not support field: {key}") def __delitem__(self, key: str) -> None: del self._values[key] def __getattr__(self, name: str) -> NoReturn: if name in self.fields: raise AttributeError(f"Use item[{name!r}] to get field value") raise AttributeError(name) def __setattr__(self, name: str, value: Any) -> None: if not name.startswith("_"): raise AttributeError(f"Use item[{name!r}] = {value!r} to set field value") super().__setattr__(name, value) def __len__(self) -> int: return len(self._values) def __iter__(self) -> Iterator[str]: return iter(self._values) __hash__ = object_ref.__hash__ def keys(self) -> KeysView[str]: return self._values.keys() def __repr__(self) -> str: return pformat(dict(self)) def copy(self) -> Self: return self.__class__(self) def deepcopy(self) -> Self: """Return a :func:`~copy.deepcopy` of this item.""" return deepcopy(self)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
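`Item.__setitem__` above rejects any key that was not declared as a `Field`, which is what turns field-name typos into an immediate `KeyError`. A hypothetical, heavily simplified `RestrictedItem` below reproduces just that check so it runs without Scrapy installed (the real `Item` builds the `fields` dict through the `ItemMeta` metaclass):

```python
class RestrictedItem:
    # Hypothetical simplification of scrapy.Item's field restriction:
    # only declared field names may be set, anything else is a KeyError.
    fields = {"name", "price"}

    def __init__(self):
        self._values = {}

    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError(f"{type(self).__name__} does not support field: {key}")
        self._values[key] = value

    def __getitem__(self, key):
        return self._values[key]


item = RestrictedItem()
item["name"] = "book"
try:
    item["stock"] = 5  # not a declared field
except KeyError as exc:
    error = str(exc)
```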
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/crawler.py
scrapy/crawler.py
from __future__ import annotations

import asyncio
import contextlib
import logging
import pprint
import signal
import warnings
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any, TypeVar

from twisted.internet.defer import Deferred, DeferredList, inlineCallbacks

from scrapy import Spider
from scrapy.addons import AddonManager
from scrapy.core.engine import ExecutionEngine
from scrapy.exceptions import ScrapyDeprecationWarning
from scrapy.extension import ExtensionManager
from scrapy.settings import Settings, overridden_settings
from scrapy.signalmanager import SignalManager
from scrapy.spiderloader import SpiderLoaderProtocol, get_spider_loader
from scrapy.utils.defer import deferred_from_coro
from scrapy.utils.log import (
    configure_logging,
    get_scrapy_root_handler,
    install_scrapy_root_handler,
    log_reactor_info,
    log_scrapy_info,
)
from scrapy.utils.misc import build_from_crawler, load_object
from scrapy.utils.ossignal import install_shutdown_handlers, signal_names
from scrapy.utils.reactor import (
    _asyncio_reactor_path,
    install_reactor,
    is_asyncio_reactor_installed,
    is_reactor_installed,
    verify_installed_asyncio_event_loop,
    verify_installed_reactor,
)

if TYPE_CHECKING:
    from collections.abc import Awaitable, Generator, Iterable

    from scrapy.logformatter import LogFormatter
    from scrapy.statscollectors import StatsCollector
    from scrapy.utils.request import RequestFingerprinterProtocol

logger = logging.getLogger(__name__)

_T = TypeVar("_T")


class Crawler:
    def __init__(
        self,
        spidercls: type[Spider],
        settings: dict[str, Any] | Settings | None = None,
        init_reactor: bool = False,
    ):
        if isinstance(spidercls, Spider):
            raise ValueError("The spidercls argument must be a class, not an object")

        if isinstance(settings, dict) or settings is None:
            settings = Settings(settings)

        self.spidercls: type[Spider] = spidercls
        self.settings: Settings = settings.copy()

        self.spidercls.update_settings(self.settings)
        self._update_root_log_handler()

        self.addons: AddonManager = AddonManager(self)
        self.signals: SignalManager = SignalManager(self)

        self._init_reactor: bool = init_reactor
        self.crawling: bool = False
        self._started: bool = False

        self.extensions: ExtensionManager | None = None
        self.stats: StatsCollector | None = None
        self.logformatter: LogFormatter | None = None
        self.request_fingerprinter: RequestFingerprinterProtocol | None = None
        self.spider: Spider | None = None
        self.engine: ExecutionEngine | None = None

    def _update_root_log_handler(self) -> None:
        if get_scrapy_root_handler() is not None:
            # scrapy root handler already installed: update it with new settings
            install_scrapy_root_handler(self.settings)

    def _apply_settings(self) -> None:
        if self.settings.frozen:
            return

        self.addons.load_settings(self.settings)
        self.stats = load_object(self.settings["STATS_CLASS"])(self)

        lf_cls: type[LogFormatter] = load_object(self.settings["LOG_FORMATTER"])
        self.logformatter = lf_cls.from_crawler(self)

        self.request_fingerprinter = build_from_crawler(
            load_object(self.settings["REQUEST_FINGERPRINTER_CLASS"]),
            self,
        )

        reactor_class: str = self.settings["TWISTED_REACTOR"]
        event_loop: str = self.settings["ASYNCIO_EVENT_LOOP"]
        if self._init_reactor:
            # this needs to be done after the spider settings are merged,
            # but before something imports twisted.internet.reactor
            if reactor_class:
                install_reactor(reactor_class, event_loop)
            else:
                from twisted.internet import reactor  # noqa: F401
        if reactor_class:
            verify_installed_reactor(reactor_class)
            if is_asyncio_reactor_installed() and event_loop:
                verify_installed_asyncio_event_loop(event_loop)

        if self._init_reactor or reactor_class:
            log_reactor_info()

        self.extensions = ExtensionManager.from_crawler(self)
        self.settings.freeze()

        d = dict(overridden_settings(self.settings))
        logger.info(
            "Overridden settings:\n%(settings)s", {"settings": pprint.pformat(d)}
        )

    # Cannot use @deferred_f_from_coro_f because that relies on the reactor
    # being installed already, which is done within _apply_settings(), inside
    # this method.
    @inlineCallbacks
    def crawl(self, *args: Any, **kwargs: Any) -> Generator[Deferred[Any], Any, None]:
        """Start the crawler by instantiating its spider class with the given
        *args* and *kwargs* arguments, while setting the execution engine in
        motion. Should be called only once.

        Return a deferred that is fired when the crawl is finished.
        """
        if self.crawling:
            raise RuntimeError("Crawling already taking place")
        if self._started:
            raise RuntimeError(
                "Cannot run Crawler.crawl() more than once on the same instance."
            )
        self.crawling = self._started = True

        try:
            self.spider = self._create_spider(*args, **kwargs)
            self._apply_settings()
            self._update_root_log_handler()
            self.engine = self._create_engine()
            yield deferred_from_coro(self.engine.open_spider_async())
            yield deferred_from_coro(self.engine.start_async())
        except Exception:
            self.crawling = False
            if self.engine is not None:
                yield deferred_from_coro(self.engine.close_async())
            raise

    async def crawl_async(self, *args: Any, **kwargs: Any) -> None:
        """Start the crawler by instantiating its spider class with the given
        *args* and *kwargs* arguments, while setting the execution engine in
        motion. Should be called only once.

        .. versionadded:: VERSION

        Complete when the crawl is finished.
        """
        if self.crawling:
            raise RuntimeError("Crawling already taking place")
        if self._started:
            raise RuntimeError(
                "Cannot run Crawler.crawl_async() more than once on the same instance."
            )
        self.crawling = self._started = True

        try:
            self.spider = self._create_spider(*args, **kwargs)
            self._apply_settings()
            self._update_root_log_handler()
            self.engine = self._create_engine()
            await self.engine.open_spider_async()
            await self.engine.start_async()
        except Exception:
            self.crawling = False
            if self.engine is not None:
                await self.engine.close_async()
            raise

    def _create_spider(self, *args: Any, **kwargs: Any) -> Spider:
        return self.spidercls.from_crawler(self, *args, **kwargs)

    def _create_engine(self) -> ExecutionEngine:
        return ExecutionEngine(self, lambda _: self.stop_async())

    def stop(self) -> Deferred[None]:
        """Start a graceful stop of the crawler and return a deferred that is
        fired when the crawler is stopped."""
        warnings.warn(
            "Crawler.stop() is deprecated, use stop_async() instead",
            ScrapyDeprecationWarning,
            stacklevel=2,
        )
        return deferred_from_coro(self.stop_async())

    async def stop_async(self) -> None:
        """Start a graceful stop of the crawler and complete when the crawler
        is stopped.

        .. versionadded:: VERSION
        """
        if self.crawling:
            self.crawling = False
            assert self.engine
            if self.engine.running:
                await self.engine.stop_async()

    @staticmethod
    def _get_component(
        component_class: type[_T], components: Iterable[Any]
    ) -> _T | None:
        for component in components:
            if isinstance(component, component_class):
                return component
        return None

    def get_addon(self, cls: type[_T]) -> _T | None:
        """Return the run-time instance of an :ref:`add-on <topics-addons>` of
        the specified class or a subclass, or ``None`` if none is found.

        .. versionadded:: 2.12
        """
        return self._get_component(cls, self.addons.addons)

    def get_downloader_middleware(self, cls: type[_T]) -> _T | None:
        """Return the run-time instance of a :ref:`downloader middleware
        <topics-downloader-middleware>` of the specified class or a subclass,
        or ``None`` if none is found.

        .. versionadded:: 2.12

        This method can only be called after the crawl engine has been created,
        e.g. at signals :signal:`engine_started` or :signal:`spider_opened`.
        """
        if not self.engine:
            raise RuntimeError(
                "Crawler.get_downloader_middleware() can only be called after "
                "the crawl engine has been created."
            )
        return self._get_component(cls, self.engine.downloader.middleware.middlewares)

    def get_extension(self, cls: type[_T]) -> _T | None:
        """Return the run-time instance of an :ref:`extension
        <topics-extensions>` of the specified class or a subclass, or ``None``
        if none is found.

        .. versionadded:: 2.12

        This method can only be called after the extension manager has been
        created, e.g. at signals :signal:`engine_started` or
        :signal:`spider_opened`.
        """
        if not self.extensions:
            raise RuntimeError(
                "Crawler.get_extension() can only be called after the "
                "extension manager has been created."
            )
        return self._get_component(cls, self.extensions.middlewares)

    def get_item_pipeline(self, cls: type[_T]) -> _T | None:
        """Return the run-time instance of a :ref:`item pipeline
        <topics-item-pipeline>` of the specified class or a subclass, or
        ``None`` if none is found.

        .. versionadded:: 2.12

        This method can only be called after the crawl engine has been created,
        e.g. at signals :signal:`engine_started` or :signal:`spider_opened`.
        """
        if not self.engine:
            raise RuntimeError(
                "Crawler.get_item_pipeline() can only be called after the "
                "crawl engine has been created."
            )
        return self._get_component(cls, self.engine.scraper.itemproc.middlewares)

    def get_spider_middleware(self, cls: type[_T]) -> _T | None:
        """Return the run-time instance of a :ref:`spider middleware
        <topics-spider-middleware>` of the specified class or a subclass, or
        ``None`` if none is found.

        .. versionadded:: 2.12

        This method can only be called after the crawl engine has been created,
        e.g. at signals :signal:`engine_started` or :signal:`spider_opened`.
        """
        if not self.engine:
            raise RuntimeError(
                "Crawler.get_spider_middleware() can only be called after the "
                "crawl engine has been created."
            )
        return self._get_component(cls, self.engine.scraper.spidermw.middlewares)


class CrawlerRunnerBase(ABC):
    def __init__(self, settings: dict[str, Any] | Settings | None = None):
        if isinstance(settings, dict) or settings is None:
            settings = Settings(settings)
        AddonManager.load_pre_crawler_settings(settings)
        self.settings: Settings = settings
        self.spider_loader: SpiderLoaderProtocol = get_spider_loader(settings)
        self._crawlers: set[Crawler] = set()
        self.bootstrap_failed = False

    @property
    def crawlers(self) -> set[Crawler]:
        """Set of :class:`crawlers <scrapy.crawler.Crawler>` started by
        :meth:`crawl` and managed by this class."""
        return self._crawlers

    def create_crawler(
        self, crawler_or_spidercls: type[Spider] | str | Crawler
    ) -> Crawler:
        """
        Return a :class:`~scrapy.crawler.Crawler` object.

        * If ``crawler_or_spidercls`` is a Crawler, it is returned as-is.
        * If ``crawler_or_spidercls`` is a Spider subclass, a new Crawler
          is constructed for it.
        * If ``crawler_or_spidercls`` is a string, this function finds
          a spider with this name in a Scrapy project (using spider loader),
          then creates a Crawler instance for it.
        """
        if isinstance(crawler_or_spidercls, Spider):
            raise ValueError(
                "The crawler_or_spidercls argument cannot be a spider object, "
                "it must be a spider class (or a Crawler object)"
            )
        if isinstance(crawler_or_spidercls, Crawler):
            return crawler_or_spidercls
        return self._create_crawler(crawler_or_spidercls)

    def _create_crawler(self, spidercls: str | type[Spider]) -> Crawler:
        if isinstance(spidercls, str):
            spidercls = self.spider_loader.load(spidercls)
        return Crawler(spidercls, self.settings)

    @abstractmethod
    def crawl(
        self,
        crawler_or_spidercls: type[Spider] | str | Crawler,
        *args: Any,
        **kwargs: Any,
    ) -> Awaitable[None]:
        raise NotImplementedError


class CrawlerRunner(CrawlerRunnerBase):
    """
    This is a convenient helper class that keeps track of, manages and runs
    crawlers inside an already setup :mod:`~twisted.internet.reactor`.

    The CrawlerRunner object must be instantiated with a
    :class:`~scrapy.settings.Settings` object.

    This class shouldn't be needed (since Scrapy is responsible of using it
    accordingly) unless writing scripts that manually handle the crawling
    process. See :ref:`run-from-script` for an example.

    This class provides Deferred-based APIs. Use :class:`AsyncCrawlerRunner`
    for modern coroutine APIs.
    """

    def __init__(self, settings: dict[str, Any] | Settings | None = None):
        super().__init__(settings)
        self._active: set[Deferred[None]] = set()

    def crawl(
        self,
        crawler_or_spidercls: type[Spider] | str | Crawler,
        *args: Any,
        **kwargs: Any,
    ) -> Deferred[None]:
        """
        Run a crawler with the provided arguments.

        It will call the given Crawler's :meth:`~Crawler.crawl` method, while
        keeping track of it so it can be stopped later.

        If ``crawler_or_spidercls`` isn't a :class:`~scrapy.crawler.Crawler`
        instance, this method will try to create one using this parameter as
        the spider class given to it.

        Returns a deferred that is fired when the crawling is finished.

        :param crawler_or_spidercls: already created crawler, or a spider class
            or spider's name inside the project to create it
        :type crawler_or_spidercls: :class:`~scrapy.crawler.Crawler` instance,
            :class:`~scrapy.spiders.Spider` subclass or string

        :param args: arguments to initialize the spider

        :param kwargs: keyword arguments to initialize the spider
        """
        if isinstance(crawler_or_spidercls, Spider):
            raise ValueError(
                "The crawler_or_spidercls argument cannot be a spider object, "
                "it must be a spider class (or a Crawler object)"
            )
        crawler = self.create_crawler(crawler_or_spidercls)
        return self._crawl(crawler, *args, **kwargs)

    @inlineCallbacks
    def _crawl(
        self, crawler: Crawler, *args: Any, **kwargs: Any
    ) -> Generator[Deferred[Any], Any, None]:
        self.crawlers.add(crawler)
        d = crawler.crawl(*args, **kwargs)
        self._active.add(d)
        try:
            yield d
        finally:
            self.crawlers.discard(crawler)
            self._active.discard(d)
            self.bootstrap_failed |= not getattr(crawler, "spider", None)

    def stop(self) -> Deferred[Any]:
        """
        Stops simultaneously all the crawling jobs taking place.

        Returns a deferred that is fired when they all have ended.
        """
        return DeferredList(deferred_from_coro(c.stop_async()) for c in self.crawlers)

    @inlineCallbacks
    def join(self) -> Generator[Deferred[Any], Any, None]:
        """
        join()

        Returns a deferred that is fired when all managed :attr:`crawlers` have
        completed their executions.
        """
        while self._active:
            yield DeferredList(self._active)


class AsyncCrawlerRunner(CrawlerRunnerBase):
    """
    This is a convenient helper class that keeps track of, manages and runs
    crawlers inside an already setup :mod:`~twisted.internet.reactor`.

    The AsyncCrawlerRunner object must be instantiated with a
    :class:`~scrapy.settings.Settings` object.

    This class shouldn't be needed (since Scrapy is responsible of using it
    accordingly) unless writing scripts that manually handle the crawling
    process. See :ref:`run-from-script` for an example.

    This class provides coroutine APIs.
It requires :class:`~twisted.internet.asyncioreactor.AsyncioSelectorReactor`. """ def __init__(self, settings: dict[str, Any] | Settings | None = None): super().__init__(settings) self._active: set[asyncio.Task[None]] = set() def crawl( self, crawler_or_spidercls: type[Spider] | str | Crawler, *args: Any, **kwargs: Any, ) -> asyncio.Task[None]: """ Run a crawler with the provided arguments. It will call the given Crawler's :meth:`~Crawler.crawl` method, while keeping track of it so it can be stopped later. If ``crawler_or_spidercls`` isn't a :class:`~scrapy.crawler.Crawler` instance, this method will try to create one using this parameter as the spider class given to it. Returns a :class:`~asyncio.Task` object which completes when the crawling is finished. :param crawler_or_spidercls: already created crawler, or a spider class or spider's name inside the project to create it :type crawler_or_spidercls: :class:`~scrapy.crawler.Crawler` instance, :class:`~scrapy.spiders.Spider` subclass or string :param args: arguments to initialize the spider :param kwargs: keyword arguments to initialize the spider """ if isinstance(crawler_or_spidercls, Spider): raise ValueError( "The crawler_or_spidercls argument cannot be a spider object, " "it must be a spider class (or a Crawler object)" ) if not is_asyncio_reactor_installed(): raise RuntimeError( f"{type(self).__name__} requires AsyncioSelectorReactor." ) crawler = self.create_crawler(crawler_or_spidercls) return self._crawl(crawler, *args, **kwargs) def _crawl(self, crawler: Crawler, *args: Any, **kwargs: Any) -> asyncio.Task[None]: # At this point the asyncio loop has been installed either by the user # or by AsyncCrawlerProcess (but it isn't running yet, so no asyncio.create_task()). 
loop = asyncio.get_event_loop() self.crawlers.add(crawler) task = loop.create_task(crawler.crawl_async(*args, **kwargs)) self._active.add(task) def _done(_: asyncio.Task[None]) -> None: self.crawlers.discard(crawler) self._active.discard(task) self.bootstrap_failed |= not getattr(crawler, "spider", None) task.add_done_callback(_done) return task async def stop(self) -> None: """ Stops simultaneously all the crawling jobs taking place. Completes when they all have ended. """ if self.crawlers: await asyncio.wait( [asyncio.create_task(c.stop_async()) for c in self.crawlers] ) async def join(self) -> None: """ Completes when all managed :attr:`crawlers` have completed their executions. """ while self._active: await asyncio.wait(self._active) class CrawlerProcessBase(CrawlerRunnerBase): def __init__( self, settings: dict[str, Any] | Settings | None = None, install_root_handler: bool = True, ): super().__init__(settings) configure_logging(self.settings, install_root_handler) log_scrapy_info(self.settings) @abstractmethod def start( self, stop_after_crawl: bool = True, install_signal_handlers: bool = True ) -> None: raise NotImplementedError def _signal_shutdown(self, signum: int, _: Any) -> None: from twisted.internet import reactor install_shutdown_handlers(self._signal_kill) signame = signal_names[signum] logger.info( "Received %(signame)s, shutting down gracefully. 
Send again to force ", {"signame": signame}, ) reactor.callFromThread(self._graceful_stop_reactor) def _signal_kill(self, signum: int, _: Any) -> None: from twisted.internet import reactor install_shutdown_handlers(signal.SIG_IGN) signame = signal_names[signum] logger.info( "Received %(signame)s twice, forcing unclean shutdown", {"signame": signame} ) reactor.callFromThread(self._stop_reactor) def _setup_reactor(self, install_signal_handlers: bool) -> None: from twisted.internet import reactor resolver_class = load_object(self.settings["DNS_RESOLVER"]) # We pass self, which is CrawlerProcess, instead of Crawler here, # which works because the default resolvers only use crawler.settings. resolver = build_from_crawler(resolver_class, self, reactor=reactor) # type: ignore[arg-type] resolver.install_on_reactor() tp = reactor.getThreadPool() tp.adjustPoolsize(maxthreads=self.settings.getint("REACTOR_THREADPOOL_MAXSIZE")) reactor.addSystemEventTrigger("before", "shutdown", self._stop_dfd) if install_signal_handlers: reactor.addSystemEventTrigger( "after", "startup", install_shutdown_handlers, self._signal_shutdown ) @abstractmethod def _stop_dfd(self) -> Deferred[Any]: raise NotImplementedError @inlineCallbacks def _graceful_stop_reactor(self) -> Generator[Deferred[Any], Any, None]: try: yield self._stop_dfd() finally: self._stop_reactor() def _stop_reactor(self, _: Any = None) -> None: from twisted.internet import reactor # raised if already stopped or in shutdown stage with contextlib.suppress(RuntimeError): reactor.stop() class CrawlerProcess(CrawlerProcessBase, CrawlerRunner): """ A class to run multiple scrapy crawlers in a process simultaneously. This class extends :class:`~scrapy.crawler.CrawlerRunner` by adding support for starting a :mod:`~twisted.internet.reactor` and handling shutdown signals, like the keyboard interrupt command Ctrl-C. It also configures top-level logging. 
This utility should be a better fit than :class:`~scrapy.crawler.CrawlerRunner` if you aren't running another :mod:`~twisted.internet.reactor` within your application. The CrawlerProcess object must be instantiated with a :class:`~scrapy.settings.Settings` object. :param install_root_handler: whether to install root logging handler (default: True) This class shouldn't be needed (since Scrapy is responsible of using it accordingly) unless writing scripts that manually handle the crawling process. See :ref:`run-from-script` for an example. This class provides Deferred-based APIs. Use :class:`AsyncCrawlerProcess` for modern coroutine APIs. """ def __init__( self, settings: dict[str, Any] | Settings | None = None, install_root_handler: bool = True, ): super().__init__(settings, install_root_handler) self._initialized_reactor: bool = False logger.debug("Using CrawlerProcess") def _create_crawler(self, spidercls: type[Spider] | str) -> Crawler: if isinstance(spidercls, str): spidercls = self.spider_loader.load(spidercls) init_reactor = not self._initialized_reactor self._initialized_reactor = True return Crawler(spidercls, self.settings, init_reactor=init_reactor) def _stop_dfd(self) -> Deferred[Any]: return self.stop() def start( self, stop_after_crawl: bool = True, install_signal_handlers: bool = True ) -> None: """ This method starts a :mod:`~twisted.internet.reactor`, adjusts its pool size to :setting:`REACTOR_THREADPOOL_MAXSIZE`, and installs a DNS cache based on :setting:`DNSCACHE_ENABLED` and :setting:`DNSCACHE_SIZE`. If ``stop_after_crawl`` is True, the reactor will be stopped after all crawlers have finished, using :meth:`join`. 
:param bool stop_after_crawl: stop or not the reactor when all crawlers have finished :param bool install_signal_handlers: whether to install the OS signal handlers from Twisted and Scrapy (default: True) """ from twisted.internet import reactor if stop_after_crawl: d = self.join() # Don't start the reactor if the deferreds are already fired if d.called: return d.addBoth(self._stop_reactor) self._setup_reactor(install_signal_handlers) reactor.run(installSignalHandlers=install_signal_handlers) # blocking call class AsyncCrawlerProcess(CrawlerProcessBase, AsyncCrawlerRunner): """ A class to run multiple scrapy crawlers in a process simultaneously. This class extends :class:`~scrapy.crawler.AsyncCrawlerRunner` by adding support for starting a :mod:`~twisted.internet.reactor` and handling shutdown signals, like the keyboard interrupt command Ctrl-C. It also configures top-level logging. This utility should be a better fit than :class:`~scrapy.crawler.AsyncCrawlerRunner` if you aren't running another :mod:`~twisted.internet.reactor` within your application. The AsyncCrawlerProcess object must be instantiated with a :class:`~scrapy.settings.Settings` object. :param install_root_handler: whether to install root logging handler (default: True) This class shouldn't be needed (since Scrapy is responsible of using it accordingly) unless writing scripts that manually handle the crawling process. See :ref:`run-from-script` for an example. This class provides coroutine APIs. It requires :class:`~twisted.internet.asyncioreactor.AsyncioSelectorReactor`. """ def __init__( self, settings: dict[str, Any] | Settings | None = None, install_root_handler: bool = True, ): super().__init__(settings, install_root_handler) logger.debug("Using AsyncCrawlerProcess") # We want the asyncio event loop to be installed early, so that it's # always the correct one. And as we do that, we can also install the # reactor here. 
# The ASYNCIO_EVENT_LOOP setting cannot be overridden by add-ons and # spiders when using AsyncCrawlerProcess. loop_path = self.settings["ASYNCIO_EVENT_LOOP"] if is_reactor_installed(): # The user could install a reactor before this class is instantiated. # We need to make sure the reactor is the correct one and the loop # type matches the setting. verify_installed_reactor(_asyncio_reactor_path) if loop_path: verify_installed_asyncio_event_loop(loop_path) else: install_reactor(_asyncio_reactor_path, loop_path) self._initialized_reactor = True def _stop_dfd(self) -> Deferred[Any]: return deferred_from_coro(self.stop()) def start( self, stop_after_crawl: bool = True, install_signal_handlers: bool = True ) -> None: """ This method starts a :mod:`~twisted.internet.reactor`, adjusts its pool size to :setting:`REACTOR_THREADPOOL_MAXSIZE`, and installs a DNS cache based on :setting:`DNSCACHE_ENABLED` and :setting:`DNSCACHE_SIZE`. If ``stop_after_crawl`` is True, the reactor will be stopped after all crawlers have finished, using :meth:`join`. :param bool stop_after_crawl: stop or not the reactor when all crawlers have finished :param bool install_signal_handlers: whether to install the OS signal handlers from Twisted and Scrapy (default: True) """ from twisted.internet import reactor if stop_after_crawl: loop = asyncio.get_event_loop() join_task = loop.create_task(self.join()) join_task.add_done_callback(self._stop_reactor) self._setup_reactor(install_signal_handlers) reactor.run(installSignalHandlers=install_signal_handlers) # blocking call
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
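The `AsyncCrawlerRunner` code above tracks each crawl as an `asyncio.Task` held in an `_active` set, removes it via a done-callback, and makes `join()` loop until the set drains. Below is a minimal stand-alone sketch of that bookkeeping pattern using only the standard library — `MiniRunner` and `fake_crawl` are invented names for illustration, not Scrapy API:

```python
import asyncio

# Toy sketch of the task-tracking pattern AsyncCrawlerRunner uses:
# each crawl becomes an asyncio.Task kept in an "active" set, a
# done-callback discards it, and join() loops until the set drains,
# because new tasks may be scheduled while it is waiting.
class MiniRunner:
    def __init__(self) -> None:
        self._active: set[asyncio.Task] = set()

    def crawl(self, coro) -> asyncio.Task:
        task = asyncio.get_running_loop().create_task(coro)
        self._active.add(task)
        # The callback receives the finished task and removes it from the set.
        task.add_done_callback(self._active.discard)
        return task

    async def join(self) -> None:
        while self._active:
            await asyncio.wait(self._active)

results: list[int] = []

async def fake_crawl(n: int) -> None:
    await asyncio.sleep(0)  # stand-in for real network work
    results.append(n)

async def main() -> None:
    runner = MiniRunner()
    for n in (1, 2, 3):
        runner.crawl(fake_crawl(n))
    await runner.join()

asyncio.run(main())
print(sorted(results))  # [1, 2, 3]
```

Note one difference from the real class: `AsyncCrawlerRunner._crawl` uses `loop.create_task()` via `asyncio.get_event_loop()` because the loop is installed but not yet running at that point, whereas this sketch schedules from inside a running coroutine.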
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/cmdline.py
scrapy/cmdline.py
from __future__ import annotations import argparse import cProfile import inspect import os import sys from importlib.metadata import entry_points from typing import TYPE_CHECKING, ParamSpec import scrapy from scrapy.commands import BaseRunSpiderCommand, ScrapyCommand, ScrapyHelpFormatter from scrapy.crawler import AsyncCrawlerProcess, CrawlerProcess from scrapy.exceptions import UsageError from scrapy.utils.misc import walk_modules from scrapy.utils.project import get_project_settings, inside_project from scrapy.utils.python import garbage_collect from scrapy.utils.reactor import _asyncio_reactor_path if TYPE_CHECKING: from collections.abc import Callable, Iterable from scrapy.settings import BaseSettings, Settings _P = ParamSpec("_P") class ScrapyArgumentParser(argparse.ArgumentParser): def _parse_optional( self, arg_string: str ) -> tuple[argparse.Action | None, str, str | None] | None: # Support something like ‘-o -:json’, where ‘-:json’ is a value for # ‘-o’, not another parameter. 
if arg_string.startswith("-:"): return None return super()._parse_optional(arg_string) def _iter_command_classes(module_name: str) -> Iterable[type[ScrapyCommand]]: # TODO: add `name` attribute to commands and merge this function with # scrapy.utils.spider.iter_spider_classes for module in walk_modules(module_name): for obj in vars(module).values(): if ( inspect.isclass(obj) and issubclass(obj, ScrapyCommand) and obj.__module__ == module.__name__ and obj not in (ScrapyCommand, BaseRunSpiderCommand) ): yield obj def _get_commands_from_module(module: str, inproject: bool) -> dict[str, ScrapyCommand]: d: dict[str, ScrapyCommand] = {} for cmd in _iter_command_classes(module): if inproject or not cmd.requires_project: cmdname = cmd.__module__.split(".")[-1] d[cmdname] = cmd() return d def _get_commands_from_entry_points( inproject: bool, group: str = "scrapy.commands" ) -> dict[str, ScrapyCommand]: cmds: dict[str, ScrapyCommand] = {} for entry_point in entry_points(group=group): obj = entry_point.load() if inspect.isclass(obj): cmds[entry_point.name] = obj() else: raise ValueError(f"Invalid entry point {entry_point.name}") return cmds def _get_commands_dict( settings: BaseSettings, inproject: bool ) -> dict[str, ScrapyCommand]: cmds = _get_commands_from_module("scrapy.commands", inproject) cmds.update(_get_commands_from_entry_points(inproject)) cmds_module = settings["COMMANDS_MODULE"] if cmds_module: cmds.update(_get_commands_from_module(cmds_module, inproject)) return cmds def _get_project_only_cmds(settings: BaseSettings) -> set[str]: return set(_get_commands_dict(settings, inproject=True)) - set( _get_commands_dict(settings, inproject=False) ) def _pop_command_name(argv: list[str]) -> str | None: for i in range(1, len(argv)): if not argv[i].startswith("-"): return argv.pop(i) return None def _print_header(settings: BaseSettings, inproject: bool) -> None: version = scrapy.__version__ if inproject: print(f"Scrapy {version} - active project: {settings['BOT_NAME']}\n") 
else: print(f"Scrapy {version} - no active project\n") def _print_commands(settings: BaseSettings, inproject: bool) -> None: _print_header(settings, inproject) print("Usage:") print(" scrapy <command> [options] [args]\n") print("Available commands:") cmds = _get_commands_dict(settings, inproject) for cmdname, cmdclass in sorted(cmds.items()): print(f" {cmdname:<13} {cmdclass.short_desc()}") if not inproject: print() print(" [ more ] More commands available when run from project directory") print() print('Use "scrapy <command> -h" to see more info about a command') def _print_unknown_command_msg( settings: BaseSettings, cmdname: str, inproject: bool ) -> None: proj_only_cmds = _get_project_only_cmds(settings) if cmdname in proj_only_cmds and not inproject: cmd_list = ", ".join(sorted(proj_only_cmds)) print( f"The {cmdname} command is not available from this location.\n" f"These commands are only available from within a project: {cmd_list}.\n" ) else: print(f"Unknown command: {cmdname}\n") def _print_unknown_command( settings: BaseSettings, cmdname: str, inproject: bool ) -> None: _print_header(settings, inproject) _print_unknown_command_msg(settings, cmdname, inproject) print('Use "scrapy" to see available commands') def _run_print_help( parser: argparse.ArgumentParser, func: Callable[_P, None], *a: _P.args, **kw: _P.kwargs, ) -> None: try: func(*a, **kw) except UsageError as e: if str(e): parser.error(str(e)) if e.print_help: parser.print_help() sys.exit(2) def execute(argv: list[str] | None = None, settings: Settings | None = None) -> None: if argv is None: argv = sys.argv if settings is None: settings = get_project_settings() # set EDITOR from environment if available try: editor = os.environ["EDITOR"] except KeyError: pass else: settings["EDITOR"] = editor inproject = inside_project() cmds = _get_commands_dict(settings, inproject) cmdname = _pop_command_name(argv) if not cmdname: _print_commands(settings, inproject) sys.exit(0) elif cmdname not in cmds: 
_print_unknown_command(settings, cmdname, inproject) sys.exit(2) cmd = cmds[cmdname] parser = ScrapyArgumentParser( formatter_class=ScrapyHelpFormatter, usage=f"scrapy {cmdname} {cmd.syntax()}", conflict_handler="resolve", description=cmd.long_desc(), ) settings.setdict(cmd.default_settings, priority="command") cmd.settings = settings cmd.add_options(parser) opts, args = parser.parse_known_args(args=argv[1:]) _run_print_help(parser, cmd.process_options, args, opts) if cmd.requires_crawler_process: if settings[ "TWISTED_REACTOR" ] == _asyncio_reactor_path and not settings.getbool("FORCE_CRAWLER_PROCESS"): cmd.crawler_process = AsyncCrawlerProcess(settings) else: cmd.crawler_process = CrawlerProcess(settings) _run_print_help(parser, _run_command, cmd, args, opts) sys.exit(cmd.exitcode) def _run_command(cmd: ScrapyCommand, args: list[str], opts: argparse.Namespace) -> None: if opts.profile: _run_command_profiled(cmd, args, opts) else: cmd.run(args, opts) def _run_command_profiled( cmd: ScrapyCommand, args: list[str], opts: argparse.Namespace ) -> None: if opts.profile: sys.stderr.write(f"scrapy: writing cProfile stats to {opts.profile!r}\n") loc = locals() p = cProfile.Profile() p.runctx("cmd.run(args, opts)", globals(), loc) if opts.profile: p.dump_stats(opts.profile) if __name__ == "__main__": try: execute() finally: # Twisted prints errors in DebugInfo.__del__, but PyPy does not run gc.collect() on exit: # http://doc.pypy.org/en/latest/cpython_differences.html # ?highlight=gc.collect#differences-related-to-garbage-collection-strategies garbage_collect()
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
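`ScrapyArgumentParser` overrides the private `argparse` hook `_parse_optional` so that a token like `-:json` is treated as a value for `-o` rather than as an unknown flag. A minimal stand-alone reproduction of that trick (relying on the same private method, so it may need adjusting across Python versions):

```python
import argparse

# argparse normally classifies any token starting with "-" as an option,
# so "-o -:json" would fail because "-:json" looks like an unknown flag.
# Returning None from _parse_optional tells argparse the token is a plain
# value, which is exactly what ScrapyArgumentParser does for "-:" tokens.
class DashValueParser(argparse.ArgumentParser):
    def _parse_optional(self, arg_string):
        if arg_string.startswith("-:"):
            return None  # not an option: consume it as a positional/value
        return super()._parse_optional(arg_string)

parser = DashValueParser()
parser.add_argument("-o", "--output")
opts = parser.parse_args(["-o", "-:json"])
print(opts.output)  # -:json
```

Without the override, the same `parse_args` call errors out with "argument -o/--output: expected one argument", because `-:json` is classified as an option-like token.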
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/loader/__init__.py
scrapy/loader/__init__.py
""" Item Loader See documentation in docs/topics/loaders.rst """ from __future__ import annotations from typing import TYPE_CHECKING, Any import itemloaders from scrapy.item import Item from scrapy.selector import Selector if TYPE_CHECKING: from scrapy.http import TextResponse class ItemLoader(itemloaders.ItemLoader): """ A user-friendly abstraction to populate an :ref:`item <topics-items>` with data by applying :ref:`field processors <topics-loaders-processors>` to scraped data. When instantiated with a ``selector`` or a ``response`` it supports data extraction from web pages using :ref:`selectors <topics-selectors>`. :param item: The item instance to populate using subsequent calls to :meth:`~ItemLoader.add_xpath`, :meth:`~ItemLoader.add_css`, or :meth:`~ItemLoader.add_value`. :type item: scrapy.item.Item :param selector: The selector to extract data from, when using the :meth:`add_xpath`, :meth:`add_css`, :meth:`replace_xpath`, or :meth:`replace_css` method. :type selector: :class:`~scrapy.Selector` object :param response: The response used to construct the selector using the :attr:`default_selector_class`, unless the selector argument is given, in which case this argument is ignored. :type response: :class:`~scrapy.http.Response` object If no item is given, one is instantiated automatically using the class in :attr:`default_item_class`. The item, selector, response and remaining keyword arguments are assigned to the Loader context (accessible through the :attr:`context` attribute). .. attribute:: item The item object being parsed by this Item Loader. This is mostly used as a property so, when attempting to override this value, you may want to check out :attr:`default_item_class` first. .. attribute:: context The currently active :ref:`Context <loaders-context>` of this Item Loader. .. attribute:: default_item_class An :ref:`item <topics-items>` class (or factory), used to instantiate items when not given in the ``__init__`` method. .. 
attribute:: default_input_processor The default input processor to use for those fields which don't specify one. .. attribute:: default_output_processor The default output processor to use for those fields which don't specify one. .. attribute:: default_selector_class The class used to construct the :attr:`selector` of this :class:`ItemLoader`, if only a response is given in the ``__init__`` method. If a selector is given in the ``__init__`` method this attribute is ignored. This attribute is sometimes overridden in subclasses. .. attribute:: selector The :class:`~scrapy.Selector` object to extract data from. It's either the selector given in the ``__init__`` method or one created from the response given in the ``__init__`` method using the :attr:`default_selector_class`. This attribute is meant to be read-only. """ default_item_class: type = Item default_selector_class = Selector def __init__( self, item: Any = None, selector: Selector | None = None, response: TextResponse | None = None, parent: itemloaders.ItemLoader | None = None, **context: Any, ): if selector is None and response is not None: try: selector = self.default_selector_class(response) except AttributeError: selector = None context.update(response=response) super().__init__(item=item, selector=selector, parent=parent, **context)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
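The `ItemLoader` docstring above describes a contract: values added to a field pass through an input processor on the way in and an output processor when the item is loaded. The following toy class (`MiniLoader` is invented for illustration and is far simpler than the real `itemloaders.ItemLoader`) shows just that accumulate-then-process flow:

```python
# Toy sketch of the processor contract ItemLoader inherits from the
# itemloaders package: add_value() runs the field's input processor and
# accumulates results; load_item() runs each field's output processor
# (defaulting here to take-first, like itemloaders.processors.TakeFirst).
class MiniLoader:
    def __init__(self, input_processors=None, output_processors=None):
        self._in = input_processors or {}
        self._out = output_processors or {}
        self._values = {}

    def add_value(self, field, value):
        processed = self._in.get(field, lambda v: v)(value)
        self._values.setdefault(field, []).append(processed)

    def load_item(self):
        def take_first(values):
            return values[0] if values else None
        return {
            field: self._out.get(field, take_first)(values)
            for field, values in self._values.items()
        }

loader = MiniLoader(
    input_processors={"name": str.strip},   # clean each value on the way in
    output_processors={"tags": list},       # keep all collected values
)
loader.add_value("name", "  Alice  ")
loader.add_value("tags", "python")
loader.add_value("tags", "scrapy")
item = loader.load_item()
print(item)  # {'name': 'Alice', 'tags': ['python', 'scrapy']}
```

The real loader adds selector-backed variants (`add_xpath`, `add_css`) on top of this same accumulate/process core.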
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/settings/default_settings.py
scrapy/settings/default_settings.py
"""This module contains the default values for all settings used by Scrapy. For more information about these settings you can read the settings documentation in docs/topics/settings.rst Scrapy developers, if you add a setting here remember to: * add it in alphabetical order, with the exception that enabling flags and other high-level settings for a group should come first in their group and pairs like host/port and user/password should be in the usual order * group similar settings without leaving blank lines * add its documentation to the available settings documentation (docs/topics/settings.rst) """ import sys from importlib import import_module from pathlib import Path __all__ = [ "ADDONS", "AJAXCRAWL_ENABLED", "AJAXCRAWL_MAXSIZE", "ASYNCIO_EVENT_LOOP", "AUTOTHROTTLE_DEBUG", "AUTOTHROTTLE_ENABLED", "AUTOTHROTTLE_MAX_DELAY", "AUTOTHROTTLE_START_DELAY", "AUTOTHROTTLE_TARGET_CONCURRENCY", "BOT_NAME", "CLOSESPIDER_ERRORCOUNT", "CLOSESPIDER_ITEMCOUNT", "CLOSESPIDER_PAGECOUNT", "CLOSESPIDER_TIMEOUT", "COMMANDS_MODULE", "COMPRESSION_ENABLED", "CONCURRENT_ITEMS", "CONCURRENT_REQUESTS", "CONCURRENT_REQUESTS_PER_DOMAIN", "COOKIES_DEBUG", "COOKIES_ENABLED", "CRAWLSPIDER_FOLLOW_LINKS", "DEFAULT_DROPITEM_LOG_LEVEL", "DEFAULT_ITEM_CLASS", "DEFAULT_REQUEST_HEADERS", "DEPTH_LIMIT", "DEPTH_PRIORITY", "DEPTH_STATS_VERBOSE", "DNSCACHE_ENABLED", "DNSCACHE_SIZE", "DNS_RESOLVER", "DNS_TIMEOUT", "DOWNLOADER", "DOWNLOADER_CLIENTCONTEXTFACTORY", "DOWNLOADER_CLIENT_TLS_CIPHERS", "DOWNLOADER_CLIENT_TLS_METHOD", "DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING", "DOWNLOADER_HTTPCLIENTFACTORY", "DOWNLOADER_MIDDLEWARES", "DOWNLOADER_MIDDLEWARES_BASE", "DOWNLOADER_STATS", "DOWNLOAD_DELAY", "DOWNLOAD_FAIL_ON_DATALOSS", "DOWNLOAD_HANDLERS", "DOWNLOAD_HANDLERS_BASE", "DOWNLOAD_MAXSIZE", "DOWNLOAD_TIMEOUT", "DOWNLOAD_WARNSIZE", "DUPEFILTER_CLASS", "EDITOR", "EXTENSIONS", "EXTENSIONS_BASE", "FEEDS", "FEED_EXPORTERS", "FEED_EXPORTERS_BASE", "FEED_EXPORT_BATCH_ITEM_COUNT", "FEED_EXPORT_ENCODING", 
"FEED_EXPORT_FIELDS", "FEED_EXPORT_INDENT", "FEED_FORMAT", "FEED_STORAGES", "FEED_STORAGES_BASE", "FEED_STORAGE_FTP_ACTIVE", "FEED_STORAGE_GCS_ACL", "FEED_STORAGE_S3_ACL", "FEED_STORE_EMPTY", "FEED_TEMPDIR", "FEED_URI_PARAMS", "FILES_STORE_GCS_ACL", "FILES_STORE_S3_ACL", "FORCE_CRAWLER_PROCESS", "FTP_PASSIVE_MODE", "FTP_PASSWORD", "FTP_USER", "GCS_PROJECT_ID", "HTTPCACHE_ALWAYS_STORE", "HTTPCACHE_DBM_MODULE", "HTTPCACHE_DIR", "HTTPCACHE_ENABLED", "HTTPCACHE_EXPIRATION_SECS", "HTTPCACHE_GZIP", "HTTPCACHE_IGNORE_HTTP_CODES", "HTTPCACHE_IGNORE_MISSING", "HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS", "HTTPCACHE_IGNORE_SCHEMES", "HTTPCACHE_POLICY", "HTTPCACHE_STORAGE", "HTTPPROXY_AUTH_ENCODING", "HTTPPROXY_ENABLED", "IMAGES_STORE_GCS_ACL", "IMAGES_STORE_S3_ACL", "ITEM_PIPELINES", "ITEM_PIPELINES_BASE", "ITEM_PROCESSOR", "JOBDIR", "LOGSTATS_INTERVAL", "LOG_DATEFORMAT", "LOG_ENABLED", "LOG_ENCODING", "LOG_FILE", "LOG_FILE_APPEND", "LOG_FORMAT", "LOG_FORMATTER", "LOG_LEVEL", "LOG_SHORT_NAMES", "LOG_STDOUT", "LOG_VERSIONS", "MAIL_FROM", "MAIL_HOST", "MAIL_PASS", "MAIL_PORT", "MAIL_USER", "MEMDEBUG_ENABLED", "MEMDEBUG_NOTIFY", "MEMUSAGE_CHECK_INTERVAL_SECONDS", "MEMUSAGE_ENABLED", "MEMUSAGE_LIMIT_MB", "MEMUSAGE_NOTIFY_MAIL", "MEMUSAGE_WARNING_MB", "METAREFRESH_ENABLED", "METAREFRESH_IGNORE_TAGS", "METAREFRESH_MAXDELAY", "NEWSPIDER_MODULE", "PERIODIC_LOG_DELTA", "PERIODIC_LOG_STATS", "PERIODIC_LOG_TIMING_ENABLED", "RANDOMIZE_DOWNLOAD_DELAY", "REACTOR_THREADPOOL_MAXSIZE", "REDIRECT_ENABLED", "REDIRECT_MAX_TIMES", "REDIRECT_PRIORITY_ADJUST", "REFERER_ENABLED", "REFERRER_POLICY", "REQUEST_FINGERPRINTER_CLASS", "RETRY_ENABLED", "RETRY_EXCEPTIONS", "RETRY_HTTP_CODES", "RETRY_PRIORITY_ADJUST", "RETRY_TIMES", "ROBOTSTXT_OBEY", "ROBOTSTXT_PARSER", "ROBOTSTXT_USER_AGENT", "SCHEDULER", "SCHEDULER_DEBUG", "SCHEDULER_DISK_QUEUE", "SCHEDULER_MEMORY_QUEUE", "SCHEDULER_PRIORITY_QUEUE", "SCHEDULER_START_DISK_QUEUE", "SCHEDULER_START_MEMORY_QUEUE", "SCRAPER_SLOT_MAX_ACTIVE_SIZE", 
"SPIDER_CONTRACTS", "SPIDER_CONTRACTS_BASE", "SPIDER_LOADER_CLASS", "SPIDER_LOADER_WARN_ONLY", "SPIDER_MIDDLEWARES", "SPIDER_MIDDLEWARES_BASE", "SPIDER_MODULES", "STATSMAILER_RCPTS", "STATS_CLASS", "STATS_DUMP", "TELNETCONSOLE_ENABLED", "TELNETCONSOLE_HOST", "TELNETCONSOLE_PASSWORD", "TELNETCONSOLE_PORT", "TELNETCONSOLE_USERNAME", "TEMPLATES_DIR", "TWISTED_REACTOR", "URLLENGTH_LIMIT", "USER_AGENT", "WARN_ON_GENERATOR_RETURN_VALUE", ] ADDONS = {} AJAXCRAWL_ENABLED = False AJAXCRAWL_MAXSIZE = 32768 ASYNCIO_EVENT_LOOP = None AUTOTHROTTLE_ENABLED = False AUTOTHROTTLE_DEBUG = False AUTOTHROTTLE_MAX_DELAY = 60.0 AUTOTHROTTLE_START_DELAY = 5.0 AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0 BOT_NAME = "scrapybot" CLOSESPIDER_ERRORCOUNT = 0 CLOSESPIDER_ITEMCOUNT = 0 CLOSESPIDER_PAGECOUNT = 0 CLOSESPIDER_TIMEOUT = 0 COMMANDS_MODULE = "" COMPRESSION_ENABLED = True CONCURRENT_ITEMS = 100 CONCURRENT_REQUESTS = 16 CONCURRENT_REQUESTS_PER_DOMAIN = 8 COOKIES_ENABLED = True COOKIES_DEBUG = False CRAWLSPIDER_FOLLOW_LINKS = True DEFAULT_DROPITEM_LOG_LEVEL = "WARNING" DEFAULT_ITEM_CLASS = "scrapy.item.Item" DEFAULT_REQUEST_HEADERS = { "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en", } DEPTH_LIMIT = 0 DEPTH_PRIORITY = 0 DEPTH_STATS_VERBOSE = False DNSCACHE_ENABLED = True DNSCACHE_SIZE = 10000 DNS_RESOLVER = "scrapy.resolver.CachingThreadedResolver" DNS_TIMEOUT = 60 DOWNLOAD_DELAY = 0 DOWNLOAD_FAIL_ON_DATALOSS = True DOWNLOAD_HANDLERS = {} DOWNLOAD_HANDLERS_BASE = { "data": "scrapy.core.downloader.handlers.datauri.DataURIDownloadHandler", "file": "scrapy.core.downloader.handlers.file.FileDownloadHandler", "http": "scrapy.core.downloader.handlers.http11.HTTP11DownloadHandler", "https": "scrapy.core.downloader.handlers.http11.HTTP11DownloadHandler", "s3": "scrapy.core.downloader.handlers.s3.S3DownloadHandler", "ftp": "scrapy.core.downloader.handlers.ftp.FTPDownloadHandler", } DOWNLOAD_MAXSIZE = 1024 * 1024 * 1024 # 1024m DOWNLOAD_WARNSIZE = 32 * 
1024 * 1024  # 32m
DOWNLOAD_TIMEOUT = 180  # 3mins

DOWNLOADER = "scrapy.core.downloader.Downloader"

DOWNLOADER_CLIENTCONTEXTFACTORY = (
    "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
)
DOWNLOADER_CLIENT_TLS_CIPHERS = "DEFAULT"
# Use highest TLS/SSL protocol version supported by the platform, also allowing negotiation:
DOWNLOADER_CLIENT_TLS_METHOD = "TLS"
DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING = False

DOWNLOADER_HTTPCLIENTFACTORY = (
    "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
)

DOWNLOADER_MIDDLEWARES = {}

DOWNLOADER_MIDDLEWARES_BASE = {
    # Engine side
    "scrapy.downloadermiddlewares.offsite.OffsiteMiddleware": 50,
    "scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware": 100,
    "scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware": 300,
    "scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware": 350,
    "scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware": 400,
    "scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": 500,
    "scrapy.downloadermiddlewares.retry.RetryMiddleware": 550,
    "scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware": 560,
    "scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware": 580,
    "scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 590,
    "scrapy.downloadermiddlewares.redirect.RedirectMiddleware": 600,
    "scrapy.downloadermiddlewares.cookies.CookiesMiddleware": 700,
    "scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware": 750,
    "scrapy.downloadermiddlewares.stats.DownloaderStats": 850,
    "scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware": 900,
    # Downloader side
}

DOWNLOADER_STATS = True

DUPEFILTER_CLASS = "scrapy.dupefilters.RFPDupeFilter"

EDITOR = "vi"
if sys.platform == "win32":
    EDITOR = "%s -m idlelib.idle"

EXTENSIONS = {}

EXTENSIONS_BASE = {
    "scrapy.extensions.corestats.CoreStats": 0,
    "scrapy.extensions.logcount.LogCount": 0,
    "scrapy.extensions.telnet.TelnetConsole": 0,
    "scrapy.extensions.memusage.MemoryUsage": 0,
    "scrapy.extensions.memdebug.MemoryDebugger": 0,
    "scrapy.extensions.closespider.CloseSpider": 0,
    "scrapy.extensions.feedexport.FeedExporter": 0,
    "scrapy.extensions.logstats.LogStats": 0,
    "scrapy.extensions.spiderstate.SpiderState": 0,
    "scrapy.extensions.throttle.AutoThrottle": 0,
}

FEEDS = {}
FEED_EXPORT_BATCH_ITEM_COUNT = 0
FEED_EXPORT_ENCODING = None
FEED_EXPORT_FIELDS = None
FEED_EXPORT_INDENT = 0

FEED_EXPORTERS = {}
FEED_EXPORTERS_BASE = {
    "json": "scrapy.exporters.JsonItemExporter",
    "jsonlines": "scrapy.exporters.JsonLinesItemExporter",
    "jsonl": "scrapy.exporters.JsonLinesItemExporter",
    "jl": "scrapy.exporters.JsonLinesItemExporter",
    "csv": "scrapy.exporters.CsvItemExporter",
    "xml": "scrapy.exporters.XmlItemExporter",
    "marshal": "scrapy.exporters.MarshalItemExporter",
    "pickle": "scrapy.exporters.PickleItemExporter",
}
FEED_FORMAT = "jsonlines"
FEED_STORE_EMPTY = True

FEED_STORAGES = {}
FEED_STORAGES_BASE = {
    "": "scrapy.extensions.feedexport.FileFeedStorage",
    "file": "scrapy.extensions.feedexport.FileFeedStorage",
    "ftp": "scrapy.extensions.feedexport.FTPFeedStorage",
    "gs": "scrapy.extensions.feedexport.GCSFeedStorage",
    "s3": "scrapy.extensions.feedexport.S3FeedStorage",
    "stdout": "scrapy.extensions.feedexport.StdoutFeedStorage",
}
FEED_STORAGE_FTP_ACTIVE = False
FEED_STORAGE_GCS_ACL = ""
FEED_STORAGE_S3_ACL = ""
FEED_TEMPDIR = None
FEED_URI_PARAMS = None  # a function to extend uri arguments

FILES_STORE_GCS_ACL = ""
FILES_STORE_S3_ACL = "private"

FORCE_CRAWLER_PROCESS = False

FTP_PASSIVE_MODE = True
FTP_USER = "anonymous"
FTP_PASSWORD = "guest"  # noqa: S105

GCS_PROJECT_ID = None

HTTPCACHE_ENABLED = False
HTTPCACHE_ALWAYS_STORE = False
HTTPCACHE_DBM_MODULE = "dbm"
HTTPCACHE_DIR = "httpcache"
HTTPCACHE_EXPIRATION_SECS = 0
HTTPCACHE_GZIP = False
HTTPCACHE_IGNORE_HTTP_CODES = []
HTTPCACHE_IGNORE_MISSING = False
HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = []
HTTPCACHE_IGNORE_SCHEMES = ["file"]
HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

HTTPPROXY_ENABLED = True
HTTPPROXY_AUTH_ENCODING = "latin-1"

IMAGES_STORE_GCS_ACL = ""
IMAGES_STORE_S3_ACL = "private"

ITEM_PIPELINES = {}
ITEM_PIPELINES_BASE = {}

ITEM_PROCESSOR = "scrapy.pipelines.ItemPipelineManager"

JOBDIR = None

LOG_ENABLED = True
LOG_DATEFORMAT = "%Y-%m-%d %H:%M:%S"
LOG_ENCODING = "utf-8"
LOG_FILE = None
LOG_FILE_APPEND = True
LOG_FORMAT = "%(asctime)s [%(name)s] %(levelname)s: %(message)s"
LOG_FORMATTER = "scrapy.logformatter.LogFormatter"
LOG_LEVEL = "DEBUG"
LOG_SHORT_NAMES = False
LOG_STDOUT = False
LOG_VERSIONS = [
    "lxml",
    "libxml2",
    "cssselect",
    "parsel",
    "w3lib",
    "Twisted",
    "Python",
    "pyOpenSSL",
    "cryptography",
    "Platform",
]

LOGSTATS_INTERVAL = 60.0

MAIL_FROM = "scrapy@localhost"
MAIL_HOST = "localhost"
MAIL_PORT = 25
MAIL_USER = None
MAIL_PASS = None

MEMDEBUG_ENABLED = False  # enable memory debugging
MEMDEBUG_NOTIFY = []  # send memory debugging report by mail at engine shutdown

MEMUSAGE_ENABLED = True
MEMUSAGE_CHECK_INTERVAL_SECONDS = 60.0
MEMUSAGE_LIMIT_MB = 0
MEMUSAGE_NOTIFY_MAIL = []
MEMUSAGE_WARNING_MB = 0

METAREFRESH_ENABLED = True
METAREFRESH_IGNORE_TAGS = ["noscript"]
METAREFRESH_MAXDELAY = 100

NEWSPIDER_MODULE = ""

PERIODIC_LOG_DELTA = None
PERIODIC_LOG_STATS = None
PERIODIC_LOG_TIMING_ENABLED = False

RANDOMIZE_DOWNLOAD_DELAY = True

REACTOR_THREADPOOL_MAXSIZE = 10

REDIRECT_ENABLED = True
REDIRECT_MAX_TIMES = 20  # uses Firefox default setting
REDIRECT_PRIORITY_ADJUST = +2

REFERER_ENABLED = True
REFERRER_POLICY = "scrapy.spidermiddlewares.referer.DefaultReferrerPolicy"

REQUEST_FINGERPRINTER_CLASS = "scrapy.utils.request.RequestFingerprinter"

RETRY_ENABLED = True
RETRY_EXCEPTIONS = [
    "twisted.internet.defer.TimeoutError",
    "twisted.internet.error.TimeoutError",
    "twisted.internet.error.DNSLookupError",
    "twisted.internet.error.ConnectionRefusedError",
    "twisted.internet.error.ConnectionDone",
    "twisted.internet.error.ConnectError",
    "twisted.internet.error.ConnectionLost",
    "twisted.internet.error.TCPTimedOutError",
    "twisted.web.client.ResponseFailed",
    # OSError is raised by the HttpCompression middleware when trying to
    # decompress an empty response
    OSError,
    "scrapy.core.downloader.handlers.http11.TunnelError",
]
RETRY_HTTP_CODES = [500, 502, 503, 504, 522, 524, 408, 429]
RETRY_PRIORITY_ADJUST = -1
RETRY_TIMES = 2  # initial response + 2 retries = 3 requests

ROBOTSTXT_OBEY = False
ROBOTSTXT_PARSER = "scrapy.robotstxt.ProtegoRobotParser"
ROBOTSTXT_USER_AGENT = None

SCHEDULER = "scrapy.core.scheduler.Scheduler"
SCHEDULER_DEBUG = False
SCHEDULER_DISK_QUEUE = "scrapy.squeues.PickleLifoDiskQueue"
SCHEDULER_MEMORY_QUEUE = "scrapy.squeues.LifoMemoryQueue"
SCHEDULER_PRIORITY_QUEUE = "scrapy.pqueues.DownloaderAwarePriorityQueue"
SCHEDULER_START_DISK_QUEUE = "scrapy.squeues.PickleFifoDiskQueue"
SCHEDULER_START_MEMORY_QUEUE = "scrapy.squeues.FifoMemoryQueue"

SCRAPER_SLOT_MAX_ACTIVE_SIZE = 5000000

SPIDER_CONTRACTS = {}
SPIDER_CONTRACTS_BASE = {
    "scrapy.contracts.default.UrlContract": 1,
    "scrapy.contracts.default.CallbackKeywordArgumentsContract": 1,
    "scrapy.contracts.default.MetadataContract": 1,
    "scrapy.contracts.default.ReturnsContract": 2,
    "scrapy.contracts.default.ScrapesContract": 3,
}

SPIDER_LOADER_CLASS = "scrapy.spiderloader.SpiderLoader"
SPIDER_LOADER_WARN_ONLY = False

SPIDER_MIDDLEWARES = {}

SPIDER_MIDDLEWARES_BASE = {
    # Engine side
    "scrapy.spidermiddlewares.start.StartSpiderMiddleware": 25,
    "scrapy.spidermiddlewares.httperror.HttpErrorMiddleware": 50,
    "scrapy.spidermiddlewares.referer.RefererMiddleware": 700,
    "scrapy.spidermiddlewares.urllength.UrlLengthMiddleware": 800,
    "scrapy.spidermiddlewares.depth.DepthMiddleware": 900,
    # Spider side
}

SPIDER_MODULES = []

STATS_CLASS = "scrapy.statscollectors.MemoryStatsCollector"
STATS_DUMP = True

STATSMAILER_RCPTS = []

TELNETCONSOLE_ENABLED = 1
TELNETCONSOLE_HOST = "127.0.0.1"
TELNETCONSOLE_PORT = [6023, 6073]
TELNETCONSOLE_USERNAME = "scrapy"
TELNETCONSOLE_PASSWORD = None

TEMPLATES_DIR = str((Path(__file__).parent / ".." / "templates").resolve())

TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

URLLENGTH_LIMIT = 2083

USER_AGENT = f"Scrapy/{import_module('scrapy').__version__} (+https://scrapy.org)"

WARN_ON_GENERATOR_RETURN_VALUE = True


def __getattr__(name: str):
    if name == "CONCURRENT_REQUESTS_PER_IP":
        import warnings  # noqa: PLC0415

        from scrapy.exceptions import ScrapyDeprecationWarning  # noqa: PLC0415

        warnings.warn(
            "The scrapy.settings.default_settings.CONCURRENT_REQUESTS_PER_IP attribute is deprecated, use scrapy.settings.default_settings.CONCURRENT_REQUESTS_PER_DOMAIN instead.",
            ScrapyDeprecationWarning,
            stacklevel=2,
        )
        return 0
    raise AttributeError
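The module-level `__getattr__` above is the PEP 562 pattern for deprecating a module attribute: normal attributes resolve as usual, and only missing names fall through to the hook, where a deprecation warning can be emitted. The snippet below is a minimal, self-contained sketch of that mechanism; the `demo_settings` module name is hypothetical, and it uses the stdlib `DeprecationWarning` instead of Scrapy's `ScrapyDeprecationWarning`.

```python
import sys
import types
import warnings

# Build a stand-in module; only the __getattr__ mechanism mirrors
# scrapy.settings.default_settings, the names/values are made up.
mod = types.ModuleType("demo_settings")
mod.CONCURRENT_REQUESTS_PER_DOMAIN = 8


def _module_getattr(name):
    # Called only when normal module attribute lookup fails (PEP 562).
    if name == "CONCURRENT_REQUESTS_PER_IP":
        warnings.warn(
            "CONCURRENT_REQUESTS_PER_IP is deprecated, "
            "use CONCURRENT_REQUESTS_PER_DOMAIN instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return 0
    raise AttributeError(name)


mod.__getattr__ = _module_getattr
sys.modules["demo_settings"] = mod

import demo_settings

# Regular attributes bypass __getattr__ entirely.
assert demo_settings.CONCURRENT_REQUESTS_PER_DOMAIN == 8

# The deprecated name still resolves, but a warning is recorded.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert demo_settings.CONCURRENT_REQUESTS_PER_IP == 0
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

Because the hook only runs on lookup misses, defining the real replacement attribute normally keeps the fast path untouched.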
language: python
license: BSD-3-Clause
commit_sha: d1bd8eb49f7aba9289e4ff692006cead8bcd9080
retrieved_at: 2026-01-04T14:38:41.023839Z
truncated: false
repo: scrapy/scrapy
file_url: https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/settings/__init__.py
file_path: scrapy/settings/__init__.py
from __future__ import annotations

import copy
import json
import warnings
from collections.abc import Iterable, Iterator, Mapping, MutableMapping
from importlib import import_module
from pprint import pformat
from typing import TYPE_CHECKING, Any, TypeAlias, cast

from scrapy.exceptions import ScrapyDeprecationWarning
from scrapy.settings import default_settings
from scrapy.utils.misc import load_object

# The key types are restricted in BaseSettings._get_key() to ones supported by JSON,
# see https://github.com/scrapy/scrapy/issues/5383.
_SettingsKey: TypeAlias = bool | float | int | str | None

if TYPE_CHECKING:
    from types import ModuleType

    # https://github.com/python/typing/issues/445#issuecomment-1131458824
    from _typeshed import SupportsItems

    # typing.Self requires Python 3.11
    from typing_extensions import Self

    _SettingsInput: TypeAlias = SupportsItems[_SettingsKey, Any] | str | None


SETTINGS_PRIORITIES: dict[str, int] = {
    "default": 0,
    "command": 10,
    "addon": 15,
    "project": 20,
    "spider": 30,
    "cmdline": 40,
}


def get_settings_priority(priority: int | str) -> int:
    """
    Small helper function that looks up a given string priority in the
    :attr:`~scrapy.settings.SETTINGS_PRIORITIES` dictionary and returns its
    numerical value, or directly returns a given numerical priority.
    """
    if isinstance(priority, str):
        return SETTINGS_PRIORITIES[priority]
    return priority


class SettingsAttribute:
    """Class for storing data related to settings attributes.

    This class is intended for internal usage, you should try Settings class
    for settings configuration, not this one.
    """

    def __init__(self, value: Any, priority: int):
        self.value: Any = value
        self.priority: int
        if isinstance(self.value, BaseSettings):
            self.priority = max(self.value.maxpriority(), priority)
        else:
            self.priority = priority

    def set(self, value: Any, priority: int) -> None:
        """Sets value if priority is higher or equal than current priority."""
        if priority >= self.priority:
            if isinstance(self.value, BaseSettings):
                value = BaseSettings(value, priority=priority)
            self.value = value
            self.priority = priority

    def __repr__(self) -> str:
        return f"<SettingsAttribute value={self.value!r} priority={self.priority}>"


class BaseSettings(MutableMapping[_SettingsKey, Any]):
    """
    Instances of this class behave like dictionaries, but store priorities
    along with their ``(key, value)`` pairs, and can be frozen (i.e. marked
    immutable).

    Key-value entries can be passed on initialization with the ``values``
    argument, and they would take the ``priority`` level (unless ``values`` is
    already an instance of :class:`~scrapy.settings.BaseSettings`, in which
    case the existing priority levels will be kept).

    If the ``priority`` argument is a string, the priority name will be looked
    up in :attr:`~scrapy.settings.SETTINGS_PRIORITIES`. Otherwise, a specific
    integer should be provided.

    Once the object is created, new settings can be loaded or updated with the
    :meth:`~scrapy.settings.BaseSettings.set` method, and can be accessed with
    the square bracket notation of dictionaries, or with the
    :meth:`~scrapy.settings.BaseSettings.get` method of the instance and its
    value conversion variants.

    When requesting a stored key, the value with the highest priority will be
    retrieved.
    """

    __default = object()

    def __init__(self, values: _SettingsInput = None, priority: int | str = "project"):
        self.frozen: bool = False
        self.attributes: dict[_SettingsKey, SettingsAttribute] = {}
        if values:
            self.update(values, priority)

    def __getitem__(self, opt_name: _SettingsKey) -> Any:
        if opt_name not in self:
            return None
        return self.attributes[opt_name].value

    def __contains__(self, name: Any) -> bool:
        return name in self.attributes

    def add_to_list(self, name: _SettingsKey, item: Any) -> None:
        """Append *item* to the :class:`list` setting with the specified *name* if
        *item* is not already in that list.

        This change is applied regardless of the priority of the *name*
        setting. The setting priority is not affected by this change either.
        """
        value: list[str] = self.getlist(name)
        if item not in value:
            self.set(name, [*value, item], self.getpriority(name) or 0)

    def remove_from_list(self, name: _SettingsKey, item: Any) -> None:
        """Remove *item* from the :class:`list` setting with the specified *name*.

        If *item* is missing, raise :exc:`ValueError`.

        This change is applied regardless of the priority of the *name*
        setting. The setting priority is not affected by this change either.
        """
        value: list[str] = self.getlist(name)
        if item not in value:
            raise ValueError(f"{item!r} not found in the {name} setting ({value!r}).")
        self.set(name, [v for v in value if v != item], self.getpriority(name) or 0)

    def get(self, name: _SettingsKey, default: Any = None) -> Any:
        """
        Get a setting value without affecting its original type.

        :param name: the setting name
        :type name: str

        :param default: the value to return if no setting is found
        :type default: object
        """
        if name == "CONCURRENT_REQUESTS_PER_IP" and (
            isinstance(self[name], int) and self[name] != 0
        ):
            warnings.warn(
                "The CONCURRENT_REQUESTS_PER_IP setting is deprecated, use CONCURRENT_REQUESTS_PER_DOMAIN instead.",
                ScrapyDeprecationWarning,
                stacklevel=2,
            )
        return self[name] if self[name] is not None else default

    def getbool(self, name: _SettingsKey, default: bool = False) -> bool:
        """
        Get a setting value as a boolean.

        ``1``, ``'1'``, ``True`` and ``'True'`` return ``True``,
        while ``0``, ``'0'``, ``False``, ``'False'`` and ``None`` return ``False``.

        For example, settings populated through environment variables set to
        ``'0'`` will return ``False`` when using this method.

        :param name: the setting name
        :type name: str

        :param default: the value to return if no setting is found
        :type default: object
        """
        got = self.get(name, default)
        try:
            return bool(int(got))
        except ValueError:
            if got in ("True", "true"):
                return True
            if got in ("False", "false"):
                return False
            raise ValueError(
                "Supported values for boolean settings "
                "are 0/1, True/False, '0'/'1', "
                "'True'/'False' and 'true'/'false'"
            )

    def getint(self, name: _SettingsKey, default: int = 0) -> int:
        """
        Get a setting value as an int.

        :param name: the setting name
        :type name: str

        :param default: the value to return if no setting is found
        :type default: object
        """
        return int(self.get(name, default))

    def getfloat(self, name: _SettingsKey, default: float = 0.0) -> float:
        """
        Get a setting value as a float.

        :param name: the setting name
        :type name: str

        :param default: the value to return if no setting is found
        :type default: object
        """
        return float(self.get(name, default))

    def getlist(
        self, name: _SettingsKey, default: list[Any] | None = None
    ) -> list[Any]:
        """
        Get a setting value as a list.

        If the setting original type is a list, a copy of it will be returned.
        If it's a string it will be split by ",". If it is an empty string, an
        empty list will be returned.

        For example, settings populated through environment variables set to
        ``'one,two'`` will return a list ['one', 'two'] when using this method.

        :param name: the setting name
        :type name: str

        :param default: the value to return if no setting is found
        :type default: object
        """
        value = self.get(name, default or [])
        if not value:
            return []
        if isinstance(value, str):
            value = value.split(",")
        return list(value)

    def getdict(
        self, name: _SettingsKey, default: dict[Any, Any] | None = None
    ) -> dict[Any, Any]:
        """
        Get a setting value as a dictionary. If the setting original type is a
        dictionary, a copy of it will be returned. If it is a string it will be
        evaluated as a JSON dictionary. In the case that it is a
        :class:`~scrapy.settings.BaseSettings` instance itself, it will be
        converted to a dictionary, containing all its current settings values
        as they would be returned by :meth:`~scrapy.settings.BaseSettings.get`,
        and losing all information about priority and mutability.

        :param name: the setting name
        :type name: str

        :param default: the value to return if no setting is found
        :type default: object
        """
        value = self.get(name, default or {})
        if isinstance(value, str):
            value = json.loads(value)
        return dict(value)

    def getdictorlist(
        self,
        name: _SettingsKey,
        default: dict[Any, Any] | list[Any] | tuple[Any] | None = None,
    ) -> dict[Any, Any] | list[Any]:
        """Get a setting value as either a :class:`dict` or a :class:`list`.

        If the setting is already a dict or a list, a copy of it will be
        returned.

        If it is a string it will be evaluated as JSON, or as a comma-separated
        list of strings as a fallback.

        For example, settings populated from the command line will return:

        -   ``{'key1': 'value1', 'key2': 'value2'}`` if set to
            ``'{"key1": "value1", "key2": "value2"}'``

        -   ``['one', 'two']`` if set to ``'["one", "two"]'`` or ``'one,two'``

        :param name: the setting name
        :type name: string

        :param default: the value to return if no setting is found
        :type default: any
        """
        value = self.get(name, default)
        if value is None:
            return {}
        if isinstance(value, str):
            try:
                value_loaded = json.loads(value)
                if not isinstance(value_loaded, (dict, list)):
                    raise ValueError(
                        f"JSON string for setting '{name}' must evaluate to a dict or list, "
                        f"got {type(value_loaded).__name__}: {value_loaded!r}"
                    )
                return value_loaded
            except ValueError:
                return value.split(",")
        if isinstance(value, tuple):
            return list(value)
        if not isinstance(value, (dict, list)):
            raise ValueError(
                f"Setting '{name}' must be a dict, list, tuple, or string, "
                f"got {type(value).__name__}: {value!r}"
            )
        return copy.deepcopy(value)

    def getwithbase(self, name: _SettingsKey) -> BaseSettings:
        """Get a composition of a dictionary-like setting and its `_BASE`
        counterpart.

        :param name: name of the dictionary-like setting
        :type name: str
        """
        if not isinstance(name, str):
            raise ValueError(f"Base setting key must be a string, got {name}")
        compbs = BaseSettings()
        compbs.update(self[name + "_BASE"])
        compbs.update(self[name])
        return compbs

    def getpriority(self, name: _SettingsKey) -> int | None:
        """
        Return the current numerical priority value of a setting, or ``None``
        if the given ``name`` does not exist.

        :param name: the setting name
        :type name: str
        """
        if name not in self:
            return None
        return self.attributes[name].priority

    def maxpriority(self) -> int:
        """
        Return the numerical value of the highest priority present throughout
        all settings, or the numerical value for ``default`` from
        :attr:`~scrapy.settings.SETTINGS_PRIORITIES` if there are no settings
        stored.
        """
        if len(self) > 0:
            return max(cast("int", self.getpriority(name)) for name in self)
        return get_settings_priority("default")

    def replace_in_component_priority_dict(
        self,
        name: _SettingsKey,
        old_cls: type,
        new_cls: type,
        priority: int | None = None,
    ) -> None:
        """Replace *old_cls* with *new_cls* in the *name* :ref:`component
        priority dictionary <component-priority-dictionaries>`.

        If *old_cls* is missing, or has :data:`None` as value, :exc:`KeyError`
        is raised.

        If *old_cls* was present as an import string, even more than once,
        those keys are dropped and replaced by *new_cls*.

        If *priority* is specified, that is the value assigned to *new_cls* in
        the component priority dictionary. Otherwise, the value of *old_cls*
        is used. If *old_cls* was present multiple times (possible with import
        strings) with different values, the value assigned to *new_cls* is one
        of them, with no guarantee about which one it is.

        This change is applied regardless of the priority of the *name*
        setting. The setting priority is not affected by this change either.
        """
        component_priority_dict = self.getdict(name)
        old_priority = None
        for cls_or_path in tuple(component_priority_dict):
            if load_object(cls_or_path) != old_cls:
                continue
            if (old_priority := component_priority_dict.pop(cls_or_path)) is None:
                break
        if old_priority is None:
            raise KeyError(
                f"{old_cls} not found in the {name} setting ({component_priority_dict!r})."
            )
        component_priority_dict[new_cls] = (
            old_priority if priority is None else priority
        )
        self.set(name, component_priority_dict, priority=self.getpriority(name) or 0)

    def __setitem__(self, name: _SettingsKey, value: Any) -> None:
        self.set(name, value)

    def set(
        self, name: _SettingsKey, value: Any, priority: int | str = "project"
    ) -> None:
        """
        Store a key/value attribute with a given priority.

        Settings should be populated *before* configuring the Crawler object
        (through the :meth:`~scrapy.crawler.Crawler.configure` method),
        otherwise they won't have any effect.

        :param name: the setting name
        :type name: str

        :param value: the value to associate with the setting
        :type value: object

        :param priority: the priority of the setting. Should be a key of
            :attr:`~scrapy.settings.SETTINGS_PRIORITIES` or an integer
        :type priority: str or int
        """
        self._assert_mutability()
        priority = get_settings_priority(priority)
        if name not in self:
            if isinstance(value, SettingsAttribute):
                self.attributes[name] = value
            else:
                self.attributes[name] = SettingsAttribute(value, priority)
        else:
            self.attributes[name].set(value, priority)

    def set_in_component_priority_dict(
        self, name: _SettingsKey, cls: type, priority: int | None
    ) -> None:
        """Set the *cls* component in the *name* :ref:`component priority
        dictionary <component-priority-dictionaries>` setting with *priority*.

        If *cls* already exists, its value is updated.

        If *cls* was present as an import string, even more than once, those
        keys are dropped and replaced by *cls*.

        This change is applied regardless of the priority of the *name*
        setting. The setting priority is not affected by this change either.
        """
        component_priority_dict = self.getdict(name)
        for cls_or_path in tuple(component_priority_dict):
            if not isinstance(cls_or_path, str):
                continue
            _cls = load_object(cls_or_path)
            if _cls == cls:
                del component_priority_dict[cls_or_path]
        component_priority_dict[cls] = priority
        self.set(name, component_priority_dict, self.getpriority(name) or 0)

    def setdefault(
        self,
        name: _SettingsKey,
        default: Any = None,
        priority: int | str = "project",
    ) -> Any:
        if name not in self:
            self.set(name, default, priority)
            return default

        return self.attributes[name].value

    def setdefault_in_component_priority_dict(
        self, name: _SettingsKey, cls: type, priority: int | None
    ) -> None:
        """Set the *cls* component in the *name* :ref:`component priority
        dictionary <component-priority-dictionaries>` setting with *priority*
        if not already defined (even as an import string).

        If *cls* is not already defined, it is set regardless of the priority
        of the *name* setting. The setting priority is not affected by this
        change either.
        """
        component_priority_dict = self.getdict(name)
        for cls_or_path in tuple(component_priority_dict):
            if load_object(cls_or_path) == cls:
                return
        component_priority_dict[cls] = priority
        self.set(name, component_priority_dict, self.getpriority(name) or 0)

    def setdict(self, values: _SettingsInput, priority: int | str = "project") -> None:
        self.update(values, priority)

    def setmodule(
        self, module: ModuleType | str, priority: int | str = "project"
    ) -> None:
        """
        Store settings from a module with a given priority.

        This is a helper function that calls
        :meth:`~scrapy.settings.BaseSettings.set` for every globally declared
        uppercase variable of ``module`` with the provided ``priority``.

        :param module: the module or the path of the module
        :type module: types.ModuleType or str

        :param priority: the priority of the settings. Should be a key of
            :attr:`~scrapy.settings.SETTINGS_PRIORITIES` or an integer
        :type priority: str or int
        """
        self._assert_mutability()
        if isinstance(module, str):
            module = import_module(module)
        for key in dir(module):
            if key.isupper():
                self.set(key, getattr(module, key), priority)

    # BaseSettings.update() doesn't support all inputs that MutableMapping.update() supports
    def update(self, values: _SettingsInput, priority: int | str = "project") -> None:  # type: ignore[override]
        """
        Store key/value pairs with a given priority.

        This is a helper function that calls
        :meth:`~scrapy.settings.BaseSettings.set` for every item of ``values``
        with the provided ``priority``.

        If ``values`` is a string, it is assumed to be JSON-encoded and parsed
        into a dict with ``json.loads()`` first. If it is a
        :class:`~scrapy.settings.BaseSettings` instance, the per-key priorities
        will be used and the ``priority`` parameter ignored. This allows
        inserting/updating settings with different priorities with a single
        command.

        :param values: the settings names and values
        :type values: dict or string or :class:`~scrapy.settings.BaseSettings`

        :param priority: the priority of the settings. Should be a key of
            :attr:`~scrapy.settings.SETTINGS_PRIORITIES` or an integer
        :type priority: str or int
        """
        self._assert_mutability()
        if isinstance(values, str):
            values = cast("dict[_SettingsKey, Any]", json.loads(values))
        if values is not None:
            if isinstance(values, BaseSettings):
                for name, value in values.items():
                    self.set(name, value, cast("int", values.getpriority(name)))
            else:
                for name, value in values.items():
                    self.set(name, value, priority)

    def delete(self, name: _SettingsKey, priority: int | str = "project") -> None:
        if name not in self:
            raise KeyError(name)
        self._assert_mutability()
        priority = get_settings_priority(priority)
        if priority >= cast("int", self.getpriority(name)):
            del self.attributes[name]

    def __delitem__(self, name: _SettingsKey) -> None:
        self._assert_mutability()
        del self.attributes[name]

    def _assert_mutability(self) -> None:
        if self.frozen:
            raise TypeError("Trying to modify an immutable Settings object")

    def copy(self) -> Self:
        """
        Make a deep copy of current settings.

        This method returns a new instance of the :class:`Settings` class,
        populated with the same values and their priorities.

        Modifications to the new object won't be reflected on the original
        settings.
        """
        return copy.deepcopy(self)

    def freeze(self) -> None:
        """
        Disable further changes to the current settings.

        After calling this method, the present state of the settings will become
        immutable. Trying to change values through the :meth:`~set` method and
        its variants won't be possible and will be alerted.
        """
        self.frozen = True

    def frozencopy(self) -> Self:
        """
        Return an immutable copy of the current settings.

        Alias for a :meth:`~freeze` call in the object returned by :meth:`copy`.
        """
        copy = self.copy()
        copy.freeze()
        return copy

    def __iter__(self) -> Iterator[_SettingsKey]:
        return iter(self.attributes)

    def __len__(self) -> int:
        return len(self.attributes)

    def _to_dict(self) -> dict[_SettingsKey, Any]:
        return {
            self._get_key(k): (v._to_dict() if isinstance(v, BaseSettings) else v)
            for k, v in self.items()
        }

    def _get_key(self, key_value: Any) -> _SettingsKey:
        return (
            key_value
            if isinstance(key_value, (bool, float, int, str, type(None)))
            else str(key_value)
        )

    def copy_to_dict(self) -> dict[_SettingsKey, Any]:
        """
        Make a copy of current settings and convert to a dict.

        This method returns a new dict populated with the same values
        and their priorities as the current settings.

        Modifications to the returned dict won't be reflected on the original
        settings.

        This method can be useful for example for printing settings
        in Scrapy shell.
        """
        settings = self.copy()
        return settings._to_dict()

    # https://ipython.readthedocs.io/en/stable/config/integrating.html#pretty-printing
    def _repr_pretty_(self, p: Any, cycle: bool) -> None:
        if cycle:
            p.text(repr(self))
        else:
            p.text(pformat(self.copy_to_dict()))

    def pop(self, name: _SettingsKey, default: Any = __default) -> Any:
        try:
            value = self.attributes[name].value
        except KeyError:
            if default is self.__default:
                raise
            return default
        self.__delitem__(name)
        return value


class Settings(BaseSettings):
    """
    This object stores Scrapy settings for the configuration of internal
    components, and can be used for any further customization.

    It is a direct subclass and supports all methods of
    :class:`~scrapy.settings.BaseSettings`. Additionally, after instantiation
    of this class, the new object will have the global default settings
    described on :ref:`topics-settings-ref` already populated.
    """

    def __init__(self, values: _SettingsInput = None, priority: int | str = "project"):
        # Do not pass kwarg values here. We don't want to promote user-defined
        # dicts, and we want to update, not replace, default dicts with the
        # values given by the user
        super().__init__()
        self.setmodule(default_settings, "default")
        # Promote default dictionaries to BaseSettings instances for per-key
        # priorities
        for name, val in self.items():
            if isinstance(val, dict):
                self.set(name, BaseSettings(val, "default"), "default")
        self.update(values, priority)


def iter_default_settings() -> Iterable[tuple[str, Any]]:
    """Return the default settings as an iterator of (name, value) tuples"""
    for name in dir(default_settings):
        if name.isupper():
            yield name, getattr(default_settings, name)


def overridden_settings(
    settings: Mapping[_SettingsKey, Any],
) -> Iterable[tuple[str, Any]]:
    """Return an iterable of the settings that have been overridden"""
    for name, defvalue in iter_default_settings():
        value = settings[name]
        if not isinstance(defvalue, dict) and value != defvalue:
            yield name, value
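The core rule of this file is in `SettingsAttribute.set`: a write only takes effect when its priority is greater than or equal to the stored one. The snippet below is a minimal stand-alone sketch of just that rule, not using Scrapy itself; the `Attr` class and the `"vi"`/`"nano"` values are illustrative stand-ins.

```python
# Numerical priority levels, copied from SETTINGS_PRIORITIES above.
PRIORITIES = {
    "default": 0, "command": 10, "addon": 15,
    "project": 20, "spider": 30, "cmdline": 40,
}


class Attr:
    """Toy version of SettingsAttribute: value plus the priority that set it."""

    def __init__(self, value, priority):
        self.value, self.priority = value, priority

    def set(self, value, priority):
        # Same comparison as SettingsAttribute.set: >= means equal-priority
        # writes also win (last writer at the same level takes effect).
        if priority >= self.priority:
            self.value, self.priority = value, priority


attr = Attr("vi", PRIORITIES["default"])
attr.set("nano", PRIORITIES["project"])   # higher priority: replaces the value
attr.set("emacs", PRIORITIES["default"])  # lower priority: silently ignored
assert attr.value == "nano" and attr.priority == PRIORITIES["project"]

attr.set("ed", PRIORITIES["project"])     # equal priority: also replaces
assert attr.value == "ed"
```

This is why, for example, a per-spider `custom_settings` value ("spider", 30) overrides a project `settings.py` value ("project", 20), while defaults ("default", 0) never clobber anything set elsewhere.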
repo: scrapy/scrapy
file_url: https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/memdebug.py
file_path: scrapy/extensions/memdebug.py
""" MemoryDebugger extension See documentation in docs/topics/extensions.rst """ from __future__ import annotations import gc from typing import TYPE_CHECKING from scrapy import Spider, signals from scrapy.exceptions import NotConfigured from scrapy.utils.trackref import live_refs if TYPE_CHECKING: # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy.crawler import Crawler from scrapy.statscollectors import StatsCollector class MemoryDebugger: def __init__(self, stats: StatsCollector): self.stats: StatsCollector = stats @classmethod def from_crawler(cls, crawler: Crawler) -> Self: if not crawler.settings.getbool("MEMDEBUG_ENABLED"): raise NotConfigured assert crawler.stats o = cls(crawler.stats) crawler.signals.connect(o.spider_closed, signal=signals.spider_closed) return o def spider_closed(self, spider: Spider, reason: str) -> None: gc.collect() self.stats.set_value("memdebug/gc_garbage_count", len(gc.garbage)) for cls, wdict in live_refs.items(): if not wdict: continue self.stats.set_value(f"memdebug/live_refs/{cls.__name__}", len(wdict))
repo: scrapy/scrapy
file_url: https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/memusage.py
file_path: scrapy/extensions/memusage.py
""" MemoryUsage extension See documentation in docs/topics/extensions.rst """ from __future__ import annotations import logging import socket import sys from importlib import import_module from pprint import pformat from typing import TYPE_CHECKING from scrapy import signals from scrapy.exceptions import NotConfigured from scrapy.mail import MailSender from scrapy.utils.asyncio import AsyncioLoopingCall, create_looping_call from scrapy.utils.defer import _schedule_coro from scrapy.utils.engine import get_engine_status if TYPE_CHECKING: from twisted.internet.task import LoopingCall # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy.crawler import Crawler logger = logging.getLogger(__name__) class MemoryUsage: def __init__(self, crawler: Crawler): if not crawler.settings.getbool("MEMUSAGE_ENABLED"): raise NotConfigured try: # stdlib's resource module is only available on unix platforms. self.resource = import_module("resource") except ImportError: raise NotConfigured self.crawler: Crawler = crawler self.warned: bool = False self.notify_mails: list[str] = crawler.settings.getlist("MEMUSAGE_NOTIFY_MAIL") self.limit: int = crawler.settings.getint("MEMUSAGE_LIMIT_MB") * 1024 * 1024 self.warning: int = crawler.settings.getint("MEMUSAGE_WARNING_MB") * 1024 * 1024 self.check_interval: float = crawler.settings.getfloat( "MEMUSAGE_CHECK_INTERVAL_SECONDS" ) self.mail: MailSender = MailSender.from_crawler(crawler) crawler.signals.connect(self.engine_started, signal=signals.engine_started) crawler.signals.connect(self.engine_stopped, signal=signals.engine_stopped) @classmethod def from_crawler(cls, crawler: Crawler) -> Self: return cls(crawler) def get_virtual_size(self) -> int: size: int = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss if sys.platform != "darwin": # on macOS ru_maxrss is in bytes, on Linux it is in KB size *= 1024 return size def engine_started(self) -> None: assert self.crawler.stats 
self.crawler.stats.set_value("memusage/startup", self.get_virtual_size()) self.tasks: list[AsyncioLoopingCall | LoopingCall] = [] tsk = create_looping_call(self.update) self.tasks.append(tsk) tsk.start(self.check_interval, now=True) if self.limit: tsk = create_looping_call(self._check_limit) self.tasks.append(tsk) tsk.start(self.check_interval, now=True) if self.warning: tsk = create_looping_call(self._check_warning) self.tasks.append(tsk) tsk.start(self.check_interval, now=True) def engine_stopped(self) -> None: for tsk in self.tasks: if tsk.running: tsk.stop() def update(self) -> None: assert self.crawler.stats self.crawler.stats.max_value("memusage/max", self.get_virtual_size()) def _check_limit(self) -> None: assert self.crawler.engine assert self.crawler.stats peak_mem_usage = self.get_virtual_size() if peak_mem_usage > self.limit: self.crawler.stats.set_value("memusage/limit_reached", 1) mem = self.limit / 1024 / 1024 logger.error( "Memory usage exceeded %(memusage)dMiB. Shutting down Scrapy...", {"memusage": mem}, extra={"crawler": self.crawler}, ) if self.notify_mails: subj = ( f"{self.crawler.settings['BOT_NAME']} terminated: " f"memory usage exceeded {mem}MiB at {socket.gethostname()}" ) self._send_report(self.notify_mails, subj) self.crawler.stats.set_value("memusage/limit_notified", 1) if self.crawler.engine.spider is not None: _schedule_coro( self.crawler.engine.close_spider_async(reason="memusage_exceeded") ) else: _schedule_coro(self.crawler.stop_async()) else: logger.info( "Peak memory usage is %(virtualsize)dMiB", {"virtualsize": peak_mem_usage / 1024 / 1024}, ) def _check_warning(self) -> None: if self.warned: # warn only once return assert self.crawler.stats if self.get_virtual_size() > self.warning: self.crawler.stats.set_value("memusage/warning_reached", 1) mem = self.warning / 1024 / 1024 logger.warning( "Memory usage reached %(memusage)dMiB", {"memusage": mem}, extra={"crawler": self.crawler}, ) if self.notify_mails: subj = ( 
f"{self.crawler.settings['BOT_NAME']} warning: " f"memory usage reached {mem}MiB at {socket.gethostname()}" ) self._send_report(self.notify_mails, subj) self.crawler.stats.set_value("memusage/warning_notified", 1) self.warned = True def _send_report(self, rcpts: list[str], subject: str) -> None: """send notification mail with some additional useful info""" assert self.crawler.engine assert self.crawler.stats stats = self.crawler.stats s = f"Memory usage at engine startup : {stats.get_value('memusage/startup') / 1024 / 1024}M\r\n" s += f"Maximum memory usage : {stats.get_value('memusage/max') / 1024 / 1024}M\r\n" s += f"Current memory usage : {self.get_virtual_size() / 1024 / 1024}M\r\n" s += ( "ENGINE STATUS ------------------------------------------------------- \r\n" ) s += "\r\n" s += pformat(get_engine_status(self.crawler.engine)) s += "\r\n" self.mail.send(rcpts, subject, s)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/logstats.py
scrapy/extensions/logstats.py
from __future__ import annotations

import logging
from typing import TYPE_CHECKING

from scrapy import Spider, signals
from scrapy.exceptions import NotConfigured
from scrapy.utils.asyncio import AsyncioLoopingCall, create_looping_call

if TYPE_CHECKING:
    from twisted.internet.task import LoopingCall

    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler
    from scrapy.statscollectors import StatsCollector

logger = logging.getLogger(__name__)


class LogStats:
    """Log basic scraping stats periodically like:
    * RPM - Requests per Minute
    * IPM - Items per Minute
    """

    def __init__(self, stats: StatsCollector, interval: float = 60.0):
        self.stats: StatsCollector = stats
        self.interval: float = interval
        self.multiplier: float = 60.0 / self.interval
        self.task: AsyncioLoopingCall | LoopingCall | None = None

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        interval: float = crawler.settings.getfloat("LOGSTATS_INTERVAL")
        if not interval:
            raise NotConfigured
        assert crawler.stats
        o = cls(crawler.stats, interval)
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        return o

    def spider_opened(self, spider: Spider) -> None:
        self.pagesprev: int = 0
        self.itemsprev: int = 0
        self.task = create_looping_call(self.log, spider)
        self.task.start(self.interval)

    def log(self, spider: Spider) -> None:
        self.calculate_stats()
        msg = (
            "Crawled %(pages)d pages (at %(pagerate)d pages/min), "
            "scraped %(items)d items (at %(itemrate)d items/min)"
        )
        log_args = {
            "pages": self.pages,
            "pagerate": self.prate,
            "items": self.items,
            "itemrate": self.irate,
        }
        logger.info(msg, log_args, extra={"spider": spider})

    def calculate_stats(self) -> None:
        self.items: int = self.stats.get_value("item_scraped_count", 0)
        self.pages: int = self.stats.get_value("response_received_count", 0)
        self.irate: float = (self.items - self.itemsprev) * self.multiplier
        self.prate: float = (self.pages - self.pagesprev) * self.multiplier
        self.pagesprev, self.itemsprev = self.pages, self.items

    def spider_closed(self, spider: Spider, reason: str) -> None:
        if self.task and self.task.running:
            self.task.stop()

        rpm_final, ipm_final = self.calculate_final_stats(spider)
        self.stats.set_value("responses_per_minute", rpm_final)
        self.stats.set_value("items_per_minute", ipm_final)

    def calculate_final_stats(
        self, spider: Spider
    ) -> tuple[None, None] | tuple[float, float]:
        start_time = self.stats.get_value("start_time")
        finish_time = self.stats.get_value("finish_time")

        if not start_time or not finish_time:
            return None, None

        mins_elapsed = (finish_time - start_time).seconds / 60

        if mins_elapsed == 0:
            return None, None

        items = self.stats.get_value("item_scraped_count", 0)
        pages = self.stats.get_value("response_received_count", 0)

        return (pages / mins_elapsed), (items / mins_elapsed)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/debug.py
scrapy/extensions/debug.py
"""
Extensions for debugging Scrapy

See documentation in docs/topics/extensions.rst
"""

from __future__ import annotations

import contextlib
import logging
import signal
import sys
import threading
import traceback
from pdb import Pdb
from typing import TYPE_CHECKING

from scrapy.utils.engine import format_engine_status
from scrapy.utils.trackref import format_live_refs

if TYPE_CHECKING:
    from types import FrameType

    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler

logger = logging.getLogger(__name__)


class StackTraceDump:
    def __init__(self, crawler: Crawler):
        self.crawler: Crawler = crawler
        try:
            signal.signal(signal.SIGUSR2, self.dump_stacktrace)  # type: ignore[attr-defined]
            signal.signal(signal.SIGQUIT, self.dump_stacktrace)  # type: ignore[attr-defined]
        except AttributeError:
            # win32 platforms don't support SIGUSR signals
            pass

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        return cls(crawler)

    def dump_stacktrace(self, signum: int, frame: FrameType | None) -> None:
        assert self.crawler.engine
        log_args = {
            "stackdumps": self._thread_stacks(),
            "enginestatus": format_engine_status(self.crawler.engine),
            "liverefs": format_live_refs(),
        }
        logger.info(
            "Dumping stack trace and engine status\n"
            "%(enginestatus)s\n%(liverefs)s\n%(stackdumps)s",
            log_args,
            extra={"crawler": self.crawler},
        )

    def _thread_stacks(self) -> str:
        id2name = {th.ident: th.name for th in threading.enumerate()}
        dumps = ""
        for id_, frame in sys._current_frames().items():
            name = id2name.get(id_, "")
            dump = "".join(traceback.format_stack(frame))
            dumps += f"# Thread: {name}({id_})\n{dump}\n"
        return dumps


class Debugger:
    def __init__(self) -> None:
        # win32 platforms don't support SIGUSR signals
        with contextlib.suppress(AttributeError):
            signal.signal(signal.SIGUSR2, self._enter_debugger)  # type: ignore[attr-defined]

    def _enter_debugger(self, signum: int, frame: FrameType | None) -> None:
        assert frame
        Pdb().set_trace(frame.f_back)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/feedexport.py
scrapy/extensions/feedexport.py
"""
Feed Exports extension

See documentation in docs/topics/feed-exports.rst
"""

from __future__ import annotations

import asyncio
import contextlib
import logging
import re
import sys
import warnings
from abc import ABC, abstractmethod
from collections.abc import Callable, Coroutine
from datetime import datetime, timezone
from pathlib import Path, PureWindowsPath
from tempfile import NamedTemporaryFile
from typing import IO, TYPE_CHECKING, Any, Protocol, TypeAlias, cast
from urllib.parse import unquote, urlparse

from twisted.internet.defer import Deferred, DeferredList
from twisted.internet.threads import deferToThread
from w3lib.url import file_uri_to_path
from zope.interface import Interface, implementer

from scrapy import Spider, signals
from scrapy.exceptions import NotConfigured, ScrapyDeprecationWarning
from scrapy.extensions.postprocessing import PostProcessingManager
from scrapy.utils.asyncio import is_asyncio_available
from scrapy.utils.conf import feed_complete_default_values_from_settings
from scrapy.utils.defer import deferred_from_coro, ensure_awaitable
from scrapy.utils.ftp import ftp_store_file
from scrapy.utils.misc import build_from_crawler, load_object
from scrapy.utils.python import without_none_values

if TYPE_CHECKING:
    from _typeshed import OpenBinaryMode

    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler
    from scrapy.exporters import BaseItemExporter
    from scrapy.settings import BaseSettings, Settings

logger = logging.getLogger(__name__)

UriParamsCallableT: TypeAlias = Callable[
    [dict[str, Any], Spider], dict[str, Any] | None
]


class ItemFilter:
    """
    This will be used by FeedExporter to decide if an item should be allowed
    to be exported to a particular feed.

    :param feed_options: feed specific options passed from FeedExporter
    :type feed_options: dict
    """

    feed_options: dict[str, Any] | None
    item_classes: tuple[type, ...]

    def __init__(self, feed_options: dict[str, Any] | None) -> None:
        self.feed_options = feed_options
        if feed_options is not None:
            self.item_classes = tuple(
                load_object(item_class)
                for item_class in feed_options.get("item_classes") or ()
            )
        else:
            self.item_classes = ()

    def accepts(self, item: Any) -> bool:
        """
        Return ``True`` if `item` should be exported or ``False`` otherwise.

        :param item: scraped item which user wants to check if is acceptable
        :type item: :ref:`Scrapy items <topics-items>`
        :return: `True` if accepted, `False` otherwise
        :rtype: bool
        """
        if self.item_classes:
            return isinstance(item, self.item_classes)
        return True  # accept all items by default


class IFeedStorage(Interface):
    """Interface that all Feed Storages must implement"""

    # pylint: disable=no-self-argument

    def __init__(uri, *, feed_options=None):  # pylint: disable=super-init-not-called
        """Initialize the storage with the parameters given in the URI and the
        feed-specific options (see :setting:`FEEDS`)"""

    def open(spider):
        """Open the storage for the given spider. It must return a file-like
        object that will be used for the exporters"""

    def store(file):
        """Store the given file stream"""


class FeedStorageProtocol(Protocol):
    """Reimplementation of ``IFeedStorage`` that can be used in type hints."""

    def __init__(self, uri: str, *, feed_options: dict[str, Any] | None = None):
        """Initialize the storage with the parameters given in the URI and the
        feed-specific options (see :setting:`FEEDS`)"""

    def open(self, spider: Spider) -> IO[bytes]:
        """Open the storage for the given spider. It must return a file-like
        object that will be used for the exporters"""

    def store(self, file: IO[bytes]) -> Deferred[None] | None:
        """Store the given file stream"""


@implementer(IFeedStorage)
class BlockingFeedStorage(ABC):
    def open(self, spider: Spider) -> IO[bytes]:
        path = spider.crawler.settings["FEED_TEMPDIR"]
        if path and not Path(path).is_dir():
            raise OSError("Not a Directory: " + str(path))

        return NamedTemporaryFile(prefix="feed-", dir=path)

    def store(self, file: IO[bytes]) -> Deferred[None] | None:
        return deferToThread(self._store_in_thread, file)

    @abstractmethod
    def _store_in_thread(self, file: IO[bytes]) -> None:
        raise NotImplementedError


@implementer(IFeedStorage)
class StdoutFeedStorage:
    def __init__(
        self,
        uri: str,
        _stdout: IO[bytes] | None = None,
        *,
        feed_options: dict[str, Any] | None = None,
    ):
        if not _stdout:
            _stdout = sys.stdout.buffer
        self._stdout: IO[bytes] = _stdout
        if feed_options and feed_options.get("overwrite", False) is True:
            logger.warning(
                "Standard output (stdout) storage does not support "
                "overwriting. To suppress this warning, remove the "
                "overwrite option from your FEEDS setting, or set "
                "it to False."
            )

    def open(self, spider: Spider) -> IO[bytes]:
        return self._stdout

    def store(self, file: IO[bytes]) -> Deferred[None] | None:
        pass


@implementer(IFeedStorage)
class FileFeedStorage:
    def __init__(self, uri: str, *, feed_options: dict[str, Any] | None = None):
        self.path: str = file_uri_to_path(uri) if uri.startswith("file://") else uri
        feed_options = feed_options or {}
        self.write_mode: OpenBinaryMode = (
            "wb" if feed_options.get("overwrite", False) else "ab"
        )

    def open(self, spider: Spider) -> IO[bytes]:
        dirname = Path(self.path).parent
        if dirname and not dirname.exists():
            dirname.mkdir(parents=True)
        return Path(self.path).open(self.write_mode)

    def store(self, file: IO[bytes]) -> Deferred[None] | None:
        file.close()
        return None


class S3FeedStorage(BlockingFeedStorage):
    def __init__(
        self,
        uri: str,
        access_key: str | None = None,
        secret_key: str | None = None,
        acl: str | None = None,
        endpoint_url: str | None = None,
        *,
        feed_options: dict[str, Any] | None = None,
        session_token: str | None = None,
        region_name: str | None = None,
    ):
        try:
            import boto3.session  # noqa: PLC0415
        except ImportError:
            raise NotConfigured("missing boto3 library")
        u = urlparse(uri)
        assert u.hostname
        self.bucketname: str = u.hostname
        self.access_key: str | None = u.username or access_key
        self.secret_key: str | None = u.password or secret_key
        self.session_token: str | None = session_token
        self.keyname: str = u.path[1:]  # remove first "/"
        self.acl: str | None = acl
        self.endpoint_url: str | None = endpoint_url
        self.region_name: str | None = region_name
        boto3_session = boto3.session.Session()
        self.s3_client = boto3_session.client(
            "s3",
            aws_access_key_id=self.access_key,
            aws_secret_access_key=self.secret_key,
            aws_session_token=self.session_token,
            endpoint_url=self.endpoint_url,
            region_name=self.region_name,
        )
        if feed_options and feed_options.get("overwrite", True) is False:
            logger.warning(
                "S3 does not support appending to files. To "
                "suppress this warning, remove the overwrite "
                "option from your FEEDS setting or set it to True."
            )

    @classmethod
    def from_crawler(
        cls,
        crawler: Crawler,
        uri: str,
        *,
        feed_options: dict[str, Any] | None = None,
    ) -> Self:
        return cls(
            uri,
            access_key=crawler.settings["AWS_ACCESS_KEY_ID"],
            secret_key=crawler.settings["AWS_SECRET_ACCESS_KEY"],
            session_token=crawler.settings["AWS_SESSION_TOKEN"],
            acl=crawler.settings["FEED_STORAGE_S3_ACL"] or None,
            endpoint_url=crawler.settings["AWS_ENDPOINT_URL"] or None,
            region_name=crawler.settings["AWS_REGION_NAME"] or None,
            feed_options=feed_options,
        )

    def _store_in_thread(self, file: IO[bytes]) -> None:
        file.seek(0)
        kwargs: dict[str, Any] = {"ExtraArgs": {"ACL": self.acl}} if self.acl else {}
        self.s3_client.upload_fileobj(
            Bucket=self.bucketname, Key=self.keyname, Fileobj=file, **kwargs
        )
        file.close()


class GCSFeedStorage(BlockingFeedStorage):
    def __init__(
        self,
        uri: str,
        project_id: str | None,
        acl: str | None,
        *,
        feed_options: dict[str, Any] | None = None,
    ):
        self.project_id: str | None = project_id
        self.acl: str | None = acl
        u = urlparse(uri)
        assert u.hostname
        self.bucket_name: str = u.hostname
        self.blob_name: str = u.path[1:]  # remove first "/"

        if feed_options and feed_options.get("overwrite", True) is False:
            logger.warning(
                "GCS does not support appending to files. To "
                "suppress this warning, remove the overwrite "
                "option from your FEEDS setting or set it to True."
            )

    @classmethod
    def from_crawler(
        cls,
        crawler: Crawler,
        uri: str,
        *,
        feed_options: dict[str, Any] | None = None,
    ) -> Self:
        return cls(
            uri,
            crawler.settings["GCS_PROJECT_ID"],
            crawler.settings["FEED_STORAGE_GCS_ACL"] or None,
            feed_options=feed_options,
        )

    def _store_in_thread(self, file: IO[bytes]) -> None:
        file.seek(0)
        from google.cloud.storage import Client  # noqa: PLC0415

        client = Client(project=self.project_id)
        bucket = client.get_bucket(self.bucket_name)
        blob = bucket.blob(self.blob_name)
        blob.upload_from_file(file, predefined_acl=self.acl)


class FTPFeedStorage(BlockingFeedStorage):
    def __init__(
        self,
        uri: str,
        use_active_mode: bool = False,
        *,
        feed_options: dict[str, Any] | None = None,
    ):
        u = urlparse(uri)
        if not u.hostname:
            raise ValueError(f"Got a storage URI without a hostname: {uri}")
        self.host: str = u.hostname
        self.port: int = int(u.port or "21")
        self.username: str = u.username or ""
        self.password: str = unquote(u.password or "")
        self.path: str = u.path
        self.use_active_mode: bool = use_active_mode
        self.overwrite: bool = not feed_options or feed_options.get("overwrite", True)

    @classmethod
    def from_crawler(
        cls,
        crawler: Crawler,
        uri: str,
        *,
        feed_options: dict[str, Any] | None = None,
    ) -> Self:
        return cls(
            uri,
            use_active_mode=crawler.settings.getbool("FEED_STORAGE_FTP_ACTIVE"),
            feed_options=feed_options,
        )

    def _store_in_thread(self, file: IO[bytes]) -> None:
        ftp_store_file(
            path=self.path,
            file=file,
            host=self.host,
            port=self.port,
            username=self.username,
            password=self.password,
            use_active_mode=self.use_active_mode,
            overwrite=self.overwrite,
        )


class FeedSlot:
    def __init__(
        self,
        storage: FeedStorageProtocol,
        uri: str,
        format: str,  # noqa: A002
        store_empty: bool,
        batch_id: int,
        uri_template: str,
        filter: ItemFilter,  # noqa: A002
        feed_options: dict[str, Any],
        spider: Spider,
        exporters: dict[str, type[BaseItemExporter]],
        settings: BaseSettings,
        crawler: Crawler,
    ):
        self.file: IO[bytes] | None = None
        self.exporter: BaseItemExporter | None = None
        self.storage: FeedStorageProtocol = storage
        # feed params
        self.batch_id: int = batch_id
        self.format: str = format
        self.store_empty: bool = store_empty
        self.uri_template: str = uri_template
        self.uri: str = uri
        self.filter: ItemFilter = filter
        # exporter params
        self.feed_options: dict[str, Any] = feed_options
        self.spider: Spider = spider
        self.exporters: dict[str, type[BaseItemExporter]] = exporters
        self.settings: BaseSettings = settings
        self.crawler: Crawler = crawler
        # flags
        self.itemcount: int = 0
        self._exporting: bool = False
        self._fileloaded: bool = False

    def start_exporting(self) -> None:
        if not self._fileloaded:
            self.file = self.storage.open(self.spider)
            if "postprocessing" in self.feed_options:
                self.file = cast(
                    "IO[bytes]",
                    PostProcessingManager(
                        self.feed_options["postprocessing"],
                        self.file,
                        self.feed_options,
                    ),
                )
            self.exporter = self._get_exporter(
                file=self.file,
                format_=self.feed_options["format"],
                fields_to_export=self.feed_options["fields"],
                encoding=self.feed_options["encoding"],
                indent=self.feed_options["indent"],
                **self.feed_options["item_export_kwargs"],
            )
            self._fileloaded = True

        if not self._exporting:
            assert self.exporter
            self.exporter.start_exporting()
            self._exporting = True

    def _get_exporter(
        self, file: IO[bytes], format_: str, *args: Any, **kwargs: Any
    ) -> BaseItemExporter:
        return build_from_crawler(
            self.exporters[format_], self.crawler, file, *args, **kwargs
        )

    def finish_exporting(self) -> None:
        if self._exporting:
            assert self.exporter
            self.exporter.finish_exporting()
            self._exporting = False


class FeedExporter:
    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        exporter = cls(crawler)
        crawler.signals.connect(exporter.open_spider, signals.spider_opened)
        crawler.signals.connect(exporter.close_spider, signals.spider_closed)
        crawler.signals.connect(exporter.item_scraped, signals.item_scraped)
        return exporter

    def __init__(self, crawler: Crawler):
        self.crawler: Crawler = crawler
        self.settings: Settings = crawler.settings
        self.feeds = {}
        self.slots: list[FeedSlot] = []
        self.filters: dict[str, ItemFilter] = {}
        self._pending_close_coros: list[Coroutine[Any, Any, None]] = []

        if not self.settings["FEEDS"] and not self.settings["FEED_URI"]:
            raise NotConfigured

        # Begin: Backward compatibility for FEED_URI and FEED_FORMAT settings
        if self.settings["FEED_URI"]:
            warnings.warn(
                "The `FEED_URI` and `FEED_FORMAT` settings have been deprecated in favor of "
                "the `FEEDS` setting. Please see the `FEEDS` setting docs for more details",
                category=ScrapyDeprecationWarning,
                stacklevel=2,
            )
            uri = self.settings["FEED_URI"]
            # handle pathlib.Path objects
            uri = str(uri) if not isinstance(uri, Path) else uri.absolute().as_uri()
            feed_options = {"format": self.settings["FEED_FORMAT"]}
            self.feeds[uri] = feed_complete_default_values_from_settings(
                feed_options, self.settings
            )
            self.filters[uri] = self._load_filter(feed_options)
        # End: Backward compatibility for FEED_URI and FEED_FORMAT settings

        # 'FEEDS' setting takes precedence over 'FEED_URI'
        for uri, feed_options in self.settings.getdict("FEEDS").items():
            # handle pathlib.Path objects
            uri = str(uri) if not isinstance(uri, Path) else uri.absolute().as_uri()
            self.feeds[uri] = feed_complete_default_values_from_settings(
                feed_options, self.settings
            )
            self.filters[uri] = self._load_filter(feed_options)

        self.storages: dict[str, type[FeedStorageProtocol]] = self._load_components(
            "FEED_STORAGES"
        )
        self.exporters: dict[str, type[BaseItemExporter]] = self._load_components(
            "FEED_EXPORTERS"
        )
        for uri, feed_options in self.feeds.items():
            if not self._storage_supported(uri, feed_options):
                raise NotConfigured
            if not self._settings_are_valid():
                raise NotConfigured
            if not self._exporter_supported(feed_options["format"]):
                raise NotConfigured

    def open_spider(self, spider: Spider) -> None:
        for uri, feed_options in self.feeds.items():
            uri_params = self._get_uri_params(spider, feed_options["uri_params"])
            self.slots.append(
                self._start_new_batch(
                    batch_id=1,
                    uri=uri % uri_params,
                    feed_options=feed_options,
                    spider=spider,
                    uri_template=uri,
                )
            )

    async def close_spider(self, spider: Spider) -> None:
        self._pending_close_coros.extend(
            self._close_slot(slot, spider) for slot in self.slots
        )
        if self._pending_close_coros:
            if is_asyncio_available():
                await asyncio.wait(
                    [asyncio.create_task(coro) for coro in self._pending_close_coros]
                )
            else:
                await DeferredList(
                    deferred_from_coro(coro) for coro in self._pending_close_coros
                )

        # Send FEED_EXPORTER_CLOSED signal
        await self.crawler.signals.send_catch_log_async(signals.feed_exporter_closed)

    async def _close_slot(self, slot: FeedSlot, spider: Spider) -> None:
        def get_file(slot_: FeedSlot) -> IO[bytes]:
            assert slot_.file
            if isinstance(slot_.file, PostProcessingManager):
                slot_.file.close()
                return slot_.file.file
            return slot_.file

        if slot.itemcount:
            # Normal case
            slot.finish_exporting()
        elif slot.store_empty and slot.batch_id == 1:
            # Need to store the empty file
            slot.start_exporting()
            slot.finish_exporting()
        else:
            # In this case, the file is not stored, so no processing is required.
            return

        logmsg = f"{slot.format} feed ({slot.itemcount} items) in: {slot.uri}"
        slot_type = type(slot.storage).__name__
        assert self.crawler.stats
        try:
            await ensure_awaitable(slot.storage.store(get_file(slot)))
        except Exception:
            logger.error(
                "Error storing %s",
                logmsg,
                exc_info=True,
                extra={"spider": spider},
            )
            self.crawler.stats.inc_value(f"feedexport/failed_count/{slot_type}")
        else:
            logger.info("Stored %s", logmsg, extra={"spider": spider})
            self.crawler.stats.inc_value(f"feedexport/success_count/{slot_type}")

        await self.crawler.signals.send_catch_log_async(
            signals.feed_slot_closed, slot=slot
        )

    def _start_new_batch(
        self,
        batch_id: int,
        uri: str,
        feed_options: dict[str, Any],
        spider: Spider,
        uri_template: str,
    ) -> FeedSlot:
        """
        Redirect the output data stream to a new file.
        Execute multiple times if FEED_EXPORT_BATCH_ITEM_COUNT setting or
        FEEDS.batch_item_count is specified
        :param batch_id: sequence number of current batch
        :param uri: uri of the new batch to start
        :param feed_options: dict with parameters of feed
        :param spider: user spider
        :param uri_template: template of uri which contains %(batch_time)s or
            %(batch_id)d to create new uri
        """
        storage = self._get_storage(uri, feed_options)
        return FeedSlot(
            storage=storage,
            uri=uri,
            format=feed_options["format"],
            store_empty=feed_options["store_empty"],
            batch_id=batch_id,
            uri_template=uri_template,
            filter=self.filters[uri_template],
            feed_options=feed_options,
            spider=spider,
            exporters=self.exporters,
            settings=self.settings,
            crawler=self.crawler,
        )

    def item_scraped(self, item: Any, spider: Spider) -> None:
        slots = []
        for slot in self.slots:
            if not slot.filter.accepts(item):
                slots.append(
                    slot
                )  # if slot doesn't accept item, continue with next slot
                continue

            slot.start_exporting()
            assert slot.exporter
            slot.exporter.export_item(item)
            slot.itemcount += 1
            # create new slot for each slot with itemcount == FEED_EXPORT_BATCH_ITEM_COUNT and close the old one
            if (
                self.feeds[slot.uri_template]["batch_item_count"]
                and slot.itemcount >= self.feeds[slot.uri_template]["batch_item_count"]
            ):
                uri_params = self._get_uri_params(
                    spider, self.feeds[slot.uri_template]["uri_params"], slot
                )
                self._pending_close_coros.append(self._close_slot(slot, spider))
                slots.append(
                    self._start_new_batch(
                        batch_id=slot.batch_id + 1,
                        uri=slot.uri_template % uri_params,
                        feed_options=self.feeds[slot.uri_template],
                        spider=spider,
                        uri_template=slot.uri_template,
                    )
                )
            else:
                slots.append(slot)
        self.slots = slots

    def _load_components(self, setting_prefix: str) -> dict[str, Any]:
        conf = without_none_values(
            cast("dict[str, str]", self.settings.getwithbase(setting_prefix))
        )
        d = {}
        for k, v in conf.items():
            with contextlib.suppress(NotConfigured):
                d[k] = load_object(v)
        return d

    def _exporter_supported(self, format_: str) -> bool:
        if format_ in self.exporters:
            return True
        logger.error("Unknown feed format: %(format)s", {"format": format_})
        return False

    def _settings_are_valid(self) -> bool:
        """
        If FEED_EXPORT_BATCH_ITEM_COUNT setting or FEEDS.batch_item_count is specified
        uri has to contain %(batch_time)s or %(batch_id)d to distinguish different files of partial output
        """
        for uri_template, values in self.feeds.items():
            if values["batch_item_count"] and not re.search(
                r"%\(batch_time\)s|%\(batch_id\)", uri_template
            ):
                logger.error(
                    "%%(batch_time)s or %%(batch_id)d must be in the feed URI (%s) if FEED_EXPORT_BATCH_ITEM_COUNT "
                    "setting or FEEDS.batch_item_count is specified and greater than 0. For more info see: "
                    "https://docs.scrapy.org/en/latest/topics/feed-exports.html#feed-export-batch-item-count",
                    uri_template,
                )
                return False
        return True

    def _storage_supported(self, uri: str, feed_options: dict[str, Any]) -> bool:
        scheme = urlparse(uri).scheme
        if scheme in self.storages or PureWindowsPath(uri).drive:
            try:
                self._get_storage(uri, feed_options)
                return True
            except NotConfigured as e:
                logger.error(
                    "Disabled feed storage scheme: %(scheme)s. Reason: %(reason)s",
                    {"scheme": scheme, "reason": str(e)},
                )
        else:
            logger.error("Unknown feed storage scheme: %(scheme)s", {"scheme": scheme})
        return False

    def _get_storage(
        self, uri: str, feed_options: dict[str, Any]
    ) -> FeedStorageProtocol:
        """Build a storage object for the specified *uri* with the specified
        *feed_options*."""
        cls = self.storages.get(urlparse(uri).scheme, self.storages["file"])
        return build_from_crawler(cls, self.crawler, uri, feed_options=feed_options)

    def _get_uri_params(
        self,
        spider: Spider,
        uri_params_function: str | UriParamsCallableT | None,
        slot: FeedSlot | None = None,
    ) -> dict[str, Any]:
        params = {}
        for k in dir(spider):
            params[k] = getattr(spider, k)
        utc_now = datetime.now(tz=timezone.utc)
        params["time"] = utc_now.replace(microsecond=0).isoformat().replace(":", "-")
        params["batch_time"] = utc_now.isoformat().replace(":", "-")
        params["batch_id"] = slot.batch_id + 1 if slot is not None else 1
        uripar_function: UriParamsCallableT = (
            load_object(uri_params_function)
            if uri_params_function
            else lambda params, _: params
        )
        new_params = uripar_function(params, spider)
        return new_params if new_params is not None else params

    def _load_filter(self, feed_options: dict[str, Any]) -> ItemFilter:
        # load the item filter if declared else load the default filter class
        item_filter_class: type[ItemFilter] = load_object(
            feed_options.get("item_filter", ItemFilter)
        )
        return item_filter_class(feed_options)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/throttle.py
scrapy/extensions/throttle.py
from __future__ import annotations

import logging
from typing import TYPE_CHECKING

from scrapy import Request, Spider, signals
from scrapy.exceptions import NotConfigured

if TYPE_CHECKING:
    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.core.downloader import Slot
    from scrapy.crawler import Crawler
    from scrapy.http import Response

logger = logging.getLogger(__name__)


class AutoThrottle:
    def __init__(self, crawler: Crawler):
        self.crawler: Crawler = crawler
        if not crawler.settings.getbool("AUTOTHROTTLE_ENABLED"):
            raise NotConfigured
        self.debug: bool = crawler.settings.getbool("AUTOTHROTTLE_DEBUG")
        self.target_concurrency: float = crawler.settings.getfloat(
            "AUTOTHROTTLE_TARGET_CONCURRENCY"
        )
        if self.target_concurrency <= 0.0:
            raise NotConfigured(
                f"AUTOTHROTTLE_TARGET_CONCURRENCY "
                f"({self.target_concurrency!r}) must be higher than 0."
            )
        crawler.signals.connect(self._spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(
            self._response_downloaded, signal=signals.response_downloaded
        )

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        return cls(crawler)

    def _spider_opened(self, spider: Spider) -> None:
        self.mindelay = self._min_delay(spider)
        self.maxdelay = self._max_delay(spider)
        spider.download_delay = self._start_delay(spider)  # type: ignore[attr-defined]

    def _min_delay(self, spider: Spider) -> float:
        s = self.crawler.settings
        return getattr(spider, "download_delay", s.getfloat("DOWNLOAD_DELAY"))

    def _max_delay(self, spider: Spider) -> float:
        return self.crawler.settings.getfloat("AUTOTHROTTLE_MAX_DELAY")

    def _start_delay(self, spider: Spider) -> float:
        return max(
            self.mindelay, self.crawler.settings.getfloat("AUTOTHROTTLE_START_DELAY")
        )

    def _response_downloaded(
        self, response: Response, request: Request, spider: Spider
    ) -> None:
        key, slot = self._get_slot(request, spider)
        latency = request.meta.get("download_latency")
        if (
            latency is None
            or slot is None
            or request.meta.get("autothrottle_dont_adjust_delay", False) is True
        ):
            return

        olddelay = slot.delay
        self._adjust_delay(slot, latency, response)
        if self.debug:
            diff = slot.delay - olddelay
            size = len(response.body)
            conc = len(slot.transferring)
            logger.info(
                "slot: %(slot)s | conc:%(concurrency)2d | "
                "delay:%(delay)5d ms (%(delaydiff)+d) | "
                "latency:%(latency)5d ms | size:%(size)6d bytes",
                {
                    "slot": key,
                    "concurrency": conc,
                    "delay": slot.delay * 1000,
                    "delaydiff": diff * 1000,
                    "latency": latency * 1000,
                    "size": size,
                },
                extra={"spider": spider},
            )

    def _get_slot(
        self, request: Request, spider: Spider
    ) -> tuple[str | None, Slot | None]:
        key: str | None = request.meta.get("download_slot")
        if key is None:
            return None, None
        assert self.crawler.engine
        return key, self.crawler.engine.downloader.slots.get(key)

    def _adjust_delay(self, slot: Slot, latency: float, response: Response) -> None:
        """Define delay adjustment policy"""

        # If a server needs `latency` seconds to respond then
        # we should send a request each `latency/N` seconds
        # to have N requests processed in parallel
        target_delay = latency / self.target_concurrency

        # Adjust the delay to make it closer to target_delay
        new_delay = (slot.delay + target_delay) / 2.0

        # If target delay is bigger than old delay, then use it instead of mean.
        # It works better with problematic sites.
        new_delay = max(target_delay, new_delay)

        # Make sure self.mindelay <= new_delay <= self.max_delay
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)

        # Dont adjust delay if response status != 200 and new delay is smaller
        # than old one, as error pages (and redirections) are usually small and
        # so tend to reduce latency, thus provoking a positive feedback by
        # reducing delay instead of increase.
        if response.status != 200 and new_delay <= slot.delay:
            return

        slot.delay = new_delay
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/logcount.py
scrapy/extensions/logcount.py
from __future__ import annotations

import logging
from typing import TYPE_CHECKING

from scrapy import Spider, signals
from scrapy.utils.log import LogCounterHandler

if TYPE_CHECKING:
    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler

logger = logging.getLogger(__name__)


class LogCount:
    """Install a log handler that counts log messages by level.

    The handler installed is :class:`scrapy.utils.log.LogCounterHandler`.
    The counts are stored in stats as ``log_count/<level>``.

    .. versionadded:: VERSION
    """

    def __init__(self, crawler: Crawler):
        self.crawler: Crawler = crawler
        self.handler: LogCounterHandler | None = None

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        o = cls(crawler)
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        return o

    def spider_opened(self, spider: Spider) -> None:
        self.handler = LogCounterHandler(
            self.crawler, level=self.crawler.settings.get("LOG_LEVEL")
        )
        logging.root.addHandler(self.handler)

    def spider_closed(self, spider: Spider, reason: str) -> None:
        if self.handler:
            logging.root.removeHandler(self.handler)
            self.handler = None
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/corestats.py
scrapy/extensions/corestats.py
"""
Extension for collecting core stats like items scraped and start/finish times
"""

from __future__ import annotations

from datetime import datetime, timezone
from typing import TYPE_CHECKING, Any

from scrapy import Spider, signals

if TYPE_CHECKING:
    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler
    from scrapy.statscollectors import StatsCollector


class CoreStats:
    def __init__(self, stats: StatsCollector):
        self.stats: StatsCollector = stats
        self.start_time: datetime | None = None

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        assert crawler.stats
        o = cls(crawler.stats)
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        crawler.signals.connect(o.item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(o.item_dropped, signal=signals.item_dropped)
        crawler.signals.connect(o.response_received, signal=signals.response_received)
        return o

    def spider_opened(self, spider: Spider) -> None:
        self.start_time = datetime.now(tz=timezone.utc)
        self.stats.set_value("start_time", self.start_time)

    def spider_closed(self, spider: Spider, reason: str) -> None:
        assert self.start_time is not None
        finish_time = datetime.now(tz=timezone.utc)
        elapsed_time = finish_time - self.start_time
        elapsed_time_seconds = elapsed_time.total_seconds()
        self.stats.set_value("elapsed_time_seconds", elapsed_time_seconds)
        self.stats.set_value("finish_time", finish_time)
        self.stats.set_value("finish_reason", reason)

    def item_scraped(self, item: Any, spider: Spider) -> None:
        self.stats.inc_value("item_scraped_count")

    def response_received(self, spider: Spider) -> None:
        self.stats.inc_value("response_received_count")

    def item_dropped(self, item: Any, spider: Spider, exception: BaseException) -> None:
        reason = exception.__class__.__name__
        self.stats.inc_value("item_dropped_count")
        self.stats.inc_value(f"item_dropped_reasons_count/{reason}")
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
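The elapsed-time bookkeeping in `CoreStats.spider_closed` reduces to simple `datetime` arithmetic. A minimal stdlib-only sketch, using a plain dict as a stand-in for Scrapy's `StatsCollector` (the dict and the fixed timestamps are illustrative, not part of Scrapy's API):

```python
from datetime import datetime, timedelta, timezone

# A plain dict stands in for Scrapy's StatsCollector here.
stats: dict[str, object] = {}

start_time = datetime(2024, 1, 1, tzinfo=timezone.utc)
finish_time = start_time + timedelta(minutes=2, seconds=30)

# Mirrors CoreStats.spider_closed: record elapsed seconds and finish metadata.
stats["start_time"] = start_time
stats["elapsed_time_seconds"] = (finish_time - start_time).total_seconds()
stats["finish_time"] = finish_time
stats["finish_reason"] = "finished"

print(stats["elapsed_time_seconds"])  # 150.0
```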
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/periodic_log.py
scrapy/extensions/periodic_log.py
from __future__ import annotations

import logging
from datetime import datetime, timezone
from typing import TYPE_CHECKING, Any

from scrapy import Spider, signals
from scrapy.exceptions import NotConfigured
from scrapy.utils.asyncio import AsyncioLoopingCall, create_looping_call
from scrapy.utils.serialize import ScrapyJSONEncoder

if TYPE_CHECKING:
    from json import JSONEncoder

    from twisted.internet.task import LoopingCall

    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler
    from scrapy.statscollectors import StatsCollector

logger = logging.getLogger(__name__)


class PeriodicLog:
    """Log basic scraping stats periodically"""

    def __init__(
        self,
        stats: StatsCollector,
        interval: float = 60.0,
        ext_stats: dict[str, Any] = {},
        ext_delta: dict[str, Any] = {},
        ext_timing_enabled: bool = False,
    ):
        self.stats: StatsCollector = stats
        self.interval: float = interval
        self.multiplier: float = 60.0 / self.interval
        self.task: AsyncioLoopingCall | LoopingCall | None = None
        self.encoder: JSONEncoder = ScrapyJSONEncoder(sort_keys=True, indent=4)
        self.ext_stats_enabled: bool = bool(ext_stats)
        self.ext_stats_include: list[str] = ext_stats.get("include", [])
        self.ext_stats_exclude: list[str] = ext_stats.get("exclude", [])
        self.ext_delta_enabled: bool = bool(ext_delta)
        self.ext_delta_include: list[str] = ext_delta.get("include", [])
        self.ext_delta_exclude: list[str] = ext_delta.get("exclude", [])
        self.ext_timing_enabled: bool = ext_timing_enabled

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        interval: float = crawler.settings.getfloat("LOGSTATS_INTERVAL")
        if not interval:
            raise NotConfigured
        try:
            ext_stats: dict[str, Any] | None = crawler.settings.getdict(
                "PERIODIC_LOG_STATS"
            )
        except (TypeError, ValueError):
            ext_stats = (
                {"enabled": True}
                if crawler.settings.getbool("PERIODIC_LOG_STATS")
                else None
            )
        try:
            ext_delta: dict[str, Any] | None = crawler.settings.getdict(
                "PERIODIC_LOG_DELTA"
            )
        except (TypeError, ValueError):
            ext_delta = (
                {"enabled": True}
                if crawler.settings.getbool("PERIODIC_LOG_DELTA")
                else None
            )
        ext_timing_enabled: bool = crawler.settings.getbool(
            "PERIODIC_LOG_TIMING_ENABLED"
        )
        if not (ext_stats or ext_delta or ext_timing_enabled):
            raise NotConfigured
        assert crawler.stats
        assert ext_stats is not None
        assert ext_delta is not None
        o = cls(
            crawler.stats,
            interval,
            ext_stats,
            ext_delta,
            ext_timing_enabled,
        )
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(o.spider_closed, signal=signals.spider_closed)
        return o

    def spider_opened(self, spider: Spider) -> None:
        self.time_prev: datetime = datetime.now(tz=timezone.utc)
        self.delta_prev: dict[str, int | float] = {}
        self.stats_prev: dict[str, int | float] = {}

        self.task = create_looping_call(self.log)
        self.task.start(self.interval)

    def log(self) -> None:
        data: dict[str, Any] = {}
        if self.ext_timing_enabled:
            data.update(self.log_timing())
        if self.ext_delta_enabled:
            data.update(self.log_delta())
        if self.ext_stats_enabled:
            data.update(self.log_crawler_stats())
        logger.info(self.encoder.encode(data))

    def log_delta(self) -> dict[str, Any]:
        num_stats: dict[str, int | float] = {
            k: v
            for k, v in self.stats._stats.items()
            if isinstance(v, (int, float))
            and self.param_allowed(k, self.ext_delta_include, self.ext_delta_exclude)
        }
        delta = {k: v - self.delta_prev.get(k, 0) for k, v in num_stats.items()}
        self.delta_prev = num_stats
        return {"delta": delta}

    def log_timing(self) -> dict[str, Any]:
        now = datetime.now(tz=timezone.utc)
        time = {
            "log_interval": self.interval,
            "start_time": self.stats._stats["start_time"],
            "utcnow": now,
            "log_interval_real": (now - self.time_prev).total_seconds(),
            "elapsed": (now - self.stats._stats["start_time"]).total_seconds(),
        }
        self.time_prev = now
        return {"time": time}

    def log_crawler_stats(self) -> dict[str, Any]:
        stats = {
            k: v
            for k, v in self.stats._stats.items()
            if self.param_allowed(k, self.ext_stats_include, self.ext_stats_exclude)
        }
        return {"stats": stats}

    def param_allowed(
        self, stat_name: str, include: list[str], exclude: list[str]
    ) -> bool:
        if not include and not exclude:
            return True
        for p in exclude:
            if p in stat_name:
                return False
        if exclude and not include:
            return True
        return any(p in stat_name for p in include)

    def spider_closed(self, spider: Spider, reason: str) -> None:
        self.log()
        if self.task and self.task.running:
            self.task.stop()
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
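The include/exclude filtering in `PeriodicLog.param_allowed` is substring-based: exclusion patterns win, and when an include list exists a stat name must match at least one of its patterns. A standalone sketch of the same logic (the function is lifted out of the class for illustration):

```python
def param_allowed(stat_name: str, include: list[str], exclude: list[str]) -> bool:
    # Mirrors PeriodicLog.param_allowed: exclude wins; otherwise a non-empty
    # include list requires a substring match.
    if not include and not exclude:
        return True
    for p in exclude:
        if p in stat_name:
            return False
    if exclude and not include:
        return True
    return any(p in stat_name for p in include)


print(param_allowed("item_scraped_count", ["item"], []))        # True
print(param_allowed("downloader/request_count", ["item"], []))  # False
print(param_allowed("scheduler/enqueued", [], ["scheduler"]))   # False
```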
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/__init__.py
scrapy/extensions/__init__.py
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/closespider.py
scrapy/extensions/closespider.py
"""CloseSpider is an extension that forces spiders to be closed after certain conditions are met. See documentation in docs/topics/extensions.rst """ from __future__ import annotations import logging from collections import defaultdict from typing import TYPE_CHECKING, Any from scrapy import Request, Spider, signals from scrapy.exceptions import NotConfigured from scrapy.utils.asyncio import ( AsyncioLoopingCall, CallLaterResult, call_later, create_looping_call, ) from scrapy.utils.defer import _schedule_coro if TYPE_CHECKING: from twisted.internet.task import LoopingCall from twisted.python.failure import Failure # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy.crawler import Crawler from scrapy.http import Response logger = logging.getLogger(__name__) class CloseSpider: def __init__(self, crawler: Crawler): self.crawler: Crawler = crawler # for CLOSESPIDER_TIMEOUT self.task: CallLaterResult | None = None # for CLOSESPIDER_TIMEOUT_NO_ITEM self.task_no_item: AsyncioLoopingCall | LoopingCall | None = None self.close_on: dict[str, Any] = { "timeout": crawler.settings.getfloat("CLOSESPIDER_TIMEOUT"), "itemcount": crawler.settings.getint("CLOSESPIDER_ITEMCOUNT"), "pagecount": crawler.settings.getint("CLOSESPIDER_PAGECOUNT"), "errorcount": crawler.settings.getint("CLOSESPIDER_ERRORCOUNT"), "timeout_no_item": crawler.settings.getint("CLOSESPIDER_TIMEOUT_NO_ITEM"), "pagecount_no_item": crawler.settings.getint( "CLOSESPIDER_PAGECOUNT_NO_ITEM" ), } if not any(self.close_on.values()): raise NotConfigured self.counter: defaultdict[str, int] = defaultdict(int) if self.close_on.get("errorcount"): crawler.signals.connect(self.error_count, signal=signals.spider_error) if self.close_on.get("pagecount") or self.close_on.get("pagecount_no_item"): crawler.signals.connect(self.page_count, signal=signals.response_received) if self.close_on.get("timeout"): crawler.signals.connect(self.spider_opened, signal=signals.spider_opened) if 
self.close_on.get("itemcount") or self.close_on.get("pagecount_no_item"): crawler.signals.connect(self.item_scraped, signal=signals.item_scraped) if self.close_on.get("timeout_no_item"): self.timeout_no_item: int = self.close_on["timeout_no_item"] self.items_in_period: int = 0 crawler.signals.connect( self.spider_opened_no_item, signal=signals.spider_opened ) crawler.signals.connect( self.item_scraped_no_item, signal=signals.item_scraped ) crawler.signals.connect(self.spider_closed, signal=signals.spider_closed) @classmethod def from_crawler(cls, crawler: Crawler) -> Self: return cls(crawler) def error_count(self, failure: Failure, response: Response, spider: Spider) -> None: self.counter["errorcount"] += 1 if self.counter["errorcount"] == self.close_on["errorcount"]: self._close_spider("closespider_errorcount") def page_count(self, response: Response, request: Request, spider: Spider) -> None: self.counter["pagecount"] += 1 self.counter["pagecount_since_last_item"] += 1 if self.counter["pagecount"] == self.close_on["pagecount"]: self._close_spider("closespider_pagecount") return if self.close_on["pagecount_no_item"] and ( self.counter["pagecount_since_last_item"] >= self.close_on["pagecount_no_item"] ): self._close_spider("closespider_pagecount_no_item") def spider_opened(self, spider: Spider) -> None: assert self.crawler.engine self.task = call_later( self.close_on["timeout"], self._close_spider, "closespider_timeout" ) def item_scraped(self, item: Any, spider: Spider) -> None: self.counter["itemcount"] += 1 self.counter["pagecount_since_last_item"] = 0 if self.counter["itemcount"] == self.close_on["itemcount"]: self._close_spider("closespider_itemcount") def spider_closed(self, spider: Spider) -> None: if self.task: self.task.cancel() self.task = None if self.task_no_item: if self.task_no_item.running: self.task_no_item.stop() self.task_no_item = None def spider_opened_no_item(self, spider: Spider) -> None: self.task_no_item = 
create_looping_call(self._count_items_produced) self.task_no_item.start(self.timeout_no_item, now=False) logger.info( f"Spider will stop when no items are produced after " f"{self.timeout_no_item} seconds." ) def item_scraped_no_item(self, item: Any, spider: Spider) -> None: self.items_in_period += 1 def _count_items_produced(self) -> None: if self.items_in_period >= 1: self.items_in_period = 0 else: logger.info( f"Closing spider since no items were produced in the last " f"{self.timeout_no_item} seconds." ) self._close_spider("closespider_timeout_no_item") def _close_spider(self, reason: str) -> None: assert self.crawler.engine _schedule_coro(self.crawler.engine.close_spider_async(reason=reason))
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
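The `CLOSESPIDER_PAGECOUNT_NO_ITEM` condition hinges on one counter, `pagecount_since_last_item`, which every received response increments and every scraped item resets. A stdlib-only simulation of that interplay (the loop and threshold are illustrative; real Scrapy drives this through signals):

```python
from collections import defaultdict

counter: defaultdict[str, int] = defaultdict(int)
PAGECOUNT_NO_ITEM = 3  # illustrative threshold, as if set via settings

closed_reason = None
# Five responses arrive; an item is scraped right after the second one.
for page in range(1, 6):
    counter["pagecount_since_last_item"] += 1  # page_count signal handler
    if page == 2:
        counter["pagecount_since_last_item"] = 0  # item_scraped resets the streak
    if (
        counter["pagecount_since_last_item"] >= PAGECOUNT_NO_ITEM
        and closed_reason is None
    ):
        closed_reason = "closespider_pagecount_no_item"

print(closed_reason)  # closespider_pagecount_no_item
```

Pages 3-5 form an item-less streak of length 3, so the condition fires on page 5.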
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/spiderstate.py
scrapy/extensions/spiderstate.py
from __future__ import annotations

import pickle
from pathlib import Path
from typing import TYPE_CHECKING

from scrapy import Spider, signals
from scrapy.exceptions import NotConfigured
from scrapy.utils.job import job_dir

if TYPE_CHECKING:
    # typing.Self requires Python 3.11
    from typing_extensions import Self

    from scrapy.crawler import Crawler


class SpiderState:
    """Store and load spider state during a scraping job"""

    def __init__(self, jobdir: str | None = None):
        self.jobdir: str | None = jobdir

    @classmethod
    def from_crawler(cls, crawler: Crawler) -> Self:
        jobdir = job_dir(crawler.settings)
        if not jobdir:
            raise NotConfigured

        obj = cls(jobdir)
        crawler.signals.connect(obj.spider_closed, signal=signals.spider_closed)
        crawler.signals.connect(obj.spider_opened, signal=signals.spider_opened)
        return obj

    def spider_closed(self, spider: Spider) -> None:
        if self.jobdir:
            with Path(self.statefn).open("wb") as f:
                assert hasattr(spider, "state")  # set in spider_opened
                pickle.dump(spider.state, f, protocol=4)

    def spider_opened(self, spider: Spider) -> None:
        if self.jobdir and Path(self.statefn).exists():
            with Path(self.statefn).open("rb") as f:
                spider.state = pickle.load(f)  # type: ignore[attr-defined]  # noqa: S301
        else:
            spider.state = {}  # type: ignore[attr-defined]

    @property
    def statefn(self) -> str:
        assert self.jobdir
        return str(Path(self.jobdir, "spider.state"))
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
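`SpiderState` is a thin pickle round-trip: `spider.state` (a plain dict) is dumped to `<jobdir>/spider.state` on close and reloaded on open. A self-contained sketch with a temporary directory standing in for the job directory (the `last_page` key is made up for illustration):

```python
import pickle
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as jobdir:
    statefn = Path(jobdir, "spider.state")

    state = {"last_page": 42}
    with statefn.open("wb") as f:
        pickle.dump(state, f, protocol=4)   # what spider_closed does

    with statefn.open("rb") as f:
        restored = pickle.load(f)           # what spider_opened does next run

print(restored)  # {'last_page': 42}
```

Anything stored in `spider.state` must therefore be picklable, or persistence fails at close time.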
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/statsmailer.py
scrapy/extensions/statsmailer.py
""" StatsMailer extension sends an email when a spider finishes scraping. Use STATSMAILER_RCPTS setting to enable and give the recipient mail address """ from __future__ import annotations from typing import TYPE_CHECKING from scrapy import Spider, signals from scrapy.exceptions import NotConfigured from scrapy.mail import MailSender if TYPE_CHECKING: from twisted.internet.defer import Deferred # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy.crawler import Crawler from scrapy.statscollectors import StatsCollector class StatsMailer: def __init__(self, stats: StatsCollector, recipients: list[str], mail: MailSender): self.stats: StatsCollector = stats self.recipients: list[str] = recipients self.mail: MailSender = mail @classmethod def from_crawler(cls, crawler: Crawler) -> Self: recipients: list[str] = crawler.settings.getlist("STATSMAILER_RCPTS") if not recipients: raise NotConfigured mail: MailSender = MailSender.from_crawler(crawler) assert crawler.stats o = cls(crawler.stats, recipients, mail) crawler.signals.connect(o.spider_closed, signal=signals.spider_closed) return o def spider_closed(self, spider: Spider) -> Deferred[None] | None: spider_stats = self.stats.get_stats() body = "Global stats\n\n" body += "\n".join(f"{k:<50} : {v}" for k, v in self.stats.get_stats().items()) body += f"\n\n{spider.name} stats\n\n" body += "\n".join(f"{k:<50} : {v}" for k, v in spider_stats.items()) return self.mail.send(self.recipients, f"Scrapy stats for: {spider.name}", body)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
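The mail body assembled in `StatsMailer.spider_closed` is plain f-string formatting with a left-aligned 50-character key column. A sketch of the same formatting with made-up stats values (the spider name and stats dict are illustrative):

```python
spider_name = "example"
stats = {"item_scraped_count": 10, "finish_reason": "finished"}

# Same layout as StatsMailer.spider_closed: key padded to 50 chars, then value.
body = "Global stats\n\n"
body += "\n".join(f"{k:<50} : {v}" for k, v in stats.items())
body += f"\n\n{spider_name} stats\n\n"
body += "\n".join(f"{k:<50} : {v}" for k, v in stats.items())

print(body.splitlines()[0])  # Global stats
```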
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/telnet.py
scrapy/extensions/telnet.py
""" Scrapy Telnet Console extension See documentation in docs/topics/telnetconsole.rst """ from __future__ import annotations import binascii import logging import os import pprint from typing import TYPE_CHECKING, Any from twisted.conch import telnet from twisted.conch.insults import insults from twisted.internet import protocol from twisted.internet.defer import fail, succeed from scrapy import signals from scrapy.exceptions import NotConfigured from scrapy.utils.engine import print_engine_status from scrapy.utils.reactor import listen_tcp from scrapy.utils.trackref import print_live_refs if TYPE_CHECKING: from twisted.internet.tcp import Port # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy.crawler import Crawler logger = logging.getLogger(__name__) # signal to update telnet variables # args: telnet_vars update_telnet_vars = object() class TelnetConsole(protocol.ServerFactory): def __init__(self, crawler: Crawler): if not crawler.settings.getbool("TELNETCONSOLE_ENABLED"): raise NotConfigured self.crawler: Crawler = crawler self.noisy: bool = False self.portrange: list[int] = [ int(x) for x in crawler.settings.getlist("TELNETCONSOLE_PORT") ] self.host: str = crawler.settings["TELNETCONSOLE_HOST"] self.username: str = crawler.settings["TELNETCONSOLE_USERNAME"] self.password: str = crawler.settings["TELNETCONSOLE_PASSWORD"] if not self.password: self.password = binascii.hexlify(os.urandom(8)).decode("utf8") logger.info("Telnet Password: %s", self.password) self.crawler.signals.connect(self.start_listening, signals.engine_started) self.crawler.signals.connect(self.stop_listening, signals.engine_stopped) @classmethod def from_crawler(cls, crawler: Crawler) -> Self: return cls(crawler) def start_listening(self) -> None: self.port: Port = listen_tcp(self.portrange, self.host, self) h = self.port.getHost() logger.info( "Telnet console listening on %(host)s:%(port)d", {"host": h.host, "port": h.port}, extra={"crawler": self.crawler}, ) 
def stop_listening(self) -> None: self.port.stopListening() def protocol(self) -> telnet.TelnetTransport: class Portal: """An implementation of IPortal""" def login(self_, credentials, mind, *interfaces): # pylint: disable=no-self-argument if not ( credentials.username == self.username.encode("utf8") and credentials.checkPassword(self.password.encode("utf8")) ): return fail(ValueError("Invalid credentials")) from twisted.conch import manhole protocol = telnet.TelnetBootstrapProtocol( insults.ServerProtocol, manhole.Manhole, self._get_telnet_vars() ) return succeed((interfaces[0], protocol, lambda: None)) return telnet.TelnetTransport(telnet.AuthenticatingTelnetProtocol, Portal()) def _get_telnet_vars(self) -> dict[str, Any]: # Note: if you add entries here also update topics/telnetconsole.rst assert self.crawler.engine telnet_vars: dict[str, Any] = { "engine": self.crawler.engine, "spider": self.crawler.engine.spider, "crawler": self.crawler, "extensions": self.crawler.extensions, "stats": self.crawler.stats, "settings": self.crawler.settings, "est": lambda: print_engine_status(self.crawler.engine), "p": pprint.pprint, "prefs": print_live_refs, "help": "This is Scrapy telnet console. For more info see: " "https://docs.scrapy.org/en/latest/topics/telnetconsole.html", } self.crawler.signals.send_catch_log(update_telnet_vars, telnet_vars=telnet_vars) return telnet_vars
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
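When `TELNETCONSOLE_PASSWORD` is unset, `TelnetConsole.__init__` generates one from 8 random bytes hex-encoded to a 16-character string. The same recipe, isolated as a stdlib-only snippet:

```python
import binascii
import os

# 8 random bytes -> 16 lowercase hex characters, as in TelnetConsole.__init__.
password = binascii.hexlify(os.urandom(8)).decode("utf8")
print(len(password))  # 16
```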
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/httpcache.py
scrapy/extensions/httpcache.py
from __future__ import annotations

import gzip
import logging
import pickle
from email.utils import mktime_tz, parsedate_tz
from importlib import import_module
from pathlib import Path
from time import time
from typing import IO, TYPE_CHECKING, Any, Concatenate, cast
from weakref import WeakKeyDictionary

from w3lib.http import headers_dict_to_raw, headers_raw_to_dict

from scrapy.http import Headers, Response
from scrapy.responsetypes import responsetypes
from scrapy.utils.httpobj import urlparse_cached
from scrapy.utils.project import data_path
from scrapy.utils.python import to_bytes, to_unicode

if TYPE_CHECKING:
    import os
    from collections.abc import Callable
    from types import ModuleType

    from scrapy.http.request import Request
    from scrapy.settings import BaseSettings
    from scrapy.spiders import Spider
    from scrapy.utils.request import RequestFingerprinterProtocol

logger = logging.getLogger(__name__)


class DummyPolicy:
    def __init__(self, settings: BaseSettings):
        self.ignore_schemes: list[str] = settings.getlist("HTTPCACHE_IGNORE_SCHEMES")
        self.ignore_http_codes: list[int] = [
            int(x) for x in settings.getlist("HTTPCACHE_IGNORE_HTTP_CODES")
        ]

    def should_cache_request(self, request: Request) -> bool:
        return urlparse_cached(request).scheme not in self.ignore_schemes

    def should_cache_response(self, response: Response, request: Request) -> bool:
        return response.status not in self.ignore_http_codes

    def is_cached_response_fresh(
        self, cachedresponse: Response, request: Request
    ) -> bool:
        return True

    def is_cached_response_valid(
        self, cachedresponse: Response, response: Response, request: Request
    ) -> bool:
        return True


class RFC2616Policy:
    MAXAGE = 3600 * 24 * 365  # one year

    def __init__(self, settings: BaseSettings):
        self.always_store: bool = settings.getbool("HTTPCACHE_ALWAYS_STORE")
        self.ignore_schemes: list[str] = settings.getlist("HTTPCACHE_IGNORE_SCHEMES")
        self._cc_parsed: WeakKeyDictionary[
            Request | Response, dict[bytes, bytes | None]
        ] = WeakKeyDictionary()
        self.ignore_response_cache_controls: list[bytes] = [
            to_bytes(cc)
            for cc in settings.getlist("HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS")
        ]

    def _parse_cachecontrol(self, r: Request | Response) -> dict[bytes, bytes | None]:
        if r not in self._cc_parsed:
            cch = r.headers.get(b"Cache-Control", b"")
            assert cch is not None
            parsed = parse_cachecontrol(cch)
            if isinstance(r, Response):
                for key in self.ignore_response_cache_controls:
                    parsed.pop(key, None)
            self._cc_parsed[r] = parsed
        return self._cc_parsed[r]

    def should_cache_request(self, request: Request) -> bool:
        if urlparse_cached(request).scheme in self.ignore_schemes:
            return False
        cc = self._parse_cachecontrol(request)
        # obey user-agent directive "Cache-Control: no-store"
        return b"no-store" not in cc

    def should_cache_response(self, response: Response, request: Request) -> bool:
        # What is cacheable - https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1
        # Response cacheability - https://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.4
        # Status code 206 is not included because the cache cannot deal with partial contents
        cc = self._parse_cachecontrol(response)
        # obey directive "Cache-Control: no-store"
        if b"no-store" in cc:
            return False
        # Never cache 304 (Not Modified) responses
        if response.status == 304:
            return False
        # Cache unconditionally if configured to do so
        if self.always_store:
            return True
        # Any hint on response expiration is good
        if b"max-age" in cc or b"Expires" in response.headers:
            return True
        # Firefox falls back these statuses to one-year expiration if none is set
        if response.status in (300, 301, 308):
            return True
        # Other statuses without expiration require at least one validator
        if response.status in (200, 203, 401):
            return b"Last-Modified" in response.headers or b"ETag" in response.headers
        # Any other is probably not eligible for caching
        # Makes no sense to cache responses that do not contain expiration
        # info and cannot be revalidated
        return False

    def is_cached_response_fresh(
        self, cachedresponse: Response, request: Request
    ) -> bool:
        cc = self._parse_cachecontrol(cachedresponse)
        ccreq = self._parse_cachecontrol(request)
        if b"no-cache" in cc or b"no-cache" in ccreq:
            return False

        now = time()
        freshnesslifetime = self._compute_freshness_lifetime(
            cachedresponse, request, now
        )
        currentage = self._compute_current_age(cachedresponse, request, now)

        reqmaxage = self._get_max_age(ccreq)
        if reqmaxage is not None:
            freshnesslifetime = min(freshnesslifetime, reqmaxage)

        if currentage < freshnesslifetime:
            return True

        if b"max-stale" in ccreq and b"must-revalidate" not in cc:
            # From RFC2616: "Indicates that the client is willing to
            # accept a response that has exceeded its expiration time.
            # If max-stale is assigned a value, then the client is
            # willing to accept a response that has exceeded its
            # expiration time by no more than the specified number of
            # seconds. If no value is assigned to max-stale, then the
            # client is willing to accept a stale response of any age."
            staleage = ccreq[b"max-stale"]
            if staleage is None:
                return True

            try:
                if currentage < freshnesslifetime + max(0, int(staleage)):
                    return True
            except ValueError:
                pass

        # Cached response is stale, try to set validators if any
        self._set_conditional_validators(request, cachedresponse)
        return False

    def is_cached_response_valid(
        self, cachedresponse: Response, response: Response, request: Request
    ) -> bool:
        # Use the cached response if the new response is a server error,
        # as long as the old response didn't specify must-revalidate.
        if response.status >= 500:
            cc = self._parse_cachecontrol(cachedresponse)
            if b"must-revalidate" not in cc:
                return True

        # Use the cached response if the server says it hasn't changed.
        return response.status == 304

    def _set_conditional_validators(
        self, request: Request, cachedresponse: Response
    ) -> None:
        if b"Last-Modified" in cachedresponse.headers:
            request.headers[b"If-Modified-Since"] = cachedresponse.headers[
                b"Last-Modified"
            ]
        if b"ETag" in cachedresponse.headers:
            request.headers[b"If-None-Match"] = cachedresponse.headers[b"ETag"]

    def _get_max_age(self, cc: dict[bytes, bytes | None]) -> int | None:
        try:
            return max(0, int(cc[b"max-age"]))  # type: ignore[arg-type]
        except (KeyError, ValueError):
            return None

    def _compute_freshness_lifetime(
        self, response: Response, request: Request, now: float
    ) -> float:
        # Reference nsHttpResponseHead::ComputeFreshnessLifetime
        # https://dxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpResponseHead.cpp#706
        cc = self._parse_cachecontrol(response)
        maxage = self._get_max_age(cc)
        if maxage is not None:
            return maxage

        # Parse date header or synthesize it if none exists
        date = rfc1123_to_epoch(response.headers.get(b"Date")) or now

        # Try HTTP/1.0 Expires header
        if b"Expires" in response.headers:
            expires = rfc1123_to_epoch(response.headers[b"Expires"])
            # When parsing the Expires header fails, RFC 2616 section 14.21 says we
            # should treat this as an expiration time in the past.
            return max(0, expires - date) if expires else 0

        # Fallback to heuristic using the Last-Modified header
        # This is not in the RFC but is in Firefox's caching implementation
        lastmodified = rfc1123_to_epoch(response.headers.get(b"Last-Modified"))
        if lastmodified and lastmodified <= date:
            return (date - lastmodified) / 10

        # This request can be cached indefinitely
        if response.status in (300, 301, 308):
            return self.MAXAGE

        # Insufficient information to compute the freshness lifetime
        return 0

    def _compute_current_age(
        self, response: Response, request: Request, now: float
    ) -> float:
        # Reference nsHttpResponseHead::ComputeCurrentAge
        # https://dxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpResponseHead.cpp#658
        currentage: float = 0
        # If the Date header is not set we assume it is a fast connection, and
        # the clock is in sync with the server
        date = rfc1123_to_epoch(response.headers.get(b"Date")) or now
        if now > date:
            currentage = now - date

        if b"Age" in response.headers:
            try:
                age = int(response.headers[b"Age"])  # type: ignore[arg-type]
                currentage = max(currentage, age)
            except ValueError:
                pass

        return currentage


class DbmCacheStorage:
    def __init__(self, settings: BaseSettings):
        self.cachedir: str = data_path(settings["HTTPCACHE_DIR"], createdir=True)
        self.expiration_secs: int = settings.getint("HTTPCACHE_EXPIRATION_SECS")
        self.dbmodule: ModuleType = import_module(settings["HTTPCACHE_DBM_MODULE"])
        self.db: Any = None  # the real type is private

    def open_spider(self, spider: Spider) -> None:
        dbpath = Path(self.cachedir, f"{spider.name}.db")
        self.db = self.dbmodule.open(str(dbpath), "c")

        logger.debug(
            "Using DBM cache storage in %(cachepath)s",
            {"cachepath": dbpath},
            extra={"spider": spider},
        )

        assert spider.crawler.request_fingerprinter
        self._fingerprinter: RequestFingerprinterProtocol = (
            spider.crawler.request_fingerprinter
        )

    def close_spider(self, spider: Spider) -> None:
        self.db.close()

    def retrieve_response(self, spider: Spider, request: Request) -> Response | None:
        data = self._read_data(spider, request)
        if data is None:
            return None  # not cached
        url = data["url"]
        status = data["status"]
        headers = Headers(data["headers"])
        body = data["body"]
        respcls = responsetypes.from_args(headers=headers, url=url, body=body)
        return respcls(url=url, headers=headers, status=status, body=body)

    def store_response(
        self, spider: Spider, request: Request, response: Response
    ) -> None:
        key = self._fingerprinter.fingerprint(request).hex()
        data = {
            "status": response.status,
            "url": response.url,
            "headers": dict(response.headers),
            "body": response.body,
        }
        self.db[f"{key}_data"] = pickle.dumps(data, protocol=4)
        self.db[f"{key}_time"] = str(time())

    def _read_data(self, spider: Spider, request: Request) -> dict[str, Any] | None:
        key = self._fingerprinter.fingerprint(request).hex()
        db = self.db
        tkey = f"{key}_time"
        if tkey not in db:
            return None  # not found

        ts = db[tkey]
        if 0 < self.expiration_secs < time() - float(ts):
            return None  # expired

        return cast("dict[str, Any]", pickle.loads(db[f"{key}_data"]))  # noqa: S301


class FilesystemCacheStorage:
    def __init__(self, settings: BaseSettings):
        self.cachedir: str = data_path(settings["HTTPCACHE_DIR"])
        self.expiration_secs: int = settings.getint("HTTPCACHE_EXPIRATION_SECS")
        self.use_gzip: bool = settings.getbool("HTTPCACHE_GZIP")
        # https://github.com/python/mypy/issues/10740
        self._open: Callable[Concatenate[str | os.PathLike, str, ...], IO[bytes]] = (
            gzip.open if self.use_gzip else open  # type: ignore[assignment]
        )

    def open_spider(self, spider: Spider) -> None:
        logger.debug(
            "Using filesystem cache storage in %(cachedir)s",
            {"cachedir": self.cachedir},
            extra={"spider": spider},
        )

        assert spider.crawler.request_fingerprinter
        self._fingerprinter = spider.crawler.request_fingerprinter

    def close_spider(self, spider: Spider) -> None:
        pass

    def retrieve_response(self, spider: Spider, request: Request) -> Response | None:
        """Return response if present in cache, or None otherwise."""
        metadata = self._read_meta(spider, request)
        if metadata is None:
            return None  # not cached
        rpath = Path(self._get_request_path(spider, request))
        with self._open(rpath / "response_body", "rb") as f:
            body = f.read()
        with self._open(rpath / "response_headers", "rb") as f:
            rawheaders = f.read()
        url = metadata["response_url"]
        status = metadata["status"]
        headers = Headers(headers_raw_to_dict(rawheaders))
        respcls = responsetypes.from_args(headers=headers, url=url, body=body)
        return respcls(url=url, headers=headers, status=status, body=body)

    def store_response(
        self, spider: Spider, request: Request, response: Response
    ) -> None:
        """Store the given response in the cache."""
        rpath = Path(self._get_request_path(spider, request))
        if not rpath.exists():
            rpath.mkdir(parents=True)
        metadata = {
            "url": request.url,
            "method": request.method,
            "status": response.status,
            "response_url": response.url,
            "timestamp": time(),
        }
        with self._open(rpath / "meta", "wb") as f:
            f.write(to_bytes(repr(metadata)))
        with self._open(rpath / "pickled_meta", "wb") as f:
            pickle.dump(metadata, f, protocol=4)
        with self._open(rpath / "response_headers", "wb") as f:
            f.write(headers_dict_to_raw(response.headers))
        with self._open(rpath / "response_body", "wb") as f:
            f.write(response.body)
        with self._open(rpath / "request_headers", "wb") as f:
            f.write(headers_dict_to_raw(request.headers))
        with self._open(rpath / "request_body", "wb") as f:
            f.write(request.body)

    def _get_request_path(self, spider: Spider, request: Request) -> str:
        key = self._fingerprinter.fingerprint(request).hex()
        return str(Path(self.cachedir, spider.name, key[0:2], key))

    def _read_meta(self, spider: Spider, request: Request) -> dict[str, Any] | None:
        rpath = Path(self._get_request_path(spider, request))
        metapath = rpath / "pickled_meta"
        if not metapath.exists():
            return None  # not found
        mtime = metapath.stat().st_mtime
        if 0 < self.expiration_secs < time() - mtime:
            return None  # expired
        with self._open(metapath, "rb") as f:
            return cast("dict[str, Any]", pickle.load(f))  # noqa: S301


def parse_cachecontrol(header: bytes) -> dict[bytes, bytes | None]:
    """Parse Cache-Control header

    https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

    >>> parse_cachecontrol(b'public, max-age=3600') == {b'public': None,
    ...                                                 b'max-age': b'3600'}
    True
    >>> parse_cachecontrol(b'') == {}
    True
    """
    directives = {}
    for directive in header.split(b","):
        key, sep, val = directive.strip().partition(b"=")
        if key:
            directives[key.lower()] = val if sep else None
    return directives


def rfc1123_to_epoch(date_str: str | bytes | None) -> int | None:
    try:
        date_str = to_unicode(date_str, encoding="ascii")  # type: ignore[arg-type]
        return mktime_tz(parsedate_tz(date_str))  # type: ignore[arg-type]
    except Exception:
        return None
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/extensions/postprocessing.py
scrapy/extensions/postprocessing.py
""" Extension for processing data before they are exported to feeds. """ from bz2 import BZ2File from gzip import GzipFile from io import IOBase from lzma import LZMAFile from typing import IO, Any, BinaryIO, cast from scrapy.utils.misc import load_object class GzipPlugin: """ Compresses received data using `gzip <https://en.wikipedia.org/wiki/Gzip>`_. Accepted ``feed_options`` parameters: - `gzip_compresslevel` - `gzip_mtime` - `gzip_filename` See :py:class:`gzip.GzipFile` for more info about parameters. """ def __init__(self, file: BinaryIO, feed_options: dict[str, Any]) -> None: self.file = file self.feed_options = feed_options compress_level = self.feed_options.get("gzip_compresslevel", 9) mtime = self.feed_options.get("gzip_mtime") filename = self.feed_options.get("gzip_filename") self.gzipfile = GzipFile( fileobj=self.file, mode="wb", compresslevel=compress_level, mtime=mtime, filename=filename, ) def write(self, data: bytes) -> int: return self.gzipfile.write(data) def close(self) -> None: self.gzipfile.close() class Bz2Plugin: """ Compresses received data using `bz2 <https://en.wikipedia.org/wiki/Bzip2>`_. Accepted ``feed_options`` parameters: - `bz2_compresslevel` See :py:class:`bz2.BZ2File` for more info about parameters. """ def __init__(self, file: BinaryIO, feed_options: dict[str, Any]) -> None: self.file = file self.feed_options = feed_options compress_level = self.feed_options.get("bz2_compresslevel", 9) self.bz2file = BZ2File( filename=self.file, mode="wb", compresslevel=compress_level ) def write(self, data: bytes) -> int: return self.bz2file.write(data) def close(self) -> None: self.bz2file.close() class LZMAPlugin: """ Compresses received data using `lzma <https://en.wikipedia.org/wiki/Lempel–Ziv–Markov_chain_algorithm>`_. Accepted ``feed_options`` parameters: - `lzma_format` - `lzma_check` - `lzma_preset` - `lzma_filters` .. note:: ``lzma_filters`` cannot be used in pypy version 7.3.1 and older. 
See :py:class:`lzma.LZMAFile` for more info about parameters. """ def __init__(self, file: BinaryIO, feed_options: dict[str, Any]) -> None: self.file = file self.feed_options = feed_options format_ = self.feed_options.get("lzma_format") check = self.feed_options.get("lzma_check", -1) preset = self.feed_options.get("lzma_preset") filters = self.feed_options.get("lzma_filters") self.lzmafile = LZMAFile( filename=self.file, mode="wb", format=format_, check=check, preset=preset, filters=filters, ) def write(self, data: bytes) -> int: return self.lzmafile.write(data) def close(self) -> None: self.lzmafile.close() # io.IOBase is subclassed here, so that exporters can use the PostProcessingManager # instance as a file like writable object. This could be needed by some exporters # such as CsvItemExporter which wraps the feed storage with io.TextIOWrapper. class PostProcessingManager(IOBase): """ This will manage and use declared plugins to process data in a pipeline-ish way. :param plugins: all the declared plugins for the feed :type plugins: list :param file: final target file where the processed data will be written :type file: file like object """ def __init__( self, plugins: list[Any], file: IO[bytes], feed_options: dict[str, Any] ) -> None: self.plugins = self._load_plugins(plugins) self.file = file self.feed_options = feed_options self.head_plugin = self._get_head_plugin() def write(self, data: bytes) -> int: """ Uses all the declared plugins to process data first, then writes the processed data to target file. :param data: data passed to be written to target file :type data: bytes :return: returns number of bytes written :rtype: int """ return cast("int", self.head_plugin.write(data)) def tell(self) -> int: return self.file.tell() def close(self) -> None: """ Close the target file along with all the plugins. 
""" self.head_plugin.close() def writable(self) -> bool: return True def _load_plugins(self, plugins: list[Any]) -> list[Any]: return [load_object(plugin) for plugin in plugins] def _get_head_plugin(self) -> Any: prev = self.file for plugin in self.plugins[::-1]: prev = plugin(prev, self.feed_options) return prev
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
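As a sanity check of the plugin-chaining pattern used by `PostProcessingManager._get_head_plugin` in the record above, here is a minimal stdlib-only sketch; `MiniGzipPlugin` and `chain_plugins` are illustrative stand-ins, not part of Scrapy:

```python
import gzip
from io import BytesIO


class MiniGzipPlugin:
    """Minimal stand-in for scrapy's GzipPlugin: wraps a binary file object."""

    def __init__(self, file, feed_options):
        self.gzipfile = gzip.GzipFile(
            fileobj=file,
            mode="wb",
            compresslevel=feed_options.get("gzip_compresslevel", 9),
            mtime=feed_options.get("gzip_mtime"),
        )

    def write(self, data: bytes) -> int:
        return self.gzipfile.write(data)

    def close(self) -> None:
        self.gzipfile.close()


def chain_plugins(plugins, file, feed_options):
    # Same idea as PostProcessingManager._get_head_plugin: wrap the target
    # file with each plugin, innermost last, and return the head of the chain.
    prev = file
    for plugin in plugins[::-1]:
        prev = plugin(prev, feed_options)
    return prev


target = BytesIO()
head = chain_plugins([MiniGzipPlugin], target, {"gzip_compresslevel": 6})
head.write(b"item 1\nitem 2\n")
head.close()
decompressed = gzip.decompress(target.getvalue())
```

Because the chain is built from the target file outward, writes pass through every plugin before reaching the file, and closing the head flushes each layer in order.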
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/runspider.py
scrapy/commands/runspider.py
from __future__ import annotations

import sys
from importlib import import_module
from pathlib import Path
from typing import TYPE_CHECKING

from scrapy.commands import BaseRunSpiderCommand
from scrapy.exceptions import UsageError
from scrapy.spiderloader import DummySpiderLoader
from scrapy.utils.spider import iter_spider_classes

if TYPE_CHECKING:
    import argparse
    from os import PathLike
    from types import ModuleType


def _import_file(filepath: str | PathLike[str]) -> ModuleType:
    abspath = Path(filepath).resolve()
    if abspath.suffix not in (".py", ".pyw"):
        raise ValueError(f"Not a Python source file: {abspath}")
    dirname = str(abspath.parent)
    sys.path = [dirname, *sys.path]
    try:
        module = import_module(abspath.stem)
    finally:
        sys.path.pop(0)
    return module


class Command(BaseRunSpiderCommand):
    default_settings = {"SPIDER_LOADER_CLASS": DummySpiderLoader}

    def syntax(self) -> str:
        return "[options] <spider_file>"

    def short_desc(self) -> str:
        return "Run a self-contained spider (without creating a project)"

    def long_desc(self) -> str:
        return "Run the spider defined in the given file"

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        if len(args) != 1:
            raise UsageError
        filename = Path(args[0])
        if not filename.exists():
            raise UsageError(f"File not found: {filename}\n")
        try:
            module = _import_file(filename)
        except (ImportError, ValueError) as e:
            raise UsageError(f"Unable to load {str(filename)!r}: {e}\n")
        spclasses = list(iter_spider_classes(module))
        if not spclasses:
            raise UsageError(f"No spider found in file: {filename}\n")
        spidercls = spclasses.pop()

        assert self.crawler_process
        self.crawler_process.crawl(spidercls, **opts.spargs)
        self.crawler_process.start()

        if self.crawler_process.bootstrap_failed:
            self.exitcode = 1
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
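The `_import_file` helper in the runspider record above loads a spider module straight from a file path by temporarily prepending its directory to `sys.path`. A minimal sketch of the same approach, with an illustrative `import_file` name and a throwaway module written to a temp directory:

```python
import sys
import tempfile
from importlib import import_module
from pathlib import Path


def import_file(filepath):
    # Same approach as runspider's _import_file: put the file's directory at
    # the front of sys.path, import by module name (the file stem), restore.
    abspath = Path(filepath).resolve()
    if abspath.suffix not in (".py", ".pyw"):
        raise ValueError(f"Not a Python source file: {abspath}")
    sys.path = [str(abspath.parent), *sys.path]
    try:
        module = import_module(abspath.stem)
    finally:
        sys.path.pop(0)
    return module


with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "myspider_demo.py"
    src.write_text("ANSWER = 42\n")
    mod = import_file(src)
```

Restoring `sys.path` in a `finally` block keeps the import side effect scoped to the single call, which is why the real command can import arbitrary spider files without polluting the interpreter's search path.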
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/shell.py
scrapy/commands/shell.py
""" Scrapy Shell See documentation in docs/topics/shell.rst """ from __future__ import annotations from threading import Thread from typing import TYPE_CHECKING, Any from scrapy.commands import ScrapyCommand from scrapy.http import Request from scrapy.shell import Shell from scrapy.utils.defer import _schedule_coro from scrapy.utils.spider import DefaultSpider, spidercls_for_request from scrapy.utils.url import guess_scheme if TYPE_CHECKING: from argparse import ArgumentParser, Namespace from scrapy import Spider class Command(ScrapyCommand): default_settings = { "DUPEFILTER_CLASS": "scrapy.dupefilters.BaseDupeFilter", "KEEP_ALIVE": True, "LOGSTATS_INTERVAL": 0, } def syntax(self) -> str: return "[url|file]" def short_desc(self) -> str: return "Interactive scraping console" def long_desc(self) -> str: return ( "Interactive console for scraping the given url or file. " "Use ./file.html syntax or full path for local file." ) def add_options(self, parser: ArgumentParser) -> None: super().add_options(parser) parser.add_argument( "-c", dest="code", help="evaluate the code in the shell, print the result and exit", ) parser.add_argument("--spider", dest="spider", help="use this spider") parser.add_argument( "--no-redirect", dest="no_redirect", action="store_true", default=False, help="do not handle HTTP 3xx status codes and print response as-is", ) def update_vars(self, vars: dict[str, Any]) -> None: # noqa: A002 """You can use this function to update the Scrapy objects that will be available in the shell """ def run(self, args: list[str], opts: Namespace) -> None: url = args[0] if args else None if url: # first argument may be a local file url = guess_scheme(url) assert self.crawler_process spider_loader = self.crawler_process.spider_loader spidercls: type[Spider] = DefaultSpider if opts.spider: spidercls = spider_loader.load(opts.spider) elif url: spidercls = spidercls_for_request( spider_loader, Request(url), spidercls, log_multiple=True ) # The crawler is created this 
way since the Shell manually handles the # crawling engine, so the set up in the crawl method won't work crawler = self.crawler_process._create_crawler(spidercls) crawler._apply_settings() # The Shell class needs a persistent engine in the crawler crawler.engine = crawler._create_engine() _schedule_coro(crawler.engine.start_async(_start_request_processing=False)) self._start_crawler_thread() shell = Shell(crawler, update_vars=self.update_vars, code=opts.code) shell.start(url=url, redirect=not opts.no_redirect) def _start_crawler_thread(self) -> None: assert self.crawler_process t = Thread( target=self.crawler_process.start, kwargs={"stop_after_crawl": False, "install_signal_handlers": False}, ) t.daemon = True t.start()
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/edit.py
scrapy/commands/edit.py
import argparse
import os
import sys

from scrapy.commands import ScrapyCommand
from scrapy.exceptions import UsageError
from scrapy.spiderloader import get_spider_loader


class Command(ScrapyCommand):
    requires_project = True
    requires_crawler_process = False
    default_settings = {"LOG_ENABLED": False}

    def syntax(self) -> str:
        return "<spider>"

    def short_desc(self) -> str:
        return "Edit spider"

    def long_desc(self) -> str:
        return (
            "Edit a spider using the editor defined in the EDITOR environment"
            " variable or else the EDITOR setting"
        )

    def _err(self, msg: str) -> None:
        sys.stderr.write(msg + os.linesep)
        self.exitcode = 1

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        if len(args) != 1:
            raise UsageError

        assert self.settings is not None
        editor = self.settings["EDITOR"]
        spider_loader = get_spider_loader(self.settings)
        try:
            spidercls = spider_loader.load(args[0])
        except KeyError:
            self._err(f"Spider not found: {args[0]}")
            return

        sfile = sys.modules[spidercls.__module__].__file__
        assert sfile
        sfile = sfile.replace(".pyc", ".py")
        self.exitcode = os.system(f'{editor} "{sfile}"')  # noqa: S605
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/check.py
scrapy/commands/check.py
import argparse
import time
from collections import defaultdict
from unittest import TextTestResult as _TextTestResult
from unittest import TextTestRunner

from scrapy.commands import ScrapyCommand
from scrapy.contracts import ContractsManager
from scrapy.utils.conf import build_component_list
from scrapy.utils.misc import load_object, set_environ


class TextTestResult(_TextTestResult):
    def printSummary(self, start: float, stop: float) -> None:
        write = self.stream.write
        writeln = self.stream.writeln

        run = self.testsRun
        plural = "s" if run != 1 else ""

        writeln(self.separator2)
        writeln(f"Ran {run} contract{plural} in {stop - start:.3f}s")
        writeln()

        infos = []
        if not self.wasSuccessful():
            write("FAILED")
            failed, errored = map(len, (self.failures, self.errors))
            if failed:
                infos.append(f"failures={failed}")
            if errored:
                infos.append(f"errors={errored}")
        else:
            write("OK")

        if infos:
            writeln(f" ({', '.join(infos)})")
        else:
            write("\n")


class Command(ScrapyCommand):
    requires_project = True
    default_settings = {"LOG_ENABLED": False}

    def syntax(self) -> str:
        return "[options] <spider>"

    def short_desc(self) -> str:
        return "Check spider contracts"

    def add_options(self, parser: argparse.ArgumentParser) -> None:
        super().add_options(parser)
        parser.add_argument(
            "-l",
            "--list",
            dest="list",
            action="store_true",
            help="only list contracts, without checking them",
        )
        parser.add_argument(
            "-v",
            "--verbose",
            dest="verbose",
            default=False,
            action="store_true",
            help="print contract tests for all spiders",
        )

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        # load contracts
        assert self.settings is not None
        contracts = build_component_list(self.settings.getwithbase("SPIDER_CONTRACTS"))
        conman = ContractsManager(load_object(c) for c in contracts)
        runner = TextTestRunner(verbosity=2 if opts.verbose else 1)
        result = TextTestResult(runner.stream, runner.descriptions, runner.verbosity)

        # contract requests
        contract_reqs = defaultdict(list)

        assert self.crawler_process
        spider_loader = self.crawler_process.spider_loader

        async def start(self):
            for request in conman.from_spider(self, result):
                yield request

        with set_environ(SCRAPY_CHECK="true"):
            for spidername in args or spider_loader.list():
                spidercls = spider_loader.load(spidername)
                spidercls.start = start  # type: ignore[assignment,method-assign,return-value]
                tested_methods = conman.tested_methods_from_spidercls(spidercls)
                if opts.list:
                    for method in tested_methods:
                        contract_reqs[spidercls.name].append(method)
                elif tested_methods:
                    self.crawler_process.crawl(spidercls)

            # start checks
            if opts.list:
                for spider, methods in sorted(contract_reqs.items()):
                    if not methods and not opts.verbose:
                        continue
                    print(spider)
                    for method in sorted(methods):
                        print(f" * {method}")
            else:
                start_time = time.time()
                self.crawler_process.start()
                stop = time.time()

                result.printErrors()
                result.printSummary(start_time, stop)
                self.exitcode = int(not result.wasSuccessful())
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/view.py
scrapy/commands/view.py
import argparse
import logging

from scrapy.commands import fetch
from scrapy.http import Response, TextResponse
from scrapy.utils.response import open_in_browser

logger = logging.getLogger(__name__)


class Command(fetch.Command):
    def short_desc(self) -> str:
        return "Open URL in browser, as seen by Scrapy"

    def long_desc(self) -> str:
        return (
            "Fetch a URL using the Scrapy downloader and show its contents in a browser"
        )

    def add_options(self, parser: argparse.ArgumentParser) -> None:
        super().add_options(parser)
        parser.add_argument("--headers", help=argparse.SUPPRESS)

    def _print_response(self, response: Response, opts: argparse.Namespace) -> None:
        if not isinstance(response, TextResponse):
            logger.error("Cannot view a non-text response.")
            return
        open_in_browser(response)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/genspider.py
scrapy/commands/genspider.py
from __future__ import annotations

import os
import shutil
import string
from importlib import import_module
from pathlib import Path
from typing import TYPE_CHECKING, Any, cast
from urllib.parse import urlparse

import scrapy
from scrapy.commands import ScrapyCommand
from scrapy.exceptions import UsageError
from scrapy.spiderloader import get_spider_loader
from scrapy.utils.template import render_templatefile, string_camelcase

if TYPE_CHECKING:
    import argparse


def sanitize_module_name(module_name: str) -> str:
    """Sanitize the given module name, by replacing dashes and points
    with underscores and prefixing it with a letter if it doesn't start
    with one
    """
    module_name = module_name.replace("-", "_").replace(".", "_")
    if module_name[0] not in string.ascii_letters:
        module_name = "a" + module_name
    return module_name


def extract_domain(url: str) -> str:
    """Extract domain name from URL string"""
    o = urlparse(url)
    if o.scheme == "" and o.netloc == "":
        o = urlparse("//" + url.lstrip("/"))
    return o.netloc


def verify_url_scheme(url: str) -> str:
    """Check url for scheme and insert https if none found."""
    parsed = urlparse(url)
    if parsed.scheme == "" and parsed.netloc == "":
        parsed = urlparse("//" + url)._replace(scheme="https")
    return parsed.geturl()


class Command(ScrapyCommand):
    requires_crawler_process = False
    default_settings = {"LOG_ENABLED": False}

    def syntax(self) -> str:
        return "[options] <name> <domain>"

    def short_desc(self) -> str:
        return "Generate new spider using pre-defined templates"

    def add_options(self, parser: argparse.ArgumentParser) -> None:
        super().add_options(parser)
        parser.add_argument(
            "-l",
            "--list",
            dest="list",
            action="store_true",
            help="List available templates",
        )
        parser.add_argument(
            "-e",
            "--edit",
            dest="edit",
            action="store_true",
            help="Edit spider after creating it",
        )
        parser.add_argument(
            "-d",
            "--dump",
            dest="dump",
            metavar="TEMPLATE",
            help="Dump template to standard output",
        )
        parser.add_argument(
            "-t",
            "--template",
            dest="template",
            default="basic",
            help="Uses a custom template.",
        )
        parser.add_argument(
            "--force",
            dest="force",
            action="store_true",
            help="If the spider already exists, overwrite it with the template",
        )

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        assert self.settings is not None
        if opts.list:
            self._list_templates()
            return
        if opts.dump:
            template_file = self._find_template(opts.dump)
            if template_file:
                print(template_file.read_text(encoding="utf-8"))
            return
        if len(args) != 2:
            raise UsageError

        name, url = args[0:2]
        url = verify_url_scheme(url)
        module = sanitize_module_name(name)

        if self.settings.get("BOT_NAME") == module:
            print("Cannot create a spider with the same name as your project")
            return

        if not opts.force and self._spider_exists(name):
            return

        template_file = self._find_template(opts.template)
        if template_file:
            self._genspider(module, name, url, opts.template, template_file)
            if opts.edit:
                self.exitcode = os.system(f'scrapy edit "{name}"')  # noqa: S605

    def _generate_template_variables(
        self,
        module: str,
        name: str,
        url: str,
        template_name: str,
    ) -> dict[str, Any]:
        assert self.settings is not None
        capitalized_module = "".join(s.capitalize() for s in module.split("_"))
        return {
            "project_name": self.settings.get("BOT_NAME"),
            "ProjectName": string_camelcase(self.settings.get("BOT_NAME")),
            "module": module,
            "name": name,
            "url": url,
            "domain": extract_domain(url),
            "classname": f"{capitalized_module}Spider",
        }

    def _genspider(
        self,
        module: str,
        name: str,
        url: str,
        template_name: str,
        template_file: str | os.PathLike,
    ) -> None:
        """Generate the spider module, based on the given template"""
        assert self.settings is not None
        tvars = self._generate_template_variables(module, name, url, template_name)
        if self.settings.get("NEWSPIDER_MODULE"):
            spiders_module = import_module(self.settings["NEWSPIDER_MODULE"])
            assert spiders_module.__file__
            spiders_dir = Path(spiders_module.__file__).parent.resolve()
        else:
            spiders_module = None
            spiders_dir = Path()
        spider_file = f"{spiders_dir / module}.py"
        shutil.copyfile(template_file, spider_file)
        render_templatefile(spider_file, **tvars)
        print(
            f"Created spider {name!r} using template {template_name!r} ",
            end=("" if spiders_module else "\n"),
        )
        if spiders_module:
            print(f"in module:\n {spiders_module.__name__}.{module}")

    def _find_template(self, template: str) -> Path | None:
        template_file = Path(self.templates_dir, f"{template}.tmpl")
        if template_file.exists():
            return template_file
        print(f"Unable to find template: {template}\n")
        print('Use "scrapy genspider --list" to see all available templates.')
        return None

    def _list_templates(self) -> None:
        print("Available templates:")
        for file in sorted(Path(self.templates_dir).iterdir()):
            if file.suffix == ".tmpl":
                print(f" {file.stem}")

    def _spider_exists(self, name: str) -> bool:
        assert self.settings is not None
        if not self.settings.get("NEWSPIDER_MODULE"):
            # if run as a standalone command and file with same filename already exists
            path = Path(name + ".py")
            if path.exists():
                print(f"{path.resolve()} already exists")
                return True
            return False

        spider_loader = get_spider_loader(self.settings)

        try:
            spidercls = spider_loader.load(name)
        except KeyError:
            pass
        else:
            # if spider with same name exists
            print(f"Spider {name!r} already exists in module:")
            print(f" {spidercls.__module__}")
            return True

        # a file with the same name exists in the target directory
        spiders_module = import_module(self.settings["NEWSPIDER_MODULE"])
        spiders_dir = Path(cast("str", spiders_module.__file__)).parent
        spiders_dir_abs = spiders_dir.resolve()
        path = spiders_dir_abs / (name + ".py")
        if path.exists():
            print(f"{path} already exists")
            return True

        return False

    @property
    def templates_dir(self) -> str:
        assert self.settings is not None
        return str(
            Path(
                self.settings["TEMPLATES_DIR"] or Path(scrapy.__path__[0], "templates"),
                "spiders",
            )
        )
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
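The three module-level helpers in the genspider record above are pure functions of their string argument, so their behavior is easy to demonstrate in isolation. The sketch below copies them verbatim from the file and exercises each one:

```python
import string
from urllib.parse import urlparse


def sanitize_module_name(module_name: str) -> str:
    # Replace dashes and dots with underscores; prefix with a letter if needed.
    module_name = module_name.replace("-", "_").replace(".", "_")
    if module_name[0] not in string.ascii_letters:
        module_name = "a" + module_name
    return module_name


def extract_domain(url: str) -> str:
    # Fall back to a scheme-relative parse when no scheme/netloc is present.
    o = urlparse(url)
    if o.scheme == "" and o.netloc == "":
        o = urlparse("//" + url.lstrip("/"))
    return o.netloc


def verify_url_scheme(url: str) -> str:
    # Insert https when the URL has neither scheme nor netloc.
    parsed = urlparse(url)
    if parsed.scheme == "" and parsed.netloc == "":
        parsed = urlparse("//" + url)._replace(scheme="https")
    return parsed.geturl()


print(sanitize_module_name("2my-spider.bot"))  # → a2my_spider_bot
print(extract_domain("example.com/page"))      # → example.com
print(verify_url_scheme("example.com"))        # → https://example.com
```

The `//`-prefix trick matters because `urlparse("example.com")` treats the whole string as a path; prefixing `//` forces it to be parsed as a network location.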
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/crawl.py
scrapy/commands/crawl.py
from __future__ import annotations

from typing import TYPE_CHECKING

from scrapy.commands import BaseRunSpiderCommand
from scrapy.exceptions import UsageError

if TYPE_CHECKING:
    import argparse


class Command(BaseRunSpiderCommand):
    requires_project = True

    def syntax(self) -> str:
        return "[options] <spider>"

    def short_desc(self) -> str:
        return "Run a spider"

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        if len(args) < 1:
            raise UsageError
        if len(args) > 1:
            raise UsageError(
                "running 'scrapy crawl' with more than one spider is not supported"
            )
        spname = args[0]

        assert self.crawler_process
        self.crawler_process.crawl(spname, **opts.spargs)
        self.crawler_process.start()

        if self.crawler_process.bootstrap_failed:
            self.exitcode = 1
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/version.py
scrapy/commands/version.py
import argparse

import scrapy
from scrapy.commands import ScrapyCommand
from scrapy.utils.versions import get_versions


class Command(ScrapyCommand):
    requires_crawler_process = False
    default_settings = {"LOG_ENABLED": False}

    def syntax(self) -> str:
        return "[-v]"

    def short_desc(self) -> str:
        return "Print Scrapy version"

    def add_options(self, parser: argparse.ArgumentParser) -> None:
        super().add_options(parser)
        parser.add_argument(
            "--verbose",
            "-v",
            dest="verbose",
            action="store_true",
            help="also display twisted/python/platform info (useful for bug reports)",
        )

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        if opts.verbose:
            versions = get_versions()
            width = max(len(n) for (n, _) in versions)
            for name, version in versions:
                print(f"{name:<{width}} : {version}")
        else:
            print(f"Scrapy {scrapy.__version__}")
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/list.py
scrapy/commands/list.py
from __future__ import annotations

from typing import TYPE_CHECKING

from scrapy.commands import ScrapyCommand
from scrapy.spiderloader import get_spider_loader

if TYPE_CHECKING:
    import argparse


class Command(ScrapyCommand):
    requires_project = True
    requires_crawler_process = False
    default_settings = {"LOG_ENABLED": False}

    def short_desc(self) -> str:
        return "List available spiders"

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        assert self.settings is not None
        spider_loader = get_spider_loader(self.settings)
        for s in sorted(spider_loader.list()):
            print(s)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/settings.py
scrapy/commands/settings.py
import argparse
import json

from scrapy.commands import ScrapyCommand
from scrapy.settings import BaseSettings


class Command(ScrapyCommand):
    requires_crawler_process = False
    default_settings = {"LOG_ENABLED": False}

    def syntax(self) -> str:
        return "[options]"

    def short_desc(self) -> str:
        return "Get settings values"

    def add_options(self, parser: argparse.ArgumentParser) -> None:
        super().add_options(parser)
        parser.add_argument(
            "--get", dest="get", metavar="SETTING", help="print raw setting value"
        )
        parser.add_argument(
            "--getbool",
            dest="getbool",
            metavar="SETTING",
            help="print setting value, interpreted as a boolean",
        )
        parser.add_argument(
            "--getint",
            dest="getint",
            metavar="SETTING",
            help="print setting value, interpreted as an integer",
        )
        parser.add_argument(
            "--getfloat",
            dest="getfloat",
            metavar="SETTING",
            help="print setting value, interpreted as a float",
        )
        parser.add_argument(
            "--getlist",
            dest="getlist",
            metavar="SETTING",
            help="print setting value, interpreted as a list",
        )

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        assert self.settings is not None
        settings = self.settings
        if opts.get:
            s = settings.get(opts.get)
            if isinstance(s, BaseSettings):
                print(json.dumps(s.copy_to_dict()))
            else:
                print(s)
        elif opts.getbool:
            print(settings.getbool(opts.getbool))
        elif opts.getint:
            print(settings.getint(opts.getint))
        elif opts.getfloat:
            print(settings.getfloat(opts.getfloat))
        elif opts.getlist:
            print(settings.getlist(opts.getlist))
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/parse.py
scrapy/commands/parse.py
from __future__ import annotations import functools import inspect import json import logging from typing import TYPE_CHECKING, Any, TypeVar, overload from itemadapter import ItemAdapter from twisted.internet.defer import Deferred, maybeDeferred from w3lib.url import is_url from scrapy.commands import BaseRunSpiderCommand from scrapy.exceptions import UsageError from scrapy.http import Request, Response from scrapy.utils import display from scrapy.utils.asyncgen import collect_asyncgen from scrapy.utils.defer import _schedule_coro, aiter_errback, deferred_from_coro from scrapy.utils.log import failure_to_exc_info from scrapy.utils.misc import arg_to_iter from scrapy.utils.spider import spidercls_for_request if TYPE_CHECKING: import argparse from collections.abc import AsyncGenerator, AsyncIterator, Coroutine, Iterable from twisted.python.failure import Failure from scrapy.http.request import CallbackT from scrapy.spiders import Spider logger = logging.getLogger(__name__) _T = TypeVar("_T") class Command(BaseRunSpiderCommand): requires_project = True spider: Spider | None = None items: dict[int, list[Any]] = {} requests: dict[int, list[Request]] = {} spidercls: type[Spider] | None first_response = None def syntax(self) -> str: return "[options] <url>" def short_desc(self) -> str: return "Parse URL (using its spider) and print the results" def add_options(self, parser: argparse.ArgumentParser) -> None: super().add_options(parser) parser.add_argument( "--spider", dest="spider", default=None, help="use this spider without looking for one", ) parser.add_argument( "--pipelines", action="store_true", help="process items through pipelines" ) parser.add_argument( "--nolinks", dest="nolinks", action="store_true", help="don't show links to follow (extracted requests)", ) parser.add_argument( "--noitems", dest="noitems", action="store_true", help="don't show scraped items", ) parser.add_argument( "--nocolour", dest="nocolour", action="store_true", help="avoid using pygments to 
colorize the output", ) parser.add_argument( "-r", "--rules", dest="rules", action="store_true", help="use CrawlSpider rules to discover the callback", ) parser.add_argument( "-c", "--callback", dest="callback", help="use this callback for parsing, instead looking for a callback", ) parser.add_argument( "-m", "--meta", dest="meta", help="inject extra meta into the Request, it must be a valid raw json string", ) parser.add_argument( "--cbkwargs", dest="cbkwargs", help="inject extra callback kwargs into the Request, it must be a valid raw json string", ) parser.add_argument( "-d", "--depth", dest="depth", type=int, default=1, help="maximum depth for parsing requests [default: %(default)s]", ) parser.add_argument( "-v", "--verbose", dest="verbose", action="store_true", help="print each depth level one by one", ) @property def max_level(self) -> int: max_items, max_requests = 0, 0 if self.items: max_items = max(self.items) if self.requests: max_requests = max(self.requests) return max(max_items, max_requests) def handle_exception(self, _failure: Failure) -> None: logger.error( "An error is caught while iterating the async iterable", exc_info=failure_to_exc_info(_failure), ) @overload def iterate_spider_output( self, result: AsyncGenerator[_T] | Coroutine[Any, Any, _T] ) -> Deferred[_T]: ... @overload def iterate_spider_output(self, result: _T) -> Iterable[Any]: ... 
def iterate_spider_output(self, result: Any) -> Iterable[Any] | Deferred[Any]: if inspect.isasyncgen(result): d = deferred_from_coro( collect_asyncgen(aiter_errback(result, self.handle_exception)) ) d.addCallback(self.iterate_spider_output) return d if inspect.iscoroutine(result): d = deferred_from_coro(result) d.addCallback(self.iterate_spider_output) return d return arg_to_iter(deferred_from_coro(result)) def add_items(self, lvl: int, new_items: list[Any]) -> None: old_items = self.items.get(lvl, []) self.items[lvl] = old_items + new_items def add_requests(self, lvl: int, new_reqs: list[Request]) -> None: old_reqs = self.requests.get(lvl, []) self.requests[lvl] = old_reqs + new_reqs def print_items(self, lvl: int | None = None, colour: bool = True) -> None: if lvl is None: items = [item for lst in self.items.values() for item in lst] else: items = self.items.get(lvl, []) print("# Scraped Items ", "-" * 60) display.pprint([ItemAdapter(x).asdict() for x in items], colorize=colour) def print_requests(self, lvl: int | None = None, colour: bool = True) -> None: if lvl is not None: requests = self.requests.get(lvl, []) elif self.requests: requests = self.requests[max(self.requests)] else: requests = [] print("# Requests ", "-" * 65) display.pprint(requests, colorize=colour) def print_results(self, opts: argparse.Namespace) -> None: colour = not opts.nocolour if opts.verbose: for level in range(1, self.max_level + 1): print(f"\n>>> DEPTH LEVEL: {level} <<<") if not opts.noitems: self.print_items(level, colour) if not opts.nolinks: self.print_requests(level, colour) else: print(f"\n>>> STATUS DEPTH LEVEL {self.max_level} <<<") if not opts.noitems: self.print_items(colour=colour) if not opts.nolinks: self.print_requests(colour=colour) def _get_items_and_requests( self, spider_output: Iterable[Any], opts: argparse.Namespace, depth: int, spider: Spider, callback: CallbackT, ) -> tuple[list[Any], list[Request], argparse.Namespace, int, Spider, CallbackT]: items, requests = 
[], []
        for x in spider_output:
            if isinstance(x, Request):
                requests.append(x)
            else:
                items.append(x)
        return items, requests, opts, depth, spider, callback

    def run_callback(
        self,
        response: Response,
        callback: CallbackT,
        cb_kwargs: dict[str, Any] | None = None,
    ) -> Deferred[Any]:
        cb_kwargs = cb_kwargs or {}
        return maybeDeferred(
            self.iterate_spider_output, callback(response, **cb_kwargs)
        )

    def get_callback_from_rules(
        self, spider: Spider, response: Response
    ) -> CallbackT | str | None:
        if getattr(spider, "rules", None):
            for rule in spider.rules:  # type: ignore[attr-defined]
                if rule.link_extractor.matches(response.url):
                    return rule.callback or "parse"
        else:
            logger.error(
                "No CrawlSpider rules found in spider %(spider)r, "
                "please specify a callback to use for parsing",
                {"spider": spider.name},
            )
        return None

    def set_spidercls(self, url: str, opts: argparse.Namespace) -> None:
        assert self.crawler_process
        spider_loader = self.crawler_process.spider_loader
        if opts.spider:
            try:
                self.spidercls = spider_loader.load(opts.spider)
            except KeyError:
                logger.error(
                    "Unable to find spider: %(spider)s", {"spider": opts.spider}
                )
        else:
            self.spidercls = spidercls_for_request(spider_loader, Request(url))
            if not self.spidercls:
                logger.error("Unable to find spider for: %(url)s", {"url": url})

        async def start(spider: Spider) -> AsyncIterator[Any]:
            yield self.prepare_request(spider, Request(url), opts)

        if self.spidercls:
            self.spidercls.start = start  # type: ignore[assignment,method-assign]

    def start_parsing(self, url: str, opts: argparse.Namespace) -> None:
        assert self.crawler_process
        assert self.spidercls
        self.crawler_process.crawl(self.spidercls, **opts.spargs)
        self.pcrawler = next(iter(self.crawler_process.crawlers))
        self.crawler_process.start()

        if not self.first_response:
            logger.error("No response downloaded for: %(url)s", {"url": url})

    def scraped_data(
        self,
        args: tuple[
            list[Any], list[Request], argparse.Namespace, int, Spider, CallbackT
        ],
    ) -> list[Any]:
        items, requests, opts, depth, spider, callback = args
        if opts.pipelines:
            assert self.pcrawler.engine
            itemproc = self.pcrawler.engine.scraper.itemproc
            if hasattr(itemproc, "process_item_async"):
                for item in items:
                    _schedule_coro(itemproc.process_item_async(item))
            else:
                for item in items:
                    itemproc.process_item(item, spider)
        self.add_items(depth, items)
        self.add_requests(depth, requests)

        scraped_data = items if opts.output else []
        if depth < opts.depth:
            for req in requests:
                req.meta["_depth"] = depth + 1
                req.meta["_callback"] = req.callback
                req.callback = callback
            scraped_data += requests

        return scraped_data

    def _get_callback(
        self,
        *,
        spider: Spider,
        opts: argparse.Namespace,
        response: Response | None = None,
    ) -> CallbackT:
        cb: str | CallbackT | None = None
        if response:
            cb = response.meta["_callback"]
        if not cb:
            if opts.callback:
                cb = opts.callback
            elif response and opts.rules and self.first_response == response:
                cb = self.get_callback_from_rules(spider, response)
                if not cb:
                    raise ValueError(
                        f"Cannot find a rule that matches {response.url!r} in spider: "
                        f"{spider.name}"
                    )
            else:
                cb = "parse"

        if not callable(cb):
            assert cb is not None
            cb_method = getattr(spider, cb, None)
            if callable(cb_method):
                cb = cb_method
            else:
                raise ValueError(
                    f"Cannot find callback {cb!r} in spider: {spider.name}"
                )

        assert callable(cb)
        return cb

    def prepare_request(
        self, spider: Spider, request: Request, opts: argparse.Namespace
    ) -> Request:
        def callback(response: Response, **cb_kwargs: Any) -> Deferred[list[Any]]:
            # memorize first request
            if not self.first_response:
                self.first_response = response

            cb = self._get_callback(spider=spider, opts=opts, response=response)

            # parse items and requests
            depth: int = response.meta["_depth"]
            d = self.run_callback(response, cb, cb_kwargs)
            d.addCallback(self._get_items_and_requests, opts, depth, spider, callback)
            d.addCallback(self.scraped_data)
            return d

        # update request meta if any extra meta was passed through the --meta/-m opts.
        if opts.meta:
            request.meta.update(opts.meta)

        # update cb_kwargs if any extra values were passed through the --cbkwargs option.
        if opts.cbkwargs:
            request.cb_kwargs.update(opts.cbkwargs)

        request.meta["_depth"] = 1
        request.meta["_callback"] = request.callback
        if not request.callback and not opts.rules:
            cb = self._get_callback(spider=spider, opts=opts)
            functools.update_wrapper(callback, cb)
        request.callback = callback
        return request

    def process_options(self, args: list[str], opts: argparse.Namespace) -> None:
        super().process_options(args, opts)

        self.process_request_meta(opts)
        self.process_request_cb_kwargs(opts)

    def process_request_meta(self, opts: argparse.Namespace) -> None:
        if opts.meta:
            try:
                opts.meta = json.loads(opts.meta)
            except ValueError:
                raise UsageError(
                    "Invalid -m/--meta value, pass a valid json string to -m or --meta. "
                    'Example: --meta=\'{"foo" : "bar"}\'',
                    print_help=False,
                )

    def process_request_cb_kwargs(self, opts: argparse.Namespace) -> None:
        if opts.cbkwargs:
            try:
                opts.cbkwargs = json.loads(opts.cbkwargs)
            except ValueError:
                raise UsageError(
                    "Invalid --cbkwargs value, pass a valid json string to --cbkwargs. "
                    'Example: --cbkwargs=\'{"foo" : "bar"}\'',
                    print_help=False,
                )

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        # parse arguments
        if not len(args) == 1 or not is_url(args[0]):
            raise UsageError
        url = args[0]

        # prepare spidercls
        self.set_spidercls(url, opts)

        if self.spidercls and opts.depth > 0:
            self.start_parsing(url, opts)
            self.print_results(opts)
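The `--meta`/`--cbkwargs` handling above is plain `json.loads` on the CLI string, wrapped in a friendlier error. A minimal stdlib sketch of that pattern (the function name `parse_json_opt` is hypothetical, not Scrapy API):

```python
import json


def parse_json_opt(raw: str, opt_name: str) -> dict:
    """Parse a CLI option value as a JSON object, mirroring how the
    parse command treats -m/--meta and --cbkwargs (sketch only)."""
    try:
        value = json.loads(raw)
    except ValueError as exc:
        raise SystemExit(
            f"Invalid {opt_name} value, pass a valid JSON string"
        ) from exc
    if not isinstance(value, dict):
        raise SystemExit(f"{opt_name} must be a JSON object")
    return value


meta = parse_json_opt('{"foo": "bar"}', "--meta")
print(meta)
```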
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/bench.py
scrapy/commands/bench.py
from __future__ import annotations

import subprocess
import sys
import time
from typing import TYPE_CHECKING, Any
from urllib.parse import urlencode

import scrapy
from scrapy.commands import ScrapyCommand
from scrapy.http import Response, TextResponse
from scrapy.linkextractors import LinkExtractor
from scrapy.utils.test import get_testenv

if TYPE_CHECKING:
    import argparse
    from collections.abc import AsyncIterator


class Command(ScrapyCommand):
    default_settings = {
        "LOG_LEVEL": "INFO",
        "LOGSTATS_INTERVAL": 1,
        "CLOSESPIDER_TIMEOUT": 10,
    }

    def short_desc(self) -> str:
        return "Run quick benchmark test"

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        with _BenchServer():
            assert self.crawler_process
            self.crawler_process.crawl(_BenchSpider, total=100000)
            self.crawler_process.start()


class _BenchServer:
    def __enter__(self) -> None:
        pargs = [sys.executable, "-u", "-m", "scrapy.utils.benchserver"]
        self.proc = subprocess.Popen(  # noqa: S603
            pargs, stdout=subprocess.PIPE, env=get_testenv()
        )
        assert self.proc.stdout
        self.proc.stdout.readline()

    def __exit__(self, exc_type, exc_value, traceback) -> None:
        self.proc.kill()
        self.proc.wait()
        time.sleep(0.2)


class _BenchSpider(scrapy.Spider):
    """A spider that follows all links"""

    name = "follow"
    total = 10000
    show = 20
    baseurl = "http://localhost:8998"
    link_extractor = LinkExtractor()

    async def start(self) -> AsyncIterator[Any]:
        qargs = {"total": self.total, "show": self.show}
        url = f"{self.baseurl}?{urlencode(qargs, doseq=True)}"
        yield scrapy.Request(url, dont_filter=True)

    def parse(self, response: Response) -> Any:
        assert isinstance(response, TextResponse)
        for link in self.link_extractor.extract_links(response):
            yield scrapy.Request(link.url, callback=self.parse)
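`_BenchSpider.start` builds its first URL by query-encoding the spider's `total`/`show` attributes onto `baseurl`. The same construction with stdlib `urlencode` (values taken from the class defaults used by the bench command):

```python
from urllib.parse import urlencode

# How _BenchSpider assembles its starting URL from qargs.
qargs = {"total": 100000, "show": 20}
url = f"http://localhost:8998?{urlencode(qargs, doseq=True)}"
print(url)
```

`doseq=True` matters only if a value is itself a sequence; for scalar values it behaves like plain `urlencode`.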
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/__init__.py
scrapy/commands/__init__.py
""" Base class for Scrapy commands """ from __future__ import annotations import argparse import builtins import os from abc import ABC, abstractmethod from pathlib import Path from typing import TYPE_CHECKING, Any from twisted.python import failure from scrapy.exceptions import UsageError from scrapy.utils.conf import arglist_to_dict, feed_process_params_from_cli if TYPE_CHECKING: from collections.abc import Iterable from scrapy.crawler import Crawler, CrawlerProcessBase from scrapy.settings import Settings class ScrapyCommand(ABC): requires_project: bool = False requires_crawler_process: bool = True crawler_process: CrawlerProcessBase | None = None # set in scrapy.cmdline # default settings to be used for this command instead of global defaults default_settings: dict[str, Any] = {} exitcode: int = 0 def __init__(self) -> None: self.settings: Settings | None = None # set in scrapy.cmdline def set_crawler(self, crawler: Crawler) -> None: if hasattr(self, "_crawler"): raise RuntimeError("crawler already set") self._crawler: Crawler = crawler def syntax(self) -> str: """ Command syntax (preferably one-line). Do not include command name. """ return "" @abstractmethod def short_desc(self) -> str: """ A short description of the command """ return "" def long_desc(self) -> str: """A long description of the command. Return short description when not available. It cannot contain newlines since contents will be formatted by optparser which removes newlines and wraps text. """ return self.short_desc() def help(self) -> str: """An extensive help for the command. It will be shown when using the "help" command. It can contain newlines since no post-formatting will be applied to its contents. 
""" return self.long_desc() def add_options(self, parser: argparse.ArgumentParser) -> None: """ Populate option parse with options available for this command """ assert self.settings is not None group = parser.add_argument_group(title="Global Options") group.add_argument( "--logfile", metavar="FILE", help="log file. if omitted stderr will be used" ) group.add_argument( "-L", "--loglevel", metavar="LEVEL", default=None, help=f"log level (default: {self.settings['LOG_LEVEL']})", ) group.add_argument( "--nolog", action="store_true", help="disable logging completely" ) group.add_argument( "--profile", metavar="FILE", default=None, help="write python cProfile stats to FILE", ) group.add_argument("--pidfile", metavar="FILE", help="write process ID to FILE") group.add_argument( "-s", "--set", action="append", default=[], metavar="NAME=VALUE", help="set/override setting (may be repeated)", ) group.add_argument("--pdb", action="store_true", help="enable pdb on failure") def process_options(self, args: list[str], opts: argparse.Namespace) -> None: assert self.settings is not None try: self.settings.setdict(arglist_to_dict(opts.set), priority="cmdline") except ValueError: raise UsageError("Invalid -s value, use -s NAME=VALUE", print_help=False) if opts.logfile: self.settings.set("LOG_ENABLED", True, priority="cmdline") self.settings.set("LOG_FILE", opts.logfile, priority="cmdline") if opts.loglevel: self.settings.set("LOG_ENABLED", True, priority="cmdline") self.settings.set("LOG_LEVEL", opts.loglevel, priority="cmdline") if opts.nolog: self.settings.set("LOG_ENABLED", False, priority="cmdline") if opts.pidfile: Path(opts.pidfile).write_text( str(os.getpid()) + os.linesep, encoding="utf-8" ) if opts.pdb: failure.startDebugMode() @abstractmethod def run(self, args: list[str], opts: argparse.Namespace) -> None: """ Entry point for running commands """ raise NotImplementedError class BaseRunSpiderCommand(ScrapyCommand): """ Common class used to share functionality between the 
crawl, parse and runspider commands """ def add_options(self, parser: argparse.ArgumentParser) -> None: super().add_options(parser) parser.add_argument( "-a", dest="spargs", action="append", default=[], metavar="NAME=VALUE", help="set spider argument (may be repeated)", ) parser.add_argument( "-o", "--output", metavar="FILE", action="append", help="append scraped items to the end of FILE (use - for stdout)," " to define format set a colon at the end of the output URI (i.e. -o FILE:FORMAT)", ) parser.add_argument( "-O", "--overwrite-output", metavar="FILE", action="append", help="dump scraped items into FILE, overwriting any existing file," " to define format set a colon at the end of the output URI (i.e. -O FILE:FORMAT)", ) def process_options(self, args: list[str], opts: argparse.Namespace) -> None: super().process_options(args, opts) try: opts.spargs = arglist_to_dict(opts.spargs) except ValueError: raise UsageError("Invalid -a value, use -a NAME=VALUE", print_help=False) if opts.output or opts.overwrite_output: assert self.settings is not None feeds = feed_process_params_from_cli( self.settings, opts.output, overwrite_output=opts.overwrite_output, ) self.settings.set("FEEDS", feeds, priority="cmdline") class ScrapyHelpFormatter(argparse.HelpFormatter): """ Help Formatter for scrapy command line help messages. """ def __init__( self, prog: str, indent_increment: int = 2, max_help_position: int = 24, width: int | None = None, ): super().__init__( prog, indent_increment=indent_increment, max_help_position=max_help_position, width=width, ) def _join_parts(self, part_strings: Iterable[str]) -> str: # scrapy.commands.list shadows builtins.list parts = self.format_part_strings(builtins.list(part_strings)) return super()._join_parts(parts) def format_part_strings(self, part_strings: list[str]) -> list[str]: """ Underline and title case command line help message headers. 
""" if part_strings and part_strings[0].startswith("usage: "): part_strings[0] = "Usage\n=====\n " + part_strings[0][len("usage: ") :] headings = [ i for i in range(len(part_strings)) if part_strings[i].endswith(":\n") ] for index in headings[::-1]: char = "-" if "Global Options" in part_strings[index] else "=" part_strings[index] = part_strings[index][:-2].title() underline = "".join(["\n", (char * len(part_strings[index])), "\n"]) part_strings.insert(index + 1, underline) return part_strings
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/fetch.py
scrapy/commands/fetch.py
from __future__ import annotations

import sys
from argparse import Namespace  # noqa: TC003
from typing import TYPE_CHECKING

from w3lib.url import is_url

from scrapy.commands import ScrapyCommand
from scrapy.exceptions import UsageError
from scrapy.http import Request, Response
from scrapy.utils.datatypes import SequenceExclude
from scrapy.utils.spider import DefaultSpider, spidercls_for_request

if TYPE_CHECKING:
    from argparse import ArgumentParser

    from scrapy import Spider


class Command(ScrapyCommand):
    def syntax(self) -> str:
        return "[options] <url>"

    def short_desc(self) -> str:
        return "Fetch a URL using the Scrapy downloader"

    def long_desc(self) -> str:
        return (
            "Fetch a URL using the Scrapy downloader and print its content"
            " to stdout. You may want to use --nolog to disable logging"
        )

    def add_options(self, parser: ArgumentParser) -> None:
        super().add_options(parser)
        parser.add_argument("--spider", dest="spider", help="use this spider")
        parser.add_argument(
            "--headers",
            dest="headers",
            action="store_true",
            help="print response HTTP headers instead of body",
        )
        parser.add_argument(
            "--no-redirect",
            dest="no_redirect",
            action="store_true",
            default=False,
            help="do not handle HTTP 3xx status codes and print response as-is",
        )

    def _print_headers(self, headers: dict[bytes, list[bytes]], prefix: bytes) -> None:
        for key, values in headers.items():
            for value in values:
                self._print_bytes(prefix + b" " + key + b": " + value)

    def _print_response(self, response: Response, opts: Namespace) -> None:
        if opts.headers:
            assert response.request
            self._print_headers(response.request.headers, b">")
            print(">")
            self._print_headers(response.headers, b"<")
        else:
            self._print_bytes(response.body)

    def _print_bytes(self, bytes_: bytes) -> None:
        sys.stdout.buffer.write(bytes_ + b"\n")

    def run(self, args: list[str], opts: Namespace) -> None:
        if len(args) != 1 or not is_url(args[0]):
            raise UsageError
        request = Request(
            args[0],
            callback=self._print_response,
            cb_kwargs={"opts": opts},
            dont_filter=True,
        )
        # by default, let the framework handle redirects,
        # i.e. the command handles all codes except 3xx
        if not opts.no_redirect:
            request.meta["handle_httpstatus_list"] = SequenceExclude(range(300, 400))
        else:
            request.meta["handle_httpstatus_all"] = True

        spidercls: type[Spider] = DefaultSpider
        assert self.crawler_process
        spider_loader = self.crawler_process.spider_loader
        if opts.spider:
            spidercls = spider_loader.load(opts.spider)
        else:
            spidercls = spidercls_for_request(spider_loader, request, spidercls)

        async def start(self):
            yield request

        spidercls.start = start  # type: ignore[method-assign,attr-defined]
        self.crawler_process.crawl(spidercls)
        self.crawler_process.start()
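`handle_httpstatus_list` is set to `SequenceExclude(range(300, 400))`, i.e. "every status code except 3xx", which leaves redirects to the redirect middleware. A container honoring that contract can be sketched as follows (this is an illustration of the membership semantics, not the actual `scrapy.utils.datatypes.SequenceExclude` implementation):

```python
class SequenceExcludeSketch:
    """Membership test that *excludes* the wrapped sequence:
    `x in SequenceExcludeSketch(seq)` is True iff `x not in seq`."""

    def __init__(self, seq):
        self.seq = seq

    def __contains__(self, item) -> bool:
        return item not in self.seq


codes = SequenceExcludeSketch(range(300, 400))
print(200 in codes, 301 in codes, 404 in codes)
```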
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/commands/startproject.py
scrapy/commands/startproject.py
from __future__ import annotations

import re
import string
from importlib.util import find_spec
from pathlib import Path
from shutil import copy2, copystat, ignore_patterns, move
from stat import S_IWUSR as OWNER_WRITE_PERMISSION
from typing import TYPE_CHECKING

import scrapy
from scrapy.commands import ScrapyCommand
from scrapy.exceptions import UsageError
from scrapy.utils.template import render_templatefile, string_camelcase

if TYPE_CHECKING:
    import argparse

TEMPLATES_TO_RENDER: tuple[tuple[str, ...], ...] = (
    ("scrapy.cfg",),
    ("${project_name}", "settings.py.tmpl"),
    ("${project_name}", "items.py.tmpl"),
    ("${project_name}", "pipelines.py.tmpl"),
    ("${project_name}", "middlewares.py.tmpl"),
)

IGNORE = ignore_patterns("*.pyc", "__pycache__", ".svn")


def _make_writable(path: Path) -> None:
    current_permissions = path.stat().st_mode
    path.chmod(current_permissions | OWNER_WRITE_PERMISSION)


class Command(ScrapyCommand):
    requires_crawler_process = False
    default_settings = {"LOG_ENABLED": False}

    def syntax(self) -> str:
        return "<project_name> [project_dir]"

    def short_desc(self) -> str:
        return "Create new project"

    def _is_valid_name(self, project_name: str) -> bool:
        def _module_exists(module_name: str) -> bool:
            spec = find_spec(module_name)
            return spec is not None and spec.loader is not None

        if not re.search(r"^[_a-zA-Z]\w*$", project_name):
            print(
                "Error: Project names must begin with a letter and contain"
                " only\nletters, numbers and underscores"
            )
        elif _module_exists(project_name):
            print(f"Error: Module {project_name!r} already exists")
        else:
            return True
        return False

    def _copytree(self, src: Path, dst: Path) -> None:
        """
        Since the original function always creates the directory, to resolve
        the issue a new function had to be created. It's a simple copy and
        was reduced for this case.

        More info at:
        https://github.com/scrapy/scrapy/pull/2005
        """
        ignore = IGNORE
        names = [x.name for x in src.iterdir()]
        ignored_names = ignore(src, names)

        if not dst.exists():
            dst.mkdir(parents=True)

        for name in names:
            if name in ignored_names:
                continue

            srcname = src / name
            dstname = dst / name
            if srcname.is_dir():
                self._copytree(srcname, dstname)
            else:
                copy2(srcname, dstname)
                _make_writable(dstname)

        copystat(src, dst)
        _make_writable(dst)

    def run(self, args: list[str], opts: argparse.Namespace) -> None:
        if len(args) not in (1, 2):
            raise UsageError

        project_name = args[0]
        project_dir = Path(args[-1])

        if (project_dir / "scrapy.cfg").exists():
            self.exitcode = 1
            print(f"Error: scrapy.cfg already exists in {project_dir.resolve()}")
            return

        if not self._is_valid_name(project_name):
            self.exitcode = 1
            return

        self._copytree(Path(self.templates_dir), project_dir.resolve())
        move(project_dir / "module", project_dir / project_name)
        for paths in TEMPLATES_TO_RENDER:
            tplfile = Path(
                project_dir,
                *(
                    string.Template(s).substitute(project_name=project_name)
                    for s in paths
                ),
            )
            render_templatefile(
                tplfile,
                project_name=project_name,
                ProjectName=string_camelcase(project_name),
            )
        print(
            f"New Scrapy project '{project_name}', using template directory "
            f"'{self.templates_dir}', created in:"
        )
        print(f"    {project_dir.resolve()}\n")
        print("You can start your first spider with:")
        print(f"    cd {project_dir}")
        print("    scrapy genspider example example.com")

    @property
    def templates_dir(self) -> str:
        assert self.settings is not None
        return str(
            Path(
                self.settings["TEMPLATES_DIR"] or Path(scrapy.__path__[0], "templates"),
                "project",
            )
        )
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/pipelines/files.py
scrapy/pipelines/files.py
""" Files Pipeline See documentation in topics/media-pipeline.rst """ from __future__ import annotations import base64 import functools import hashlib import logging import mimetypes import time import warnings from collections import defaultdict from contextlib import suppress from ftplib import FTP from io import BytesIO from pathlib import Path from typing import IO, TYPE_CHECKING, Any, NoReturn, Protocol, TypedDict, cast from urllib.parse import urlparse from itemadapter import ItemAdapter from twisted.internet.defer import Deferred, maybeDeferred from twisted.internet.threads import deferToThread from scrapy.exceptions import IgnoreRequest, NotConfigured, ScrapyDeprecationWarning from scrapy.http import Request, Response from scrapy.http.request import NO_CALLBACK from scrapy.pipelines.media import FileInfo, FileInfoOrError, MediaPipeline from scrapy.utils.boto import is_botocore_available from scrapy.utils.datatypes import CaseInsensitiveDict from scrapy.utils.ftp import ftp_store_file from scrapy.utils.log import failure_to_exc_info from scrapy.utils.python import to_bytes from scrapy.utils.request import referer_str if TYPE_CHECKING: from os import PathLike from twisted.python.failure import Failure # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy.crawler import Crawler from scrapy.settings import BaseSettings logger = logging.getLogger(__name__) def _to_string(path: str | PathLike[str]) -> str: return str(path) # convert a Path object to string def _md5sum(file: IO[bytes]) -> str: """Calculate the md5 checksum of a file-like object without reading its whole content in memory. 
>>> from io import BytesIO >>> _md5sum(BytesIO(b'file content to hash')) '784406af91dd5a54fbb9c84c2236595a' """ m = hashlib.md5() # noqa: S324 while True: d = file.read(8096) if not d: break m.update(d) return m.hexdigest() class FileException(Exception): """General media error exception""" class StatInfo(TypedDict, total=False): checksum: str last_modified: float class FilesStoreProtocol(Protocol): def __init__(self, basedir: str): ... def persist_file( self, path: str, buf: BytesIO, info: MediaPipeline.SpiderInfo, meta: dict[str, Any] | None = None, headers: dict[str, str] | None = None, ) -> Deferred[Any] | None: ... def stat_file( self, path: str, info: MediaPipeline.SpiderInfo ) -> StatInfo | Deferred[StatInfo]: ... class FSFilesStore: def __init__(self, basedir: str | PathLike[str]): basedir = _to_string(basedir) if "://" in basedir: basedir = basedir.split("://", 1)[1] self.basedir: str = basedir self._mkdir(Path(self.basedir)) self.created_directories: defaultdict[MediaPipeline.SpiderInfo, set[str]] = ( defaultdict(set) ) def persist_file( self, path: str | PathLike[str], buf: BytesIO, info: MediaPipeline.SpiderInfo, meta: dict[str, Any] | None = None, headers: dict[str, str] | None = None, ) -> None: absolute_path = self._get_filesystem_path(path) self._mkdir(absolute_path.parent, info) absolute_path.write_bytes(buf.getvalue()) def stat_file( self, path: str | PathLike[str], info: MediaPipeline.SpiderInfo ) -> StatInfo: absolute_path = self._get_filesystem_path(path) try: last_modified = absolute_path.stat().st_mtime except OSError: return {} with absolute_path.open("rb") as f: checksum = _md5sum(f) return {"last_modified": last_modified, "checksum": checksum} def _get_filesystem_path(self, path: str | PathLike[str]) -> Path: path_comps = _to_string(path).split("/") return Path(self.basedir, *path_comps) def _mkdir( self, dirname: Path, domain: MediaPipeline.SpiderInfo | None = None ) -> None: seen: set[str] = self.created_directories[domain] if domain 
else set() if str(dirname) not in seen: if not dirname.exists(): dirname.mkdir(parents=True) seen.add(str(dirname)) class S3FilesStore: AWS_ACCESS_KEY_ID = None AWS_SECRET_ACCESS_KEY = None AWS_SESSION_TOKEN = None AWS_ENDPOINT_URL = None AWS_REGION_NAME = None AWS_USE_SSL = None AWS_VERIFY = None POLICY = "private" # Overridden from settings.FILES_STORE_S3_ACL in FilesPipeline.from_crawler() HEADERS = { "Cache-Control": "max-age=172800", } def __init__(self, uri: str): if not is_botocore_available(): raise NotConfigured("missing botocore library") import botocore.session # noqa: PLC0415 session = botocore.session.get_session() self.s3_client = session.create_client( "s3", aws_access_key_id=self.AWS_ACCESS_KEY_ID, aws_secret_access_key=self.AWS_SECRET_ACCESS_KEY, aws_session_token=self.AWS_SESSION_TOKEN, endpoint_url=self.AWS_ENDPOINT_URL, region_name=self.AWS_REGION_NAME, use_ssl=self.AWS_USE_SSL, verify=self.AWS_VERIFY, ) if not uri.startswith("s3://"): raise ValueError(f"Incorrect URI scheme in {uri}, expected 's3'") self.bucket, self.prefix = uri[5:].split("/", 1) def stat_file( self, path: str, info: MediaPipeline.SpiderInfo ) -> Deferred[StatInfo]: def _onsuccess(boto_key: dict[str, Any]) -> StatInfo: checksum = boto_key["ETag"].strip('"') last_modified = boto_key["LastModified"] modified_stamp = time.mktime(last_modified.timetuple()) return {"checksum": checksum, "last_modified": modified_stamp} return self._get_boto_key(path).addCallback(_onsuccess) def _get_boto_key(self, path: str) -> Deferred[dict[str, Any]]: key_name = f"{self.prefix}{path}" return cast( "Deferred[dict[str, Any]]", deferToThread( self.s3_client.head_object, # type: ignore[attr-defined] Bucket=self.bucket, Key=key_name, ), ) def persist_file( self, path: str, buf: BytesIO, info: MediaPipeline.SpiderInfo, meta: dict[str, Any] | None = None, headers: dict[str, str] | None = None, ) -> Deferred[Any]: """Upload file to S3 storage""" key_name = f"{self.prefix}{path}" buf.seek(0) extra = 
self._headers_to_botocore_kwargs(self.HEADERS) if headers: extra.update(self._headers_to_botocore_kwargs(headers)) return deferToThread( self.s3_client.put_object, # type: ignore[attr-defined] Bucket=self.bucket, Key=key_name, Body=buf, Metadata={k: str(v) for k, v in (meta or {}).items()}, ACL=self.POLICY, **extra, ) def _headers_to_botocore_kwargs(self, headers: dict[str, Any]) -> dict[str, Any]: """Convert headers to botocore keyword arguments.""" # This is required while we need to support both boto and botocore. mapping = CaseInsensitiveDict( { "Content-Type": "ContentType", "Cache-Control": "CacheControl", "Content-Disposition": "ContentDisposition", "Content-Encoding": "ContentEncoding", "Content-Language": "ContentLanguage", "Content-Length": "ContentLength", "Content-MD5": "ContentMD5", "Expires": "Expires", "X-Amz-Grant-Full-Control": "GrantFullControl", "X-Amz-Grant-Read": "GrantRead", "X-Amz-Grant-Read-ACP": "GrantReadACP", "X-Amz-Grant-Write-ACP": "GrantWriteACP", "X-Amz-Object-Lock-Legal-Hold": "ObjectLockLegalHoldStatus", "X-Amz-Object-Lock-Mode": "ObjectLockMode", "X-Amz-Object-Lock-Retain-Until-Date": "ObjectLockRetainUntilDate", "X-Amz-Request-Payer": "RequestPayer", "X-Amz-Server-Side-Encryption": "ServerSideEncryption", "X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id": "SSEKMSKeyId", "X-Amz-Server-Side-Encryption-Context": "SSEKMSEncryptionContext", "X-Amz-Server-Side-Encryption-Customer-Algorithm": "SSECustomerAlgorithm", "X-Amz-Server-Side-Encryption-Customer-Key": "SSECustomerKey", "X-Amz-Server-Side-Encryption-Customer-Key-Md5": "SSECustomerKeyMD5", "X-Amz-Storage-Class": "StorageClass", "X-Amz-Tagging": "Tagging", "X-Amz-Website-Redirect-Location": "WebsiteRedirectLocation", } ) extra: dict[str, Any] = {} for key, value in headers.items(): try: kwarg = mapping[key] except KeyError: raise TypeError(f'Header "{key}" is not supported by botocore') extra[kwarg] = value return extra class GCSFilesStore: GCS_PROJECT_ID = None CACHE_CONTROL = 
"max-age=172800" # The bucket's default object ACL will be applied to the object. # Overridden from settings.FILES_STORE_GCS_ACL in FilesPipeline.from_crawler(). POLICY = None def __init__(self, uri: str): from google.cloud import storage # noqa: PLC0415 client = storage.Client(project=self.GCS_PROJECT_ID) bucket, prefix = uri[5:].split("/", 1) self.bucket = client.bucket(bucket) self.prefix: str = prefix permissions = self.bucket.test_iam_permissions( ["storage.objects.get", "storage.objects.create"] ) if "storage.objects.get" not in permissions: logger.warning( "No 'storage.objects.get' permission for GSC bucket %(bucket)s. " "Checking if files are up to date will be impossible. Files will be downloaded every time.", {"bucket": bucket}, ) if "storage.objects.create" not in permissions: logger.error( "No 'storage.objects.create' permission for GSC bucket %(bucket)s. Saving files will be impossible!", {"bucket": bucket}, ) def stat_file( self, path: str, info: MediaPipeline.SpiderInfo ) -> Deferred[StatInfo]: def _onsuccess(blob) -> StatInfo: if blob: checksum = base64.b64decode(blob.md5_hash).hex() last_modified = time.mktime(blob.updated.timetuple()) return {"checksum": checksum, "last_modified": last_modified} return {} blob_path = self._get_blob_path(path) return cast( "Deferred[StatInfo]", deferToThread(self.bucket.get_blob, blob_path).addCallback(_onsuccess), ) def _get_content_type(self, headers: dict[str, str] | None) -> str: if headers and "Content-Type" in headers: return headers["Content-Type"] return "application/octet-stream" def _get_blob_path(self, path: str) -> str: return self.prefix + path def persist_file( self, path: str, buf: BytesIO, info: MediaPipeline.SpiderInfo, meta: dict[str, Any] | None = None, headers: dict[str, str] | None = None, ) -> Deferred[Any]: blob_path = self._get_blob_path(path) blob = self.bucket.blob(blob_path) blob.cache_control = self.CACHE_CONTROL blob.metadata = {k: str(v) for k, v in (meta or {}).items()} return 
deferToThread( blob.upload_from_string, data=buf.getvalue(), content_type=self._get_content_type(headers), predefined_acl=self.POLICY, ) class FTPFilesStore: FTP_USERNAME: str | None = None FTP_PASSWORD: str | None = None USE_ACTIVE_MODE: bool | None = None def __init__(self, uri: str): if not uri.startswith("ftp://"): raise ValueError(f"Incorrect URI scheme in {uri}, expected 'ftp'") u = urlparse(uri) assert u.port assert u.hostname self.port: int = u.port self.host: str = u.hostname self.port = int(u.port or 21) assert self.FTP_USERNAME assert self.FTP_PASSWORD self.username: str = u.username or self.FTP_USERNAME self.password: str = u.password or self.FTP_PASSWORD self.basedir: str = u.path.rstrip("/") def persist_file( self, path: str, buf: BytesIO, info: MediaPipeline.SpiderInfo, meta: dict[str, Any] | None = None, headers: dict[str, str] | None = None, ) -> Deferred[Any]: path = f"{self.basedir}/{path}" return deferToThread( ftp_store_file, path=path, file=buf, host=self.host, port=self.port, username=self.username, password=self.password, use_active_mode=self.USE_ACTIVE_MODE, ) def stat_file( self, path: str, info: MediaPipeline.SpiderInfo ) -> Deferred[StatInfo]: def _stat_file(path: str) -> StatInfo: try: ftp = FTP() ftp.connect(self.host, self.port) ftp.login(self.username, self.password) if self.USE_ACTIVE_MODE: ftp.set_pasv(False) file_path = f"{self.basedir}/{path}" last_modified = float(ftp.voidcmd(f"MDTM {file_path}")[4:].strip()) m = hashlib.md5() # noqa: S324 ftp.retrbinary(f"RETR {file_path}", m.update) return {"last_modified": last_modified, "checksum": m.hexdigest()} # The file doesn't exist except Exception: return {} return cast("Deferred[StatInfo]", deferToThread(_stat_file, path)) class FilesPipeline(MediaPipeline): """Abstract pipeline that implement the file downloading This pipeline tries to minimize network transfers and file processing, doing stat of the files and determining if file is new, up-to-date or expired. 
``new`` files are those that pipeline never processed and needs to be downloaded from supplier site the first time. ``uptodate`` files are the ones that the pipeline processed and are still valid files. ``expired`` files are those that pipeline already processed but the last modification was made long time ago, so a reprocessing is recommended to refresh it in case of change. """ MEDIA_NAME: str = "file" EXPIRES: int = 90 STORE_SCHEMES: dict[str, type[FilesStoreProtocol]] = { "": FSFilesStore, "file": FSFilesStore, "s3": S3FilesStore, "gs": GCSFilesStore, "ftp": FTPFilesStore, } DEFAULT_FILES_URLS_FIELD: str = "file_urls" DEFAULT_FILES_RESULT_FIELD: str = "files" def __init__( self, store_uri: str | PathLike[str], download_func: None = None, *, crawler: Crawler, ): if download_func is not None: # pragma: no cover warnings.warn( "The download_func argument of FilesPipeline.__init__() is ignored" " and will be removed in a future Scrapy version.", category=ScrapyDeprecationWarning, stacklevel=2, ) if not (store_uri and (store_uri := _to_string(store_uri))): from scrapy.pipelines.images import ImagesPipeline # noqa: PLC0415 setting_name = ( "IMAGES_STORE" if isinstance(self, ImagesPipeline) else "FILES_STORE" ) raise NotConfigured( f"{setting_name} setting must be set to a valid path (not empty) " f"to enable {self.__class__.__name__}." 
) settings = crawler.settings cls_name = "FilesPipeline" self.store: FilesStoreProtocol = self._get_store(store_uri) resolve = functools.partial( self._key_for_pipe, base_class_name=cls_name, settings=settings ) self.expires: int = settings.getint(resolve("FILES_EXPIRES"), self.EXPIRES) if not hasattr(self, "FILES_URLS_FIELD"): self.FILES_URLS_FIELD = self.DEFAULT_FILES_URLS_FIELD if not hasattr(self, "FILES_RESULT_FIELD"): self.FILES_RESULT_FIELD = self.DEFAULT_FILES_RESULT_FIELD self.files_urls_field: str = settings.get( resolve("FILES_URLS_FIELD"), self.FILES_URLS_FIELD ) self.files_result_field: str = settings.get( resolve("FILES_RESULT_FIELD"), self.FILES_RESULT_FIELD ) super().__init__(crawler=crawler) @classmethod def from_crawler(cls, crawler: Crawler) -> Self: settings = crawler.settings cls._update_stores(settings) store_uri = settings["FILES_STORE"] return cls(store_uri, crawler=crawler) @classmethod def _update_stores(cls, settings: BaseSettings) -> None: s3store: type[S3FilesStore] = cast( "type[S3FilesStore]", cls.STORE_SCHEMES["s3"] ) s3store.AWS_ACCESS_KEY_ID = settings["AWS_ACCESS_KEY_ID"] s3store.AWS_SECRET_ACCESS_KEY = settings["AWS_SECRET_ACCESS_KEY"] s3store.AWS_SESSION_TOKEN = settings["AWS_SESSION_TOKEN"] s3store.AWS_ENDPOINT_URL = settings["AWS_ENDPOINT_URL"] s3store.AWS_REGION_NAME = settings["AWS_REGION_NAME"] s3store.AWS_USE_SSL = settings["AWS_USE_SSL"] s3store.AWS_VERIFY = settings["AWS_VERIFY"] s3store.POLICY = settings["FILES_STORE_S3_ACL"] gcs_store: type[GCSFilesStore] = cast( "type[GCSFilesStore]", cls.STORE_SCHEMES["gs"] ) gcs_store.GCS_PROJECT_ID = settings["GCS_PROJECT_ID"] gcs_store.POLICY = settings["FILES_STORE_GCS_ACL"] or None ftp_store: type[FTPFilesStore] = cast( "type[FTPFilesStore]", cls.STORE_SCHEMES["ftp"] ) ftp_store.FTP_USERNAME = settings["FTP_USER"] ftp_store.FTP_PASSWORD = settings["FTP_PASSWORD"] ftp_store.USE_ACTIVE_MODE = settings.getbool("FEED_STORAGE_FTP_ACTIVE") def _get_store(self, uri: str) -> 
FilesStoreProtocol: # to support win32 paths like: C:\\some\dir scheme = "file" if Path(uri).is_absolute() else urlparse(uri).scheme store_cls = self.STORE_SCHEMES[scheme] return store_cls(uri) def media_to_download( self, request: Request, info: MediaPipeline.SpiderInfo, *, item: Any = None ) -> Deferred[FileInfo | None] | None: def _onsuccess(result: StatInfo) -> FileInfo | None: if not result: return None # returning None force download last_modified = result.get("last_modified", None) if not last_modified: return None # returning None force download age_seconds = time.time() - last_modified age_days = age_seconds / 60 / 60 / 24 if age_days > self.expires: return None # returning None force download referer = referer_str(request) logger.debug( "File (uptodate): Downloaded %(medianame)s from %(request)s " "referred in <%(referer)s>", {"medianame": self.MEDIA_NAME, "request": request, "referer": referer}, extra={"spider": info.spider}, ) self.inc_stats("uptodate") checksum = result.get("checksum", None) return { "url": request.url, "path": path, "checksum": checksum, "status": "uptodate", } path = self.file_path(request, info=info, item=item) # maybeDeferred() overloads don't seem to support a Union[_T, Deferred[_T]] return type dfd: Deferred[StatInfo] = maybeDeferred(self.store.stat_file, path, info) # type: ignore[call-overload] dfd2: Deferred[FileInfo | None] = dfd.addCallback(_onsuccess) dfd2.addErrback(lambda _: None) dfd2.addErrback( lambda f: logger.error( self.__class__.__name__ + ".store.stat_file", exc_info=failure_to_exc_info(f), extra={"spider": info.spider}, ) ) return dfd2 def media_failed( self, failure: Failure, request: Request, info: MediaPipeline.SpiderInfo ) -> NoReturn: if not isinstance(failure.value, IgnoreRequest): referer = referer_str(request) logger.warning( "File (unknown-error): Error downloading %(medianame)s from " "%(request)s referred in <%(referer)s>: %(exception)s", { "medianame": self.MEDIA_NAME, "request": request, "referer": 
referer, "exception": failure.value, }, extra={"spider": info.spider}, ) raise FileException def media_downloaded( self, response: Response, request: Request, info: MediaPipeline.SpiderInfo, *, item: Any = None, ) -> FileInfo: referer = referer_str(request) if response.status != 200: logger.warning( "File (code: %(status)s): Error downloading file from " "%(request)s referred in <%(referer)s>", {"status": response.status, "request": request, "referer": referer}, extra={"spider": info.spider}, ) raise FileException("download-error") if not response.body: logger.warning( "File (empty-content): Empty file from %(request)s referred " "in <%(referer)s>: no-content", {"request": request, "referer": referer}, extra={"spider": info.spider}, ) raise FileException("empty-content") status = "cached" if "cached" in response.flags else "downloaded" logger.debug( "File (%(status)s): Downloaded file from %(request)s referred in " "<%(referer)s>", {"status": status, "request": request, "referer": referer}, extra={"spider": info.spider}, ) self.inc_stats(status) try: path = self.file_path(request, response=response, info=info, item=item) checksum = self.file_downloaded(response, request, info, item=item) except FileException as exc: logger.warning( "File (error): Error processing file from %(request)s " "referred in <%(referer)s>: %(errormsg)s", {"request": request, "referer": referer, "errormsg": str(exc)}, extra={"spider": info.spider}, exc_info=True, ) raise except Exception as exc: logger.error( "File (unknown-error): Error processing file from %(request)s " "referred in <%(referer)s>", {"request": request, "referer": referer}, exc_info=True, extra={"spider": info.spider}, ) raise FileException(str(exc)) return { "url": request.url, "path": path, "checksum": checksum, "status": status, } def inc_stats(self, status: str) -> None: assert self.crawler.stats self.crawler.stats.inc_value("file_count") self.crawler.stats.inc_value(f"file_status_count/{status}") # Overridable 
Interface def get_media_requests( self, item: Any, info: MediaPipeline.SpiderInfo ) -> list[Request]: urls = ItemAdapter(item).get(self.files_urls_field, []) if not isinstance(urls, list): raise TypeError( f"{self.files_urls_field} must be a list of URLs, got {type(urls).__name__}. " ) return [Request(u, callback=NO_CALLBACK) for u in urls] def file_downloaded( self, response: Response, request: Request, info: MediaPipeline.SpiderInfo, *, item: Any = None, ) -> str: path = self.file_path(request, response=response, info=info, item=item) buf = BytesIO(response.body) checksum = _md5sum(buf) buf.seek(0) self.store.persist_file(path, buf, info) return checksum def item_completed( self, results: list[FileInfoOrError], item: Any, info: MediaPipeline.SpiderInfo ) -> Any: with suppress(KeyError): ItemAdapter(item)[self.files_result_field] = [x for ok, x in results if ok] return item def file_path( self, request: Request, response: Response | None = None, info: MediaPipeline.SpiderInfo | None = None, *, item: Any = None, ) -> str: media_guid = hashlib.sha1(to_bytes(request.url)).hexdigest() # noqa: S324 media_ext = Path(request.url).suffix # Handles empty and wild extensions by trying to guess the # mime type then extension or default to empty string otherwise if media_ext not in mimetypes.types_map: media_ext = "" media_type = mimetypes.guess_type(request.url)[0] if media_type: media_ext = cast("str", mimetypes.guess_extension(media_type)) return f"full/{media_guid}{media_ext}"
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
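The `file_path` scheme in `FilesPipeline` above names every download `full/<sha1-of-url><ext>`, guessing the extension from the MIME type when the URL suffix is unknown. A simplified, dependency-free sketch of that naming rule (not the pipeline's exact method, which also accepts `Request`, `Response` and item arguments; the `or ""` guards the `None` that `guess_extension` can return):

```python
import hashlib
import mimetypes
from pathlib import Path

def file_path(url: str) -> str:
    # Stable name: SHA-1 hex digest of the URL bytes
    media_guid = hashlib.sha1(url.encode()).hexdigest()
    media_ext = Path(url).suffix
    # Empty or wild extensions: fall back to guessing from the MIME type
    if media_ext not in mimetypes.types_map:
        media_ext = ""
        media_type = mimetypes.guess_type(url)[0]
        if media_type:
            media_ext = mimetypes.guess_extension(media_type) or ""
    return f"full/{media_guid}{media_ext}"
```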
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/pipelines/__init__.py
scrapy/pipelines/__init__.py
""" Item pipeline See documentation in docs/item-pipeline.rst """ from __future__ import annotations import asyncio import warnings from typing import TYPE_CHECKING, Any, cast from twisted.internet.defer import Deferred, DeferredList from scrapy.exceptions import ScrapyDeprecationWarning from scrapy.middleware import MiddlewareManager from scrapy.utils.asyncio import is_asyncio_available from scrapy.utils.conf import build_component_list from scrapy.utils.defer import deferred_from_coro, ensure_awaitable, maybeDeferred_coro from scrapy.utils.python import global_object_name if TYPE_CHECKING: from collections.abc import Awaitable, Callable, Coroutine, Iterable from twisted.python.failure import Failure from scrapy import Spider from scrapy.settings import Settings class ItemPipelineManager(MiddlewareManager): component_name = "item pipeline" @classmethod def _get_mwlist_from_settings(cls, settings: Settings) -> list[Any]: return build_component_list(settings.getwithbase("ITEM_PIPELINES")) def _add_middleware(self, pipe: Any) -> None: if hasattr(pipe, "open_spider"): self.methods["open_spider"].append(pipe.open_spider) self._check_mw_method_spider_arg(pipe.open_spider) if hasattr(pipe, "close_spider"): self.methods["close_spider"].appendleft(pipe.close_spider) self._check_mw_method_spider_arg(pipe.close_spider) if hasattr(pipe, "process_item"): self.methods["process_item"].append(pipe.process_item) self._check_mw_method_spider_arg(pipe.process_item) def process_item(self, item: Any, spider: Spider) -> Deferred[Any]: warnings.warn( f"{global_object_name(type(self))}.process_item() is deprecated, use process_item_async() instead.", category=ScrapyDeprecationWarning, stacklevel=2, ) self._set_compat_spider(spider) return deferred_from_coro(self.process_item_async(item)) async def process_item_async(self, item: Any) -> Any: return await self._process_chain( "process_item", item, add_spider=True, warn_deferred=True ) def _process_parallel_dfd(self, methodname: str) -> 
Deferred[list[None]]: methods = cast( "Iterable[Callable[..., Coroutine[Any, Any, None] | Deferred[None] | None]]", self.methods[methodname], ) def get_dfd( method: Callable[..., Coroutine[Any, Any, None] | Deferred[None] | None], ) -> Deferred[None]: if method in self._mw_methods_requiring_spider: return maybeDeferred_coro(method, self._spider) return maybeDeferred_coro(method) dfds = [get_dfd(m) for m in methods] d: Deferred[list[tuple[bool, None]]] = DeferredList( dfds, fireOnOneErrback=True, consumeErrors=True ) d2: Deferred[list[None]] = d.addCallback(lambda r: [x[1] for x in r]) def eb(failure: Failure) -> Failure: return failure.value.subFailure d2.addErrback(eb) return d2 async def _process_parallel_asyncio(self, methodname: str) -> list[None]: methods = cast( "Iterable[Callable[..., Coroutine[Any, Any, None] | Deferred[None] | None]]", self.methods[methodname], ) if not methods: return [] def get_awaitable( method: Callable[..., Coroutine[Any, Any, None] | Deferred[None] | None], ) -> Awaitable[None]: if method in self._mw_methods_requiring_spider: result = method(self._spider) else: result = method() return ensure_awaitable(result, _warn=global_object_name(method)) awaitables = [get_awaitable(m) for m in methods] await asyncio.gather(*awaitables) return [None for _ in methods] async def _process_parallel(self, methodname: str) -> list[None]: if is_asyncio_available(): return await self._process_parallel_asyncio(methodname) return await self._process_parallel_dfd(methodname) def open_spider(self, spider: Spider) -> Deferred[list[None]]: warnings.warn( f"{global_object_name(type(self))}.open_spider() is deprecated, use open_spider_async() instead.", category=ScrapyDeprecationWarning, stacklevel=2, ) self._set_compat_spider(spider) return deferred_from_coro(self._process_parallel("open_spider")) async def open_spider_async(self) -> None: await self._process_parallel("open_spider") def close_spider(self, spider: Spider) -> Deferred[list[None]]: warnings.warn( 
f"{global_object_name(type(self))}.close_spider() is deprecated, use close_spider_async() instead.", category=ScrapyDeprecationWarning, stacklevel=2, ) self._set_compat_spider(spider) return deferred_from_coro(self._process_parallel("close_spider")) async def close_spider_async(self) -> None: await self._process_parallel("close_spider")
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
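`ItemPipelineManager._add_middleware` above discovers `open_spider`/`close_spider`/`process_item` via `hasattr`, so a pipeline component only needs the methods it cares about. A minimal illustrative component (hypothetical names; `ValueError` stands in for `scrapy.exceptions.DropItem` to keep the sketch dependency-free):

```python
class PricePipeline:
    """Hypothetical pipeline: normalize a 'price' field, reject items without one."""

    def process_item(self, item, spider=None):
        if not item.get("price"):
            # a real pipeline would raise scrapy.exceptions.DropItem here
            raise ValueError("missing price")
        item["price"] = round(float(item["price"]), 2)
        return item
```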
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/pipelines/images.py
scrapy/pipelines/images.py
""" Images Pipeline See documentation in topics/media-pipeline.rst """ from __future__ import annotations import functools import hashlib import warnings from contextlib import suppress from io import BytesIO from typing import TYPE_CHECKING, Any from itemadapter import ItemAdapter from scrapy.exceptions import NotConfigured, ScrapyDeprecationWarning from scrapy.http import Request, Response from scrapy.http.request import NO_CALLBACK from scrapy.pipelines.files import FileException, FilesPipeline, _md5sum from scrapy.utils.python import to_bytes if TYPE_CHECKING: from collections.abc import Iterable from os import PathLike from PIL import Image # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy.crawler import Crawler from scrapy.pipelines.media import FileInfoOrError, MediaPipeline class ImageException(FileException): """General image error exception""" class ImagesPipeline(FilesPipeline): """Abstract pipeline that implement the image thumbnail generation logic""" MEDIA_NAME: str = "image" # Uppercase attributes kept for backward compatibility with code that subclasses # ImagesPipeline. They may be overridden by settings. 
MIN_WIDTH: int = 0 MIN_HEIGHT: int = 0 EXPIRES: int = 90 THUMBS: dict[str, tuple[int, int]] = {} DEFAULT_IMAGES_URLS_FIELD = "image_urls" DEFAULT_IMAGES_RESULT_FIELD = "images" def __init__( self, store_uri: str | PathLike[str], download_func: None = None, *, crawler: Crawler, ): if download_func is not None: # pragma: no cover warnings.warn( "The download_func argument of ImagesPipeline.__init__() is ignored" " and will be removed in a future Scrapy version.", category=ScrapyDeprecationWarning, stacklevel=2, ) try: from PIL import Image, ImageOps # noqa: PLC0415 self._Image = Image self._ImageOps = ImageOps except ImportError: raise NotConfigured( "ImagesPipeline requires installing Pillow 8.3.2 or later" ) super().__init__(store_uri, crawler=crawler) settings = crawler.settings resolve = functools.partial( self._key_for_pipe, base_class_name="ImagesPipeline", settings=settings, ) self.expires: int = settings.getint(resolve("IMAGES_EXPIRES"), self.EXPIRES) if not hasattr(self, "IMAGES_RESULT_FIELD"): self.IMAGES_RESULT_FIELD: str = self.DEFAULT_IMAGES_RESULT_FIELD if not hasattr(self, "IMAGES_URLS_FIELD"): self.IMAGES_URLS_FIELD: str = self.DEFAULT_IMAGES_URLS_FIELD self.images_urls_field: str = settings.get( resolve("IMAGES_URLS_FIELD"), self.IMAGES_URLS_FIELD ) self.images_result_field: str = settings.get( resolve("IMAGES_RESULT_FIELD"), self.IMAGES_RESULT_FIELD ) self.min_width: int = settings.getint( resolve("IMAGES_MIN_WIDTH"), self.MIN_WIDTH ) self.min_height: int = settings.getint( resolve("IMAGES_MIN_HEIGHT"), self.MIN_HEIGHT ) self.thumbs: dict[str, tuple[int, int]] = settings.get( resolve("IMAGES_THUMBS"), self.THUMBS ) @classmethod def from_crawler(cls, crawler: Crawler) -> Self: settings = crawler.settings cls._update_stores(settings) store_uri = settings["IMAGES_STORE"] return cls(store_uri, crawler=crawler) def file_downloaded( self, response: Response, request: Request, info: MediaPipeline.SpiderInfo, *, item: Any = None, ) -> str: return 
self.image_downloaded(response, request, info, item=item) def image_downloaded( self, response: Response, request: Request, info: MediaPipeline.SpiderInfo, *, item: Any = None, ) -> str: checksum: str | None = None for path, image, buf in self.get_images(response, request, info, item=item): if checksum is None: buf.seek(0) checksum = _md5sum(buf) width, height = image.size self.store.persist_file( path, buf, info, meta={"width": width, "height": height}, headers={"Content-Type": "image/jpeg"}, ) assert checksum is not None return checksum def get_images( self, response: Response, request: Request, info: MediaPipeline.SpiderInfo, *, item: Any = None, ) -> Iterable[tuple[str, Image.Image, BytesIO]]: path = self.file_path(request, response=response, info=info, item=item) orig_image = self._Image.open(BytesIO(response.body)) transposed_image = self._ImageOps.exif_transpose(orig_image) width, height = transposed_image.size if width < self.min_width or height < self.min_height: raise ImageException( "Image too small " f"({width}x{height} < " f"{self.min_width}x{self.min_height})" ) image, buf = self.convert_image( transposed_image, response_body=BytesIO(response.body) ) yield path, image, buf for thumb_id, size in self.thumbs.items(): thumb_path = self.thumb_path( request, thumb_id, response=response, info=info, item=item ) thumb_image, thumb_buf = self.convert_image(image, size, response_body=buf) yield thumb_path, thumb_image, thumb_buf def convert_image( self, image: Image.Image, size: tuple[int, int] | None = None, *, response_body: BytesIO, ) -> tuple[Image.Image, BytesIO]: if image.format in ("PNG", "WEBP") and image.mode == "RGBA": background = self._Image.new("RGBA", image.size, (255, 255, 255)) background.paste(image, image) image = background.convert("RGB") elif image.mode == "P": image = image.convert("RGBA") background = self._Image.new("RGBA", image.size, (255, 255, 255)) background.paste(image, image) image = background.convert("RGB") elif image.mode != 
"RGB": image = image.convert("RGB") if size: image = image.copy() try: # Image.Resampling.LANCZOS was added in Pillow 9.1.0 # remove this try except block, # when updating the minimum requirements for Pillow. resampling_filter = self._Image.Resampling.LANCZOS except AttributeError: resampling_filter = self._Image.ANTIALIAS # type: ignore[attr-defined] image.thumbnail(size, resampling_filter) elif image.format == "JPEG": return image, response_body buf = BytesIO() image.save(buf, "JPEG") return image, buf def get_media_requests( self, item: Any, info: MediaPipeline.SpiderInfo ) -> list[Request]: urls = ItemAdapter(item).get(self.images_urls_field, []) if not isinstance(urls, list): raise TypeError( f"{self.images_urls_field} must be a list of URLs, got {type(urls).__name__}. " ) return [Request(u, callback=NO_CALLBACK) for u in urls] def item_completed( self, results: list[FileInfoOrError], item: Any, info: MediaPipeline.SpiderInfo ) -> Any: with suppress(KeyError): ItemAdapter(item)[self.images_result_field] = [x for ok, x in results if ok] return item def file_path( self, request: Request, response: Response | None = None, info: MediaPipeline.SpiderInfo | None = None, *, item: Any = None, ) -> str: image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest() # noqa: S324 return f"full/{image_guid}.jpg" def thumb_path( self, request: Request, thumb_id: str, response: Response | None = None, info: MediaPipeline.SpiderInfo | None = None, *, item: Any = None, ) -> str: thumb_guid = hashlib.sha1(to_bytes(request.url)).hexdigest() # noqa: S324 return f"thumbs/{thumb_id}/{thumb_guid}.jpg"
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/pipelines/media.py
scrapy/pipelines/media.py
from __future__ import annotations import asyncio import functools import logging import warnings from abc import ABC, abstractmethod from collections import defaultdict from typing import TYPE_CHECKING, Any, Literal, TypeAlias, TypedDict, cast from twisted import version as twisted_version from twisted.internet.defer import Deferred, DeferredList from twisted.python.failure import Failure from twisted.python.versions import Version from scrapy.exceptions import ScrapyDeprecationWarning from scrapy.http.request import NO_CALLBACK, Request from scrapy.utils.asyncio import call_later, is_asyncio_available from scrapy.utils.datatypes import SequenceExclude from scrapy.utils.decorators import _warn_spider_arg from scrapy.utils.defer import ( _DEFER_DELAY, _defer_sleep_async, deferred_from_coro, ensure_awaitable, maybe_deferred_to_future, ) from scrapy.utils.log import failure_to_exc_info from scrapy.utils.misc import arg_to_iter from scrapy.utils.python import global_object_name if TYPE_CHECKING: # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy import Spider from scrapy.crawler import Crawler from scrapy.http import Response from scrapy.settings import Settings from scrapy.utils.request import RequestFingerprinterProtocol class FileInfo(TypedDict): url: str path: str checksum: str | None status: str FileInfoOrError: TypeAlias = ( tuple[Literal[True], FileInfo] | tuple[Literal[False], Failure] ) logger = logging.getLogger(__name__) class MediaPipeline(ABC): LOG_FAILED_RESULTS: bool = True class SpiderInfo: def __init__(self, spider: Spider): self.spider: Spider = spider self.downloading: set[bytes] = set() self.downloaded: dict[bytes, FileInfo | Failure] = {} self.waiting: defaultdict[bytes, list[Deferred[FileInfo]]] = defaultdict( list ) def __init__( self, download_func: None = None, *, crawler: Crawler, ): if download_func is not None: # pragma: no cover warnings.warn( "The download_func argument of MediaPipeline.__init__() is 
ignored" " and will be removed in a future Scrapy version.", category=ScrapyDeprecationWarning, stacklevel=2, ) self.crawler: Crawler = crawler assert crawler.request_fingerprinter self._fingerprinter: RequestFingerprinterProtocol = ( crawler.request_fingerprinter ) settings = crawler.settings resolve = functools.partial( self._key_for_pipe, base_class_name="MediaPipeline", settings=settings ) self.allow_redirects: bool = settings.getbool( resolve("MEDIA_ALLOW_REDIRECTS"), False ) self._handle_statuses(self.allow_redirects) def _handle_statuses(self, allow_redirects: bool) -> None: self.handle_httpstatus_list = None if allow_redirects: self.handle_httpstatus_list = SequenceExclude(range(300, 400)) def _key_for_pipe( self, key: str, base_class_name: str | None = None, settings: Settings | None = None, ) -> str: class_name = self.__class__.__name__ formatted_key = f"{class_name.upper()}_{key}" if ( not base_class_name or class_name == base_class_name or (settings and not settings.get(formatted_key)) ): return key return formatted_key @classmethod def from_crawler(cls, crawler: Crawler) -> Self: return cls(crawler=crawler) @_warn_spider_arg def open_spider(self, spider: Spider | None = None) -> None: assert self.crawler.spider self.spiderinfo = self.SpiderInfo(self.crawler.spider) @_warn_spider_arg async def process_item(self, item: Any, spider: Spider | None = None) -> Any: info = self.spiderinfo requests = arg_to_iter(self.get_media_requests(item, info)) coros = [self._process_request(r, info, item) for r in requests] results: list[FileInfoOrError] = [] if coros: if is_asyncio_available(): results_asyncio = await asyncio.gather(*coros, return_exceptions=True) for res in results_asyncio: if isinstance(res, BaseException): results.append((False, Failure(res))) else: results.append((True, res)) else: results = await cast( "Deferred[list[FileInfoOrError]]", DeferredList( (deferred_from_coro(coro) for coro in coros), consumeErrors=True ), ) return 
self.item_completed(results, item, info) async def _process_request( self, request: Request, info: SpiderInfo, item: Any ) -> FileInfo: fp = self._fingerprinter.fingerprint(request) eb = request.errback request.callback = NO_CALLBACK request.errback = None # Return cached result if request was already seen if fp in info.downloaded: await _defer_sleep_async() cached_result = info.downloaded[fp] if isinstance(cached_result, Failure): if eb: return eb(cached_result) cached_result.raiseException() return cached_result # Otherwise, wait for result wad: Deferred[FileInfo] = Deferred() if eb: wad.addErrback(eb) info.waiting[fp].append(wad) # Check if request is downloading right now to avoid doing it twice if fp in info.downloading: return await maybe_deferred_to_future(wad) # Download request checking media_to_download hook output first info.downloading.add(fp) await _defer_sleep_async() result: FileInfo | Failure try: file_info: FileInfo | None = await ensure_awaitable( self.media_to_download(request, info, item=item) ) if file_info: # got a result without downloading result = file_info else: # download the result result = await self._check_media_to_download(request, info, item=item) except Exception: result = Failure() logger.exception(result) self._cache_result_and_execute_waiters(result, fp, info) return await maybe_deferred_to_future(wad) # it must return wad at last def _modify_media_request(self, request: Request) -> None: if self.handle_httpstatus_list: request.meta["handle_httpstatus_list"] = self.handle_httpstatus_list else: request.meta["handle_httpstatus_all"] = True async def _check_media_to_download( self, request: Request, info: SpiderInfo, item: Any ) -> FileInfo: try: self._modify_media_request(request) assert self.crawler.engine response = await self.crawler.engine.download_async(request) return self.media_downloaded(response, request, info, item=item) except Exception: failure = self.media_failed(Failure(), request, info) if isinstance(failure, 
Failure): warnings.warn( f"{global_object_name(self.media_failed)} returned a Failure instance." f" This is deprecated, please raise an exception instead, e.g. via failure.raiseException().", category=ScrapyDeprecationWarning, stacklevel=2, ) failure.raiseException() def _cache_result_and_execute_waiters( self, result: FileInfo | Failure, fp: bytes, info: SpiderInfo ) -> None: if isinstance(result, Failure): # minimize cached information for failure result.cleanFailure() result.frames = [] if twisted_version < Version("twisted", 24, 10, 0): result.stack = [] # type: ignore[method-assign] # This code fixes a memory leak by avoiding to keep references to # the Request and Response objects on the Media Pipeline cache. # # What happens when the media_downloaded callback raises an # exception, for example a FileException('download-error') when # the Response status code is not 200 OK, is that the original # StopIteration exception (which in turn contains the failed # Response and by extension, the original Request) gets encapsulated # within the FileException context. # # Originally, Scrapy was using twisted.internet.defer.returnValue # inside functions decorated with twisted.internet.defer.inlineCallbacks, # encapsulating the returned Response in a _DefGen_Return exception # instead of a StopIteration. 
# # To avoid keeping references to the Response and therefore Request # objects on the Media Pipeline cache, we should wipe the context of # the encapsulated exception when it is a StopIteration instance context = getattr(result.value, "__context__", None) if isinstance(context, StopIteration): result.value.__context__ = None info.downloading.remove(fp) info.downloaded[fp] = result # cache result for wad in info.waiting.pop(fp): if isinstance(result, Failure): call_later(_DEFER_DELAY, wad.errback, result) else: call_later(_DEFER_DELAY, wad.callback, result) # Overridable Interface @abstractmethod def media_to_download( self, request: Request, info: SpiderInfo, *, item: Any = None ) -> Deferred[FileInfo | None] | None: """Check request before starting download""" raise NotImplementedError @abstractmethod def get_media_requests(self, item: Any, info: SpiderInfo) -> list[Request]: """Returns the media requests to download""" raise NotImplementedError @abstractmethod def media_downloaded( self, response: Response, request: Request, info: SpiderInfo, *, item: Any = None, ) -> FileInfo: """Handler for success downloads""" raise NotImplementedError @abstractmethod def media_failed( self, failure: Failure, request: Request, info: SpiderInfo ) -> Failure: """Handler for failed downloads""" raise NotImplementedError def item_completed( self, results: list[FileInfoOrError], item: Any, info: SpiderInfo ) -> Any: """Called per item when all media requests has been processed""" if self.LOG_FAILED_RESULTS: for ok, value in results: if not ok: assert isinstance(value, Failure) logger.error( "%(class)s found errors processing %(item)s", {"class": self.__class__.__name__, "item": item}, exc_info=failure_to_exc_info(value), extra={"spider": info.spider}, ) return item @abstractmethod def file_path( self, request: Request, response: Response | None = None, info: SpiderInfo | None = None, *, item: Any = None, ) -> str: """Returns the path where downloaded media should be stored""" 
raise NotImplementedError
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
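`_key_for_pipe` in `MediaPipeline` lets a subclass override settings with class-prefixed keys (e.g. `MYPIPE_FILES_EXPIRES`) while falling back to the bare key. Its resolution rule, extracted as a pure function — slightly simplified (a base class name is assumed given, and settings are modeled as a plain dict):

```python
def key_for_pipe(class_name: str, key: str, base_class_name: str, settings: dict) -> str:
    """Prefer a '<CLASSNAME>_<key>' setting for subclasses; fall back to the bare key."""
    formatted_key = f"{class_name.upper()}_{key}"
    # The base class, or a subclass without a prefixed override, uses the plain key
    if class_name == base_class_name or not settings.get(formatted_key):
        return key
    return formatted_key
```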
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/gz.py
scrapy/utils/gz.py
from __future__ import annotations

import struct
from gzip import GzipFile
from io import BytesIO
from typing import TYPE_CHECKING

from ._compression import _CHUNK_SIZE, _check_max_size

if TYPE_CHECKING:
    from scrapy.http import Response


def gunzip(data: bytes, *, max_size: int = 0) -> bytes:
    """Gunzip the given data and return as much data as possible.

    This is resilient to CRC checksum errors.
    """
    f = GzipFile(fileobj=BytesIO(data))
    output_stream = BytesIO()
    chunk = b"."
    decompressed_size = 0
    while chunk:
        try:
            chunk = f.read1(_CHUNK_SIZE)
        except (OSError, EOFError, struct.error):
            # complete only if there is some data, otherwise re-raise
            # see issue 87 about catching struct.error
            # some pages are quite small so output_stream is empty
            if output_stream.getbuffer().nbytes > 0:
                break
            raise
        decompressed_size += len(chunk)
        _check_max_size(decompressed_size, max_size)
        output_stream.write(chunk)
    return output_stream.getvalue()


def gzip_magic_number(response: Response) -> bool:
    return response.body[:3] == b"\x1f\x8b\x08"
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
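The salvage loop in `gunzip` above works the same with only the standard library. A self-contained rendition (fixed chunk size, no max-size guard) that keeps whatever decompressed before a CRC or truncation error:

```python
import struct
from gzip import GzipFile, compress
from io import BytesIO

def gunzip_partial(data: bytes, chunk_size: int = 64 * 1024) -> bytes:
    f = GzipFile(fileobj=BytesIO(data))
    out = BytesIO()
    while True:
        try:
            chunk = f.read1(chunk_size)
        except (OSError, EOFError, struct.error):
            # keep what we already have, unless nothing decompressed at all
            if out.getbuffer().nbytes > 0:
                break
            raise
        if not chunk:
            break
        out.write(chunk)
    return out.getvalue()
```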
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/testproc.py
scrapy/utils/testproc.py
from __future__ import annotations

import os
import sys
import warnings
from typing import TYPE_CHECKING, cast

from twisted.internet.defer import Deferred
from twisted.internet.protocol import ProcessProtocol

from scrapy.exceptions import ScrapyDeprecationWarning

if TYPE_CHECKING:
    from collections.abc import Iterable

    from twisted.internet.error import ProcessTerminated
    from twisted.python.failure import Failure

warnings.warn(
    "The scrapy.utils.testproc module is deprecated.",
    ScrapyDeprecationWarning,
)


class ProcessTest:
    command: str | None = None
    prefix = [sys.executable, "-m", "scrapy.cmdline"]
    cwd = os.getcwd()  # trial chdirs to temp dir  # noqa: PTH109

    def execute(
        self,
        args: Iterable[str],
        check_code: bool = True,
        settings: str | None = None,
    ) -> Deferred[TestProcessProtocol]:
        from twisted.internet import reactor

        env = os.environ.copy()
        if settings is not None:
            env["SCRAPY_SETTINGS_MODULE"] = settings
        assert self.command
        cmd = [*self.prefix, self.command, *args]
        pp = TestProcessProtocol()
        pp.deferred.addCallback(self._process_finished, cmd, check_code)
        reactor.spawnProcess(pp, cmd[0], cmd, env=env, path=self.cwd)
        return pp.deferred

    def _process_finished(
        self, pp: TestProcessProtocol, cmd: list[str], check_code: bool
    ) -> tuple[int, bytes, bytes]:
        if pp.exitcode and check_code:
            msg = f"process {cmd} exit with code {pp.exitcode}"
            msg += f"\n>>> stdout <<<\n{pp.out.decode()}"
            msg += "\n"
            msg += f"\n>>> stderr <<<\n{pp.err.decode()}"
            raise RuntimeError(msg)
        return cast("int", pp.exitcode), pp.out, pp.err


class TestProcessProtocol(ProcessProtocol):
    def __init__(self) -> None:
        self.deferred: Deferred[TestProcessProtocol] = Deferred()
        self.out: bytes = b""
        self.err: bytes = b""
        self.exitcode: int | None = None

    def outReceived(self, data: bytes) -> None:
        self.out += data

    def errReceived(self, data: bytes) -> None:
        self.err += data

    def processEnded(self, status: Failure) -> None:
        self.exitcode = cast("ProcessTerminated", status.value).exitCode
        self.deferred.callback(self)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
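`ProcessTest`/`TestProcessProtocol` above collect stdout, stderr and the exit code through Twisted's reactor. Outside Twisted, the same contract is a short blocking stand-in with `subprocess` (a sketch, not the deprecated module's API):

```python
import subprocess
import sys

def run_python(args: list[str]) -> tuple[int, bytes, bytes]:
    """Run the interpreter with args; return (exitcode, stdout, stderr)."""
    proc = subprocess.run([sys.executable, *args], capture_output=True)
    return proc.returncode, proc.stdout, proc.stderr
```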
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/sitemap.py
scrapy/utils/sitemap.py
""" Module for processing Sitemaps. Note: The main purpose of this module is to provide support for the SitemapSpider, its API is subject to change without notice. """ from __future__ import annotations from typing import TYPE_CHECKING, Any from urllib.parse import urljoin import lxml.etree if TYPE_CHECKING: from collections.abc import Iterable, Iterator class Sitemap: """Class to parse Sitemap (type=urlset) and Sitemap Index (type=sitemapindex) files""" def __init__(self, xmltext: str | bytes): xmlp = lxml.etree.XMLParser( recover=True, remove_comments=True, resolve_entities=False ) self._root = lxml.etree.fromstring(xmltext, parser=xmlp) rt = self._root.tag assert isinstance(rt, str) self.type = rt.split("}", 1)[1] if "}" in rt else rt def __iter__(self) -> Iterator[dict[str, Any]]: for elem in self._root.getchildren(): d: dict[str, Any] = {} for el in elem.getchildren(): tag = el.tag assert isinstance(tag, str) name = tag.split("}", 1)[1] if "}" in tag else tag if name == "link": if "href" in el.attrib: d.setdefault("alternate", []).append(el.get("href")) else: d[name] = el.text.strip() if el.text else "" if "loc" in d: yield d def sitemap_urls_from_robots( robots_text: str, base_url: str | None = None ) -> Iterable[str]: """Return an iterator over all sitemap urls contained in the given robots.txt file """ for line in robots_text.splitlines(): if line.lstrip().lower().startswith("sitemap:"): url = line.split(":", 1)[1].strip() yield urljoin(base_url or "", url)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
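`sitemap_urls_from_robots` is self-contained enough to run on its own once `urljoin` is imported; the same logic, with a hypothetical robots.txt as input:

```python
from urllib.parse import urljoin

def sitemap_urls_from_robots(robots_text: str, base_url: str = ""):
    # 'Sitemap:' lines are matched case-insensitively and may be indented;
    # relative URLs are resolved against base_url
    for line in robots_text.splitlines():
        if line.lstrip().lower().startswith("sitemap:"):
            yield urljoin(base_url, line.split(":", 1)[1].strip())
```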
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/spider.py
scrapy/utils/spider.py
from __future__ import annotations

import inspect
import logging
from typing import TYPE_CHECKING, Any, TypeVar, overload

from scrapy.spiders import Spider
from scrapy.utils.defer import deferred_from_coro
from scrapy.utils.misc import arg_to_iter

if TYPE_CHECKING:
    from collections.abc import AsyncGenerator, Iterable
    from types import CoroutineType, ModuleType

    from twisted.internet.defer import Deferred

    from scrapy import Request
    from scrapy.spiderloader import SpiderLoaderProtocol

logger = logging.getLogger(__name__)

_T = TypeVar("_T")


# https://stackoverflow.com/questions/60222982
@overload
def iterate_spider_output(result: AsyncGenerator[_T]) -> AsyncGenerator[_T]: ...  # type: ignore[overload-overlap]


@overload
def iterate_spider_output(result: CoroutineType[Any, Any, _T]) -> Deferred[_T]: ...


@overload
def iterate_spider_output(result: _T) -> Iterable[Any]: ...


def iterate_spider_output(
    result: Any,
) -> Iterable[Any] | AsyncGenerator[_T] | Deferred[_T]:
    if inspect.isasyncgen(result):
        return result
    if inspect.iscoroutine(result):
        d = deferred_from_coro(result)
        d.addCallback(iterate_spider_output)
        return d
    return arg_to_iter(deferred_from_coro(result))


def iter_spider_classes(module: ModuleType) -> Iterable[type[Spider]]:
    """Return an iterator over all spider classes defined in the given module
    that can be instantiated (i.e. which have name)
    """
    for obj in vars(module).values():
        if (
            inspect.isclass(obj)
            and issubclass(obj, Spider)
            and obj.__module__ == module.__name__
            and getattr(obj, "name", None)
        ):
            yield obj


@overload
def spidercls_for_request(
    spider_loader: SpiderLoaderProtocol,
    request: Request,
    default_spidercls: type[Spider],
    log_none: bool = ...,
    log_multiple: bool = ...,
) -> type[Spider]: ...


@overload
def spidercls_for_request(
    spider_loader: SpiderLoaderProtocol,
    request: Request,
    default_spidercls: None,
    log_none: bool = ...,
    log_multiple: bool = ...,
) -> type[Spider] | None: ...


@overload
def spidercls_for_request(
    spider_loader: SpiderLoaderProtocol,
    request: Request,
    *,
    log_none: bool = ...,
    log_multiple: bool = ...,
) -> type[Spider] | None: ...


def spidercls_for_request(
    spider_loader: SpiderLoaderProtocol,
    request: Request,
    default_spidercls: type[Spider] | None = None,
    log_none: bool = False,
    log_multiple: bool = False,
) -> type[Spider] | None:
    """Return a spider class that handles the given Request.

    This will look for the spiders that can handle the given request (using
    the spider loader) and return a Spider class if (and only if) there is
    only one Spider able to handle the Request.

    If multiple spiders (or no spider) are found, it will return the
    default_spidercls passed. It can optionally log if multiple or no spiders
    are found.
    """
    snames = spider_loader.find_by_request(request)
    if len(snames) == 1:
        return spider_loader.load(snames[0])

    if len(snames) > 1 and log_multiple:
        logger.error(
            "More than one spider can handle: %(request)s - %(snames)s",
            {"request": request, "snames": ", ".join(snames)},
        )

    if len(snames) == 0 and log_none:
        logger.error(
            "Unable to find spider that handles: %(request)s", {"request": request}
        )

    return default_spidercls


class DefaultSpider(Spider):
    name = "default"
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
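The `iter_spider_classes` helper in the record above scans a module namespace for concrete, named spider subclasses. A minimal self-contained sketch of the same pattern — using a stand-in `Spider` base class and a throwaway module, since scrapy itself is not imported here:

```python
import inspect
import types


class Spider:  # stand-in for scrapy.spiders.Spider
    name = None


def iter_spider_classes(module):
    """Yield Spider subclasses defined in *module* that have a non-empty name."""
    for obj in vars(module).values():
        if (
            inspect.isclass(obj)
            and issubclass(obj, Spider)
            and obj.__module__ == module.__name__
            and getattr(obj, "name", None)
        ):
            yield obj


# Build a throwaway module holding two spiders, one of which has no name
# and is therefore skipped (it cannot be instantiated by the crawler).
mod = types.ModuleType("example_spiders")
mod.Named = type("Named", (Spider,), {"name": "named", "__module__": mod.__name__})
mod.Unnamed = type("Unnamed", (Spider,), {"__module__": mod.__name__})

print([cls.name for cls in iter_spider_classes(mod)])  # ['named']
```

The `obj.__module__ == module.__name__` check is what keeps re-exported or imported spider classes from being reported twice.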
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/ssl.py
scrapy/utils/ssl.py
from __future__ import annotations

from typing import TYPE_CHECKING, Any

import OpenSSL._util as pyOpenSSLutil
import OpenSSL.SSL
import OpenSSL.version

from scrapy.utils.python import to_unicode

if TYPE_CHECKING:
    from OpenSSL.crypto import X509Name


def ffi_buf_to_string(buf: Any) -> str:
    return to_unicode(pyOpenSSLutil.ffi.string(buf))


def x509name_to_string(x509name: X509Name) -> str:
    # from OpenSSL.crypto.X509Name.__repr__
    result_buffer: Any = pyOpenSSLutil.ffi.new("char[]", 512)
    pyOpenSSLutil.lib.X509_NAME_oneline(
        x509name._name, result_buffer, len(result_buffer)
    )
    return ffi_buf_to_string(result_buffer)


def get_temp_key_info(ssl_object: Any) -> str | None:
    # adapted from OpenSSL apps/s_cb.c::ssl_print_tmp_key()
    if not hasattr(
        pyOpenSSLutil.lib, "SSL_get_server_tmp_key"
    ):  # removed in cryptography 40.0.0
        return None

    temp_key_p = pyOpenSSLutil.ffi.new("EVP_PKEY **")
    if not pyOpenSSLutil.lib.SSL_get_server_tmp_key(ssl_object, temp_key_p):
        return None
    temp_key = temp_key_p[0]
    if temp_key == pyOpenSSLutil.ffi.NULL:
        return None
    temp_key = pyOpenSSLutil.ffi.gc(temp_key, pyOpenSSLutil.lib.EVP_PKEY_free)
    key_info = []
    key_type = pyOpenSSLutil.lib.EVP_PKEY_id(temp_key)
    if key_type == pyOpenSSLutil.lib.EVP_PKEY_RSA:
        key_info.append("RSA")
    elif key_type == pyOpenSSLutil.lib.EVP_PKEY_DH:
        key_info.append("DH")
    elif key_type == pyOpenSSLutil.lib.EVP_PKEY_EC:
        key_info.append("ECDH")
        ec_key = pyOpenSSLutil.lib.EVP_PKEY_get1_EC_KEY(temp_key)
        ec_key = pyOpenSSLutil.ffi.gc(ec_key, pyOpenSSLutil.lib.EC_KEY_free)
        nid = pyOpenSSLutil.lib.EC_GROUP_get_curve_name(
            pyOpenSSLutil.lib.EC_KEY_get0_group(ec_key)
        )
        cname = pyOpenSSLutil.lib.EC_curve_nid2nist(nid)
        if cname == pyOpenSSLutil.ffi.NULL:
            cname = pyOpenSSLutil.lib.OBJ_nid2sn(nid)
        key_info.append(ffi_buf_to_string(cname))
    else:
        key_info.append(ffi_buf_to_string(pyOpenSSLutil.lib.OBJ_nid2sn(key_type)))
    key_info.append(f"{pyOpenSSLutil.lib.EVP_PKEY_bits(temp_key)} bits")
    return ", ".join(key_info)


def get_openssl_version() -> str:
    system_openssl_bytes = OpenSSL.SSL.SSLeay_version(OpenSSL.SSL.SSLEAY_VERSION)
    system_openssl = system_openssl_bytes.decode("ascii", errors="replace")
    return f"{OpenSSL.version.__version__} ({system_openssl})"
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/url.py
scrapy/utils/url.py
""" This module contains general purpose URL functions not found in the standard library. """ from __future__ import annotations import re import warnings from importlib import import_module from typing import TYPE_CHECKING, TypeAlias from urllib.parse import ParseResult, urldefrag, urlparse, urlunparse from warnings import warn from w3lib.url import __all__ as _public_w3lib_objects from w3lib.url import add_or_replace_parameter as _add_or_replace_parameter from w3lib.url import any_to_uri as _any_to_uri from w3lib.url import parse_url as _parse_url from scrapy.exceptions import ScrapyDeprecationWarning def __getattr__(name: str): if name in ("_unquotepath", "_safe_chars", "parse_url", *_public_w3lib_objects): obj_type = "attribute" if name == "_safe_chars" else "function" warnings.warn( f"The scrapy.utils.url.{name} {obj_type} is deprecated, use w3lib.url.{name} instead.", ScrapyDeprecationWarning, ) return getattr(import_module("w3lib.url"), name) raise AttributeError if TYPE_CHECKING: from collections.abc import Iterable from scrapy import Spider UrlT: TypeAlias = str | bytes | ParseResult def url_is_from_any_domain(url: UrlT, domains: Iterable[str]) -> bool: """Return True if the url belongs to any of the given domains""" host = _parse_url(url).netloc.lower() if not host: return False domains = [d.lower() for d in domains] return any((host == d) or (host.endswith(f".{d}")) for d in domains) def url_is_from_spider(url: UrlT, spider: type[Spider]) -> bool: """Return True if the url belongs to the given spider""" return url_is_from_any_domain( url, [spider.name, *getattr(spider, "allowed_domains", [])] ) def url_has_any_extension(url: UrlT, extensions: Iterable[str]) -> bool: """Return True if the url ends with one of the extensions provided""" lowercase_path = _parse_url(url).path.lower() return any(lowercase_path.endswith(ext) for ext in extensions) def escape_ajax(url: str) -> str: """ Return the crawlable url >>> 
escape_ajax("www.example.com/ajax.html#!key=value") 'www.example.com/ajax.html?_escaped_fragment_=key%3Dvalue' >>> escape_ajax("www.example.com/ajax.html?k1=v1&k2=v2#!key=value") 'www.example.com/ajax.html?k1=v1&k2=v2&_escaped_fragment_=key%3Dvalue' >>> escape_ajax("www.example.com/ajax.html?#!key=value") 'www.example.com/ajax.html?_escaped_fragment_=key%3Dvalue' >>> escape_ajax("www.example.com/ajax.html#!") 'www.example.com/ajax.html?_escaped_fragment_=' URLs that are not "AJAX crawlable" (according to Google) returned as-is: >>> escape_ajax("www.example.com/ajax.html#key=value") 'www.example.com/ajax.html#key=value' >>> escape_ajax("www.example.com/ajax.html#") 'www.example.com/ajax.html#' >>> escape_ajax("www.example.com/ajax.html") 'www.example.com/ajax.html' """ warn( "escape_ajax() is deprecated and will be removed in a future Scrapy version.", ScrapyDeprecationWarning, stacklevel=2, ) defrag, frag = urldefrag(url) if not frag.startswith("!"): return url return _add_or_replace_parameter(defrag, "_escaped_fragment_", frag[1:]) def add_http_if_no_scheme(url: str) -> str: """Add http as the default scheme if it is missing from the url.""" match = re.match(r"^\w+://", url, flags=re.IGNORECASE) if not match: parts = urlparse(url) scheme = "http:" if parts.netloc else "http://" url = scheme + url return url def _is_posix_path(string: str) -> bool: return bool( re.match( r""" ^ # start with... ( \. # ...a single dot, ( \. | [^/\.]+ # optionally followed by )? # either a second dot or some characters | ~ # $HOME )? # optional match of ".", ".." or ".blabla" / # at least one "/" for a file path, . 
# and something after the "/" """, string, flags=re.VERBOSE, ) ) def _is_windows_path(string: str) -> bool: return bool( re.match( r""" ^ ( [a-z]:\\ | \\\\ ) """, string, flags=re.IGNORECASE | re.VERBOSE, ) ) def _is_filesystem_path(string: str) -> bool: return _is_posix_path(string) or _is_windows_path(string) def guess_scheme(url: str) -> str: """Add an URL scheme if missing: file:// for filepath-like input or http:// otherwise.""" if _is_filesystem_path(url): return _any_to_uri(url) return add_http_if_no_scheme(url) def strip_url( url: str, strip_credentials: bool = True, strip_default_port: bool = True, origin_only: bool = False, strip_fragment: bool = True, ) -> str: """Strip URL string from some of its components: - ``strip_credentials`` removes "user:password@" - ``strip_default_port`` removes ":80" (resp. ":443", ":21") from http:// (resp. https://, ftp://) URLs - ``origin_only`` replaces path component with "/", also dropping query and fragment components ; it also strips credentials - ``strip_fragment`` drops any #fragment component """ parsed_url = urlparse(url) netloc = parsed_url.netloc if (strip_credentials or origin_only) and ( parsed_url.username or parsed_url.password ): netloc = netloc.split("@")[-1] if ( strip_default_port and parsed_url.port and (parsed_url.scheme, parsed_url.port) in ( ("http", 80), ("https", 443), ("ftp", 21), ) ): netloc = netloc.replace(f":{parsed_url.port}", "") return urlunparse( ( parsed_url.scheme, netloc, "/" if origin_only else parsed_url.path, "" if origin_only else parsed_url.params, "" if origin_only else parsed_url.query, "" if strip_fragment else parsed_url.fragment, ) )
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
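The `strip_url` function in the record above drops an explicit port only when it matches the scheme's default. That branch can be exercised standalone with nothing but `urllib.parse` — a minimal sketch, not scrapy's full implementation (no credential or fragment handling here):

```python
from urllib.parse import urlparse, urlunparse

# Default ports recognized by strip_url: http, https, ftp.
DEFAULT_PORTS = {("http", 80), ("https", 443), ("ftp", 21)}


def strip_default_port(url: str) -> str:
    """Drop an explicit port from *url* when it is the scheme's default."""
    p = urlparse(url)
    netloc = p.netloc
    if p.port and (p.scheme, p.port) in DEFAULT_PORTS:
        netloc = netloc.replace(f":{p.port}", "")
    return urlunparse((p.scheme, netloc, p.path, p.params, p.query, p.fragment))


print(strip_default_port("http://example.com:80/a?q=1"))  # http://example.com/a?q=1
print(strip_default_port("http://example.com:8080/a"))    # http://example.com:8080/a
```

Matching `(scheme, port)` as a pair, rather than the port alone, is what keeps `https://host:80/` untouched — port 80 is only the default for plain http.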
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/response.py
scrapy/utils/response.py
""" This module provides some useful functions for working with scrapy.http.Response objects """ from __future__ import annotations import os import re import tempfile import webbrowser from typing import TYPE_CHECKING, Any from weakref import WeakKeyDictionary from twisted.web import http from w3lib import html from scrapy.utils.python import to_bytes, to_unicode if TYPE_CHECKING: from collections.abc import Callable, Iterable from scrapy.http import Response, TextResponse _baseurl_cache: WeakKeyDictionary[Response, str] = WeakKeyDictionary() def get_base_url(response: TextResponse) -> str: """Return the base url of the given response, joined with the response url""" if response not in _baseurl_cache: text = response.text[0:4096] _baseurl_cache[response] = html.get_base_url( text, response.url, response.encoding ) return _baseurl_cache[response] _metaref_cache: WeakKeyDictionary[Response, tuple[None, None] | tuple[float, str]] = ( WeakKeyDictionary() ) def get_meta_refresh( response: TextResponse, ignore_tags: Iterable[str] = ("script", "noscript"), ) -> tuple[None, None] | tuple[float, str]: """Parse the http-equiv refresh parameter from the given response""" if response not in _metaref_cache: text = response.text[0:4096] _metaref_cache[response] = html.get_meta_refresh( text, get_base_url(response), response.encoding, ignore_tags=ignore_tags ) return _metaref_cache[response] def response_status_message(status: bytes | float | str) -> str: """Return status code plus status text descriptive message""" status_int = int(status) message = http.RESPONSES.get(status_int, "Unknown Status") return f"{status_int} {to_unicode(message)}" def _remove_html_comments(body: bytes) -> bytes: start = body.find(b"<!--") while start != -1: end = body.find(b"-->", start + 1) if end == -1: return body[:start] body = body[:start] + body[end + 3 :] start = body.find(b"<!--") return body def open_in_browser( response: TextResponse, _openfunc: Callable[[str], Any] = webbrowser.open, ) -> 
Any: """Open *response* in a local web browser, adjusting the `base tag`_ for external links to work, e.g. so that images and styles are displayed. .. _base tag: https://www.w3schools.com/tags/tag_base.asp For example: .. code-block:: python from scrapy.utils.response import open_in_browser def parse_details(self, response): if "item name" not in response.body: open_in_browser(response) """ # circular imports from scrapy.http import HtmlResponse, TextResponse # noqa: PLC0415 # XXX: this implementation is a bit dirty and could be improved body = response.body if isinstance(response, HtmlResponse): if b"<base" not in body: _remove_html_comments(body) repl = rf'\0<base href="{response.url}">' body = re.sub(rb"<head(?:[^<>]*?>)", to_bytes(repl), body, count=1) ext = ".html" elif isinstance(response, TextResponse): ext = ".txt" else: raise TypeError(f"Unsupported response type: {response.__class__.__name__}") fd, fname = tempfile.mkstemp(ext) os.write(fd, body) os.close(fd) return _openfunc(f"file://{fname}")
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
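`response_status_message` in the record above pairs a numeric status with its reason phrase via `twisted.web.http.RESPONSES`. The same behavior can be sketched with the standard library's `http.client.responses` mapping instead (an equivalent table for common codes, not scrapy's actual dependency):

```python
from http.client import responses  # stdlib code -> reason-phrase mapping


def response_status_message(status) -> str:
    """Return the status code plus its descriptive text, e.g. '404 Not Found'."""
    status_int = int(status)  # accepts int, float, or numeric string
    message = responses.get(status_int, "Unknown Status")
    return f"{status_int} {message}"


print(response_status_message(404))    # 404 Not Found
print(response_status_message("200"))  # 200 OK
print(response_status_message(999))    # 999 Unknown Status
```

The `int(status)` cast is what lets callers pass the status however they have it on hand, matching the permissive `bytes | float | str` signature in the original (bytes would need a decode first in this stdlib sketch).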
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/asyncio.py
scrapy/utils/asyncio.py
"""Utilities related to asyncio and its support in Scrapy.""" from __future__ import annotations import asyncio import logging import time from collections.abc import AsyncIterator, Callable, Coroutine, Iterable from typing import TYPE_CHECKING, Any, Concatenate, ParamSpec, TypeVar from twisted.internet.defer import Deferred from twisted.internet.task import LoopingCall from scrapy.utils.asyncgen import as_async_generator from scrapy.utils.reactor import is_asyncio_reactor_installed, is_reactor_installed if TYPE_CHECKING: from twisted.internet.base import DelayedCall # typing.Self, typing.TypeVarTuple and typing.Unpack require Python 3.11 from typing_extensions import Self, TypeVarTuple, Unpack _Ts = TypeVarTuple("_Ts") _T = TypeVar("_T") _P = ParamSpec("_P") logger = logging.getLogger(__name__) def is_asyncio_available() -> bool: """Check if it's possible to call asyncio code that relies on the asyncio event loop. .. versionadded:: VERSION Currently this function is identical to :func:`scrapy.utils.reactor.is_asyncio_reactor_installed`: it returns ``True`` if the Twisted reactor that is installed is :class:`~twisted.internet.asyncioreactor.AsyncioSelectorReactor`, returns ``False`` if a different reactor is installed, and raises a :exc:`RuntimeError` if no reactor is installed. In a future Scrapy version, when Scrapy supports running without a Twisted reactor, this function will also return ``True`` when running in that mode, so code that doesn't directly require a Twisted reactor should use this function instead of :func:`~scrapy.utils.reactor.is_asyncio_reactor_installed`. When this returns ``True``, an asyncio loop is installed and used by Scrapy. It's possible to call functions that require it, such as :func:`asyncio.sleep`, and await on :class:`asyncio.Future` objects in Scrapy-related code. When this returns ``False``, a non-asyncio Twisted reactor is installed. 
It's not possible to use asyncio features that require an asyncio event loop or await on :class:`asyncio.Future` objects in Scrapy-related code, but it's possible to await on :class:`~twisted.internet.defer.Deferred` objects. """ if not is_reactor_installed(): raise RuntimeError( "is_asyncio_available() called without an installed reactor." ) return is_asyncio_reactor_installed() async def _parallel_asyncio( iterable: Iterable[_T] | AsyncIterator[_T], count: int, callable_: Callable[Concatenate[_T, _P], Coroutine[Any, Any, None]], *args: _P.args, **kwargs: _P.kwargs, ) -> None: """Execute a callable over the objects in the given iterable, in parallel, using no more than ``count`` concurrent calls. This function is only used in :meth:`scrapy.core.scraper.Scraper.handle_spider_output_async` and so it assumes that neither *callable* nor iterating *iterable* will raise an exception. """ queue: asyncio.Queue[_T | None] = asyncio.Queue(count * 2) async def worker() -> None: while True: item = await queue.get() if item is None: break try: await callable_(item, *args, **kwargs) finally: queue.task_done() async def fill_queue() -> None: async for item in as_async_generator(iterable): await queue.put(item) for _ in range(count): await queue.put(None) fill_task = asyncio.create_task(fill_queue()) work_tasks = [asyncio.create_task(worker()) for _ in range(count)] await asyncio.wait([fill_task, *work_tasks]) class AsyncioLoopingCall: """A simple implementation of a periodic call using asyncio, keeping some API and behavior compatibility with the Twisted ``LoopingCall``. The function is called every *interval* seconds, independent of the finish time of the previous call. If the function is still running when it's time to call it again, calls are skipped until the function finishes. The function must not return a coroutine or a ``Deferred``. 
""" def __init__(self, func: Callable[_P, _T], *args: _P.args, **kwargs: _P.kwargs): self._func: Callable[_P, _T] = func self._args: tuple[Any, ...] = args self._kwargs: dict[str, Any] = kwargs self._task: asyncio.Task | None = None self.interval: float | None = None self._start_time: float | None = None @property def running(self) -> bool: return self._start_time is not None def start(self, interval: float, now: bool = True) -> None: """Start calling the function every *interval* seconds. :param interval: The interval in seconds between calls. :type interval: float :param now: If ``True``, also call the function immediately. :type now: bool """ if self.running: raise RuntimeError("AsyncioLoopingCall already running") if interval <= 0: raise ValueError("Interval must be greater than 0") self.interval = interval self._start_time = time.time() if now: self._call() loop = asyncio.get_event_loop() self._task = loop.create_task(self._loop()) def _to_sleep(self) -> float: """Return the time to sleep until the next call.""" assert self.interval is not None assert self._start_time is not None now = time.time() running_for = now - self._start_time return self.interval - (running_for % self.interval) async def _loop(self) -> None: """Run an infinite loop that calls the function periodically.""" while self.running: await asyncio.sleep(self._to_sleep()) self._call() def stop(self) -> None: """Stop the periodic calls.""" self.interval = self._start_time = None if self._task is not None: self._task.cancel() self._task = None def _call(self) -> None: """Execute the function.""" try: result = self._func(*self._args, **self._kwargs) except Exception: logger.exception("Error calling the AsyncioLoopingCall function") self.stop() else: if isinstance(result, (Coroutine, Deferred)): self.stop() raise TypeError( "The AsyncioLoopingCall function must not return a coroutine or a Deferred" ) def create_looping_call( func: Callable[_P, _T], *args: _P.args, **kwargs: _P.kwargs ) -> 
AsyncioLoopingCall | LoopingCall: """Create an instance of a looping call class. This creates an instance of :class:`AsyncioLoopingCall` or :class:`LoopingCall`, depending on whether asyncio support is available. """ if is_asyncio_available(): return AsyncioLoopingCall(func, *args, **kwargs) return LoopingCall(func, *args, **kwargs) def call_later( delay: float, func: Callable[[Unpack[_Ts]], object], *args: Unpack[_Ts] ) -> CallLaterResult: """Schedule a function to be called after a delay. This uses either ``loop.call_later()`` or ``reactor.callLater()``, depending on whether asyncio support is available. """ if is_asyncio_available(): loop = asyncio.get_event_loop() return CallLaterResult.from_asyncio(loop.call_later(delay, func, *args)) from twisted.internet import reactor return CallLaterResult.from_twisted(reactor.callLater(delay, func, *args)) class CallLaterResult: """An universal result for :func:`call_later`, wrapping either :class:`asyncio.TimerHandle` or :class:`twisted.internet.base.DelayedCall`. The provided API is close to the :class:`asyncio.TimerHandle` one: there is no ``active()`` (as there is no such public API in :class:`asyncio.TimerHandle`) but ``cancel()`` can be called on already called or cancelled instances. """ _timer_handle: asyncio.TimerHandle | None = None _delayed_call: DelayedCall | None = None @classmethod def from_asyncio(cls, timer_handle: asyncio.TimerHandle) -> Self: """Create a CallLaterResult from an asyncio TimerHandle.""" o = cls() o._timer_handle = timer_handle return o @classmethod def from_twisted(cls, delayed_call: DelayedCall) -> Self: """Create a CallLaterResult from a Twisted DelayedCall.""" o = cls() o._delayed_call = delayed_call return o def cancel(self) -> None: """Cancel the underlying delayed call. Does nothing if the delayed call was already called or cancelled. 
""" if self._timer_handle: self._timer_handle.cancel() self._timer_handle = None elif self._delayed_call and self._delayed_call.active(): self._delayed_call.cancel() self._delayed_call = None
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
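`AsyncioLoopingCall._to_sleep` in the record above keeps calls aligned to the start time rather than to the end of the previous call, so timing drift does not accumulate: the sleep is `interval - (elapsed % interval)`. The arithmetic can be checked as a pure function (a sketch with explicit clock arguments instead of `time.time()`):

```python
def time_to_next_call(start_time: float, now: float, interval: float) -> float:
    """Seconds to sleep so calls stay aligned to start_time + k * interval."""
    running_for = now - start_time
    return interval - (running_for % interval)


# Started at t=0 with a 5 s interval; at t=7 the next call is due at t=10.
print(time_to_next_call(0.0, 7.0, 5.0))   # 3.0
# At an exact multiple, the full interval is returned (a call just fired).
print(time_to_next_call(0.0, 10.0, 5.0))  # 5.0
# Even if one call overran past t=10, at t=12 we realign to t=15.
print(time_to_next_call(0.0, 12.0, 5.0))  # 3.0
```

The third case shows the documented skip behavior: a long-running call does not cause a burst of catch-up calls; the loop simply waits for the next aligned slot.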
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/iterators.py
scrapy/utils/iterators.py
from __future__ import annotations

import csv
import logging
import re
from io import StringIO
from typing import TYPE_CHECKING, Any, Literal, cast, overload
from warnings import warn

from lxml import etree

from scrapy.exceptions import ScrapyDeprecationWarning
from scrapy.http import Response, TextResponse
from scrapy.selector import Selector
from scrapy.utils.python import re_rsearch

if TYPE_CHECKING:
    from collections.abc import Callable, Iterator

logger = logging.getLogger(__name__)


def xmliter(obj: Response | str | bytes, nodename: str) -> Iterator[Selector]:
    """Return a iterator of Selector's over all nodes of a XML document,
    given the name of the node to iterate. Useful for parsing XML feeds.

    obj can be:
    - a Response object
    - a unicode string
    - a string encoded as utf-8
    """
    warn(
        (
            "xmliter is deprecated and its use strongly discouraged because "
            "it is vulnerable to ReDoS attacks. Use xmliter_lxml instead. See "
            "https://github.com/scrapy/scrapy/security/advisories/GHSA-cc65-xxvf-f7r9"
        ),
        ScrapyDeprecationWarning,
        stacklevel=2,
    )

    nodename_patt = re.escape(nodename)

    DOCUMENT_HEADER_RE = re.compile(r"<\?xml[^>]+>\s*", re.DOTALL)
    HEADER_END_RE = re.compile(rf"<\s*/{nodename_patt}\s*>", re.DOTALL)
    END_TAG_RE = re.compile(r"<\s*/([^\s>]+)\s*>", re.DOTALL)
    NAMESPACE_RE = re.compile(r"((xmlns[:A-Za-z]*)=[^>\s]+)", re.DOTALL)
    text = _body_or_str(obj)

    document_header_match = re.search(DOCUMENT_HEADER_RE, text)
    document_header = (
        document_header_match.group().strip() if document_header_match else ""
    )
    header_end_idx = re_rsearch(HEADER_END_RE, text)
    header_end = text[header_end_idx[1] :].strip() if header_end_idx else ""
    namespaces: dict[str, str] = {}
    if header_end:
        for tagname in reversed(re.findall(END_TAG_RE, header_end)):
            assert header_end_idx
            tag = re.search(
                rf"<\s*{tagname}.*?xmlns[:=][^>]*>",
                text[: header_end_idx[1]],
                re.DOTALL,
            )
            if tag:
                for x in re.findall(NAMESPACE_RE, tag.group()):
                    namespaces[x[1]] = x[0]

    r = re.compile(rf"<{nodename_patt}[\s>].*?</{nodename_patt}>", re.DOTALL)
    for match in r.finditer(text):
        nodetext = (
            document_header
            + match.group().replace(
                nodename, f"{nodename} {' '.join(namespaces.values())}", 1
            )
            + header_end
        )
        yield Selector(text=nodetext, type="xml")


def xmliter_lxml(
    obj: Response | str | bytes,
    nodename: str,
    namespace: str | None = None,
    prefix: str = "x",
) -> Iterator[Selector]:
    reader = _StreamReader(obj)
    tag = f"{{{namespace}}}{nodename}" if namespace else nodename
    iterable = etree.iterparse(
        reader,
        encoding=reader.encoding,
        events=("end", "start-ns"),
        resolve_entities=False,
        huge_tree=True,
    )
    selxpath = "//" + (f"{prefix}:{nodename}" if namespace else nodename)
    needs_namespace_resolution = not namespace and ":" in nodename
    if needs_namespace_resolution:
        prefix, nodename = nodename.split(":", maxsplit=1)
    for event, data in iterable:
        if event == "start-ns":
            assert isinstance(data, tuple)
            if needs_namespace_resolution:
                _prefix, _namespace = data
                if _prefix != prefix:
                    continue
                namespace = _namespace
                needs_namespace_resolution = False
                selxpath = f"//{prefix}:{nodename}"
                tag = f"{{{namespace}}}{nodename}"
            continue
        assert isinstance(data, etree._Element)
        node = data
        if node.tag != tag:
            continue
        nodetext = etree.tostring(node, encoding="unicode")
        node.clear()
        xs = Selector(text=nodetext, type="xml")
        if namespace:
            xs.register_namespace(prefix, namespace)
        yield xs.xpath(selxpath)[0]


class _StreamReader:
    def __init__(self, obj: Response | str | bytes):
        self._ptr: int = 0
        self._text: str | bytes
        if isinstance(obj, TextResponse):
            self._text, self.encoding = obj.body, obj.encoding
        elif isinstance(obj, Response):
            self._text, self.encoding = obj.body, "utf-8"
        else:
            self._text, self.encoding = obj, "utf-8"
        self._is_unicode: bool = isinstance(self._text, str)
        self._is_first_read: bool = True

    def read(self, n: int = 65535) -> bytes:
        method: Callable[[int], bytes] = (
            self._read_unicode if self._is_unicode else self._read_string
        )
        result = method(n)
        if self._is_first_read:
            self._is_first_read = False
            result = result.lstrip()
        return result

    def _read_string(self, n: int = 65535) -> bytes:
        s, e = self._ptr, self._ptr + n
        self._ptr = e
        return cast("bytes", self._text)[s:e]

    def _read_unicode(self, n: int = 65535) -> bytes:
        s, e = self._ptr, self._ptr + n
        self._ptr = e
        return cast("str", self._text)[s:e].encode("utf-8")


def csviter(
    obj: Response | str | bytes,
    delimiter: str | None = None,
    headers: list[str] | None = None,
    encoding: str | None = None,
    quotechar: str | None = None,
) -> Iterator[dict[str, str]]:
    """Returns an iterator of dictionaries from the given csv object

    obj can be:
    - a Response object
    - a unicode string
    - a string encoded as utf-8

    delimiter is the character used to separate fields on the given obj.

    headers is an iterable that when provided offers the keys
    for the returned dictionaries, if not the first row is used.

    quotechar is the character used to enclosure fields on the given obj.
    """
    if encoding is not None:
        warn(
            "The encoding argument of csviter() is ignored and will be removed"
            " in a future Scrapy version.",
            category=ScrapyDeprecationWarning,
            stacklevel=2,
        )

    lines = StringIO(_body_or_str(obj, unicode=True))

    kwargs: dict[str, Any] = {}
    if delimiter:
        kwargs["delimiter"] = delimiter
    if quotechar:
        kwargs["quotechar"] = quotechar

    csv_r = csv.reader(lines, **kwargs)
    if not headers:
        try:
            headers = next(csv_r)
        except StopIteration:
            return

    for row in csv_r:
        if len(row) != len(headers):
            logger.warning(
                "ignoring row %(csvlnum)d (length: %(csvrow)d, "
                "should be: %(csvheader)d)",
                {
                    "csvlnum": csv_r.line_num,
                    "csvrow": len(row),
                    "csvheader": len(headers),
                },
            )
            continue
        yield dict(zip(headers, row, strict=False))


@overload
def _body_or_str(obj: Response | str | bytes) -> str: ...


@overload
def _body_or_str(obj: Response | str | bytes, unicode: Literal[True]) -> str: ...


@overload
def _body_or_str(obj: Response | str | bytes, unicode: Literal[False]) -> bytes: ...


def _body_or_str(obj: Response | str | bytes, unicode: bool = True) -> str | bytes:
    expected_types = (Response, str, bytes)
    if not isinstance(obj, expected_types):
        expected_types_str = " or ".join(t.__name__ for t in expected_types)
        raise TypeError(
            f"Object {obj!r} must be {expected_types_str}, not {type(obj).__name__}"
        )
    if isinstance(obj, Response):
        if not unicode:
            return obj.body
        if isinstance(obj, TextResponse):
            return obj.text
        return obj.body.decode("utf-8")
    if isinstance(obj, str):
        return obj if unicode else obj.encode("utf-8")
    return obj.decode("utf-8") if unicode else obj
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
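The `csviter` function in the record above turns each CSV row into a dict keyed by the header row, silently skipping rows whose field count does not match. The core of that behavior fits in a stdlib-only sketch (taking a plain string instead of a Response, and skipping instead of logging):

```python
import csv
from io import StringIO


def csviter(text: str, delimiter: str = ",", headers=None):
    """Yield one dict per CSV row, skipping rows whose length mismatches."""
    reader = csv.reader(StringIO(text), delimiter=delimiter)
    if headers is None:
        try:
            headers = next(reader)  # first row supplies the keys
        except StopIteration:
            return  # empty input: nothing to yield
    for row in reader:
        if len(row) != len(headers):
            continue  # scrapy logs a warning here instead of skipping silently
        yield dict(zip(headers, row))


data = "id,name\n1,foo\n2,bar\n3\n"
print(list(csviter(data)))
# [{'id': '1', 'name': 'foo'}, {'id': '2', 'name': 'bar'}]
```

The `try/except StopIteration` around `next(reader)` matters inside a generator: since PEP 479, an escaping `StopIteration` would surface as a `RuntimeError` rather than quietly ending the iteration.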
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/serialize.py
scrapy/utils/serialize.py
import datetime
import decimal
import json
from typing import Any

from itemadapter import ItemAdapter, is_item
from twisted.internet import defer

from scrapy.http import Request, Response


class ScrapyJSONEncoder(json.JSONEncoder):
    DATE_FORMAT = "%Y-%m-%d"
    TIME_FORMAT = "%H:%M:%S"

    def default(self, o: Any) -> Any:
        if isinstance(o, set):
            return list(o)
        if isinstance(o, datetime.datetime):
            return o.strftime(f"{self.DATE_FORMAT} {self.TIME_FORMAT}")
        if isinstance(o, datetime.date):
            return o.strftime(self.DATE_FORMAT)
        if isinstance(o, datetime.time):
            return o.strftime(self.TIME_FORMAT)
        if isinstance(o, decimal.Decimal):
            return str(o)
        if isinstance(o, defer.Deferred):
            return str(o)
        if isinstance(o, Request):
            return f"<{type(o).__name__} {o.method} {o.url}>"
        if isinstance(o, Response):
            return f"<{type(o).__name__} {o.status} {o.url}>"
        if is_item(o):
            return ItemAdapter(o).asdict()
        return super().default(o)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
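`ScrapyJSONEncoder` in the record above works by overriding `json.JSONEncoder.default`, which `json.dumps` calls only for objects it cannot serialize natively. A stdlib-only sketch of the same pattern, covering a few of the same types (note the `datetime.datetime` check must precede `datetime.date` in the original, since `datetime` is a `date` subclass — this sketch only handles dates):

```python
import datetime
import decimal
import json


class MiniEncoder(json.JSONEncoder):
    """Fallback conversions for types json cannot serialize natively."""

    def default(self, o):
        if isinstance(o, set):
            return sorted(o)  # sets are unordered; sort for stable output
        if isinstance(o, datetime.date):
            return o.strftime("%Y-%m-%d")
        if isinstance(o, decimal.Decimal):
            return str(o)  # string keeps full precision, unlike float(o)
        return super().default(o)  # raises TypeError for anything else


doc = {
    "tags": {"b", "a"},
    "day": datetime.date(2024, 1, 2),
    "price": decimal.Decimal("9.99"),
}
print(json.dumps(doc, cls=MiniEncoder, sort_keys=True))
# {"day": "2024-01-02", "price": "9.99", "tags": ["a", "b"]}
```

Falling through to `super().default(o)` at the end preserves the standard behavior of raising `TypeError` for genuinely unserializable objects, rather than silently dropping them.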
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/template.py
scrapy/utils/template.py
"""Helper functions for working with templates""" from __future__ import annotations import re import string from pathlib import Path from typing import TYPE_CHECKING, Any if TYPE_CHECKING: from os import PathLike def render_templatefile(path: str | PathLike, **kwargs: Any) -> None: path_obj = Path(path) raw = path_obj.read_text("utf8") content = string.Template(raw).substitute(**kwargs) render_path = path_obj.with_suffix("") if path_obj.suffix == ".tmpl" else path_obj if path_obj.suffix == ".tmpl": path_obj.rename(render_path) render_path.write_text(content, "utf8") CAMELCASE_INVALID_CHARS = re.compile(r"[^a-zA-Z\d]") def string_camelcase(string: str) -> str: """Convert a word to its CamelCase version and remove invalid chars >>> string_camelcase('lost-pound') 'LostPound' >>> string_camelcase('missing_images') 'MissingImages' """ return CAMELCASE_INVALID_CHARS.sub("", string.title())
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
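`string_camelcase` in the record above is a two-step transformation — `str.title()` to capitalize word boundaries, then a regex to strip every non-alphanumeric character — and is small enough to exercise standalone:

```python
import re

# Anything that is not a letter or digit is dropped after title-casing.
CAMELCASE_INVALID_CHARS = re.compile(r"[^a-zA-Z\d]")


def string_camelcase(string: str) -> str:
    """Title-case each word, then strip every non-alphanumeric character."""
    return CAMELCASE_INVALID_CHARS.sub("", string.title())


print(string_camelcase("lost-pound"))      # LostPound
print(string_camelcase("missing_images"))  # MissingImages
```

The order matters: `str.title()` treats `-` and `_` as word separators, so capitalization happens before the separators are removed; stripping first would yield `Lostpound` instead.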
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/ftp.py
scrapy/utils/ftp.py
import posixpath
from ftplib import FTP, error_perm
from posixpath import dirname
from typing import IO


def ftp_makedirs_cwd(ftp: FTP, path: str, first_call: bool = True) -> None:
    """Set the current directory of the FTP connection given in the ``ftp``
    argument (as a ftplib.FTP object), creating all parent directories if they
    don't exist.

    The ftplib.FTP object must be already connected and logged in.
    """
    try:
        ftp.cwd(path)
    except error_perm:
        ftp_makedirs_cwd(ftp, dirname(path), False)
        ftp.mkd(path)
        if first_call:
            ftp.cwd(path)


def ftp_store_file(
    *,
    path: str,
    file: IO[bytes],
    host: str,
    port: int,
    username: str,
    password: str,
    use_active_mode: bool = False,
    overwrite: bool = True,
) -> None:
    """Opens a FTP connection with passed credentials, sets current directory
    to the directory extracted from given path, then uploads the file to server
    """
    with FTP() as ftp:
        ftp.connect(host, port)
        ftp.login(username, password)
        if use_active_mode:
            ftp.set_pasv(False)
        file.seek(0)
        dirname, filename = posixpath.split(path)
        ftp_makedirs_cwd(ftp, dirname)
        command = "STOR" if overwrite else "APPE"
        ftp.storbinary(f"{command} {filename}", file)
        file.close()
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/defer.py
scrapy/utils/defer.py
""" Helper functions for dealing with Twisted deferreds """ from __future__ import annotations import asyncio import inspect import warnings from asyncio import Future from collections.abc import Awaitable, Coroutine, Iterable, Iterator from functools import wraps from typing import ( TYPE_CHECKING, Any, Concatenate, Generic, ParamSpec, TypeVar, cast, overload, ) from twisted.internet.defer import Deferred, DeferredList, fail, succeed from twisted.internet.task import Cooperator from twisted.python import failure from scrapy.exceptions import ScrapyDeprecationWarning from scrapy.utils.asyncio import is_asyncio_available from scrapy.utils.python import global_object_name if TYPE_CHECKING: from collections.abc import AsyncIterator, Callable from twisted.python.failure import Failure _T = TypeVar("_T") _T2 = TypeVar("_T2") _P = ParamSpec("_P") _DEFER_DELAY = 0.1 def defer_fail(_failure: Failure) -> Deferred[Any]: """Same as twisted.internet.defer.fail but delay calling errback until next reactor loop It delays by 100ms so reactor has a chance to go through readers and writers before attending pending delayed calls, so do not set delay to zero. """ warnings.warn( "scrapy.utils.defer.defer_fail() is deprecated, use" " twisted.internet.defer.fail(), plus an explicit sleep if needed.", category=ScrapyDeprecationWarning, stacklevel=2, ) from twisted.internet import reactor d: Deferred[Any] = Deferred() reactor.callLater(_DEFER_DELAY, d.errback, _failure) return d def defer_succeed(result: _T) -> Deferred[_T]: """Same as twisted.internet.defer.succeed but delay calling callback until next reactor loop It delays by 100ms so reactor has a chance to go through readers and writers before attending pending delayed calls, so do not set delay to zero. 
""" warnings.warn( "scrapy.utils.defer.defer_succeed() is deprecated, use" " twisted.internet.defer.succeed(), plus an explicit sleep if needed.", category=ScrapyDeprecationWarning, stacklevel=2, ) from twisted.internet import reactor d: Deferred[_T] = Deferred() reactor.callLater(_DEFER_DELAY, d.callback, result) return d async def _defer_sleep_async() -> None: """Delay by _DEFER_DELAY so reactor has a chance to go through readers and writers before attending pending delayed calls, so do not set delay to zero. """ if is_asyncio_available(): await asyncio.sleep(_DEFER_DELAY) else: from twisted.internet import reactor d: Deferred[None] = Deferred() reactor.callLater(_DEFER_DELAY, d.callback, None) await d def defer_result(result: Any) -> Deferred[Any]: warnings.warn( "scrapy.utils.defer.defer_result() is deprecated, use" " twisted.internet.defer.success() and twisted.internet.defer.fail()," " plus an explicit sleep if needed, or explicit reactor.callLater().", category=ScrapyDeprecationWarning, stacklevel=2, ) if isinstance(result, Deferred): return result from twisted.internet import reactor d: Deferred[Any] = Deferred() if isinstance(result, failure.Failure): reactor.callLater(_DEFER_DELAY, d.errback, result) else: reactor.callLater(_DEFER_DELAY, d.callback, result) return d @overload def mustbe_deferred( f: Callable[_P, Deferred[_T]], *args: _P.args, **kw: _P.kwargs ) -> Deferred[_T]: ... @overload def mustbe_deferred( f: Callable[_P, _T], *args: _P.args, **kw: _P.kwargs ) -> Deferred[_T]: ... 
def mustbe_deferred( f: Callable[_P, Deferred[_T] | _T], *args: _P.args, **kw: _P.kwargs, ) -> Deferred[_T]: """Same as twisted.internet.defer.maybeDeferred, but delay calling callback/errback to next reactor loop """ warnings.warn( "scrapy.utils.defer.mustbe_deferred() is deprecated, use" " twisted.internet.defer.maybeDeferred(), with an explicit sleep if needed.", category=ScrapyDeprecationWarning, stacklevel=2, ) result: _T | Deferred[_T] | Failure try: result = f(*args, **kw) except Exception: result = failure.Failure() return defer_result(result) def parallel( iterable: Iterable[_T], count: int, callable: Callable[Concatenate[_T, _P], _T2], # noqa: A002 *args: _P.args, **named: _P.kwargs, ) -> Deferred[list[tuple[bool, Iterator[_T2]]]]: """Execute a callable over the objects in the given iterable, in parallel, using no more than ``count`` concurrent calls. Taken from: https://jcalderone.livejournal.com/24285.html """ coop = Cooperator() work: Iterator[_T2] = (callable(elem, *args, **named) for elem in iterable) return DeferredList([coop.coiterate(work) for _ in range(count)]) class _AsyncCooperatorAdapter(Iterator, Generic[_T]): """A class that wraps an async iterable into a normal iterator suitable for using in Cooperator.coiterate(). As it's only needed for parallel_async(), it calls the callable directly in the callback, instead of providing a more generic interface. On the outside, this class behaves as an iterator that yields Deferreds. Each Deferred is fired with the result of the callable which was called on the next result from aiterator. It raises StopIteration when aiterator is exhausted, as expected. Cooperator calls __next__() multiple times and waits on the Deferreds returned from it. As async generators (since Python 3.8) don't support awaiting on __anext__() several times in parallel, we need to serialize this. It's done by storing the Deferreds returned from __next__() and firing the oldest one when a result from __anext__() is available. 
The workflow: 1. When __next__() is called for the first time, it creates a Deferred, stores it in self.waiting_deferreds and returns it. It also makes a Deferred that will wait for self.aiterator.__anext__() and puts it into self.anext_deferred. 2. If __next__() is called again before self.anext_deferred fires, more Deferreds are added to self.waiting_deferreds. 3. When self.anext_deferred fires, it either calls _callback() or _errback(). Both clear self.anext_deferred. 3.1. _callback() calls the callable passing the result value that it takes, pops a Deferred from self.waiting_deferreds, and if the callable result was a Deferred, it chains those Deferreds so that the waiting Deferred will fire when the result Deferred does, otherwise it fires it directly. This causes one awaiting task to receive a result. If self.waiting_deferreds is still not empty, new __anext__() is called and self.anext_deferred is populated. 3.2. _errback() checks the exception class. If it's StopAsyncIteration it means self.aiterator is exhausted and so it sets self.finished and fires all self.waiting_deferreds. Other exceptions are propagated. 4. If __next__() is called after __anext__() was handled, then if self.finished is True, it raises StopIteration, otherwise it acts like in step 2, but if self.anext_deferred is now empty is also populates it with a new __anext__(). Note that CooperativeTask ignores the value returned from the Deferred that it waits for, so we fire them with None when needed. It may be possible to write an async iterator-aware replacement for Cooperator/CooperativeTask and use it instead of this adapter to achieve the same goal. 
""" def __init__( self, aiterable: AsyncIterator[_T], callable_: Callable[Concatenate[_T, _P], Deferred[Any] | None], *callable_args: _P.args, **callable_kwargs: _P.kwargs, ): self.aiterator: AsyncIterator[_T] = aiterable.__aiter__() self.callable: Callable[Concatenate[_T, _P], Deferred[Any] | None] = callable_ self.callable_args: tuple[Any, ...] = callable_args self.callable_kwargs: dict[str, Any] = callable_kwargs self.finished: bool = False self.waiting_deferreds: list[Deferred[Any]] = [] self.anext_deferred: Deferred[_T] | None = None def _callback(self, result: _T) -> None: # This gets called when the result from aiterator.__anext__() is available. # It calls the callable on it and sends the result to the oldest waiting Deferred # (by chaining if the result is a Deferred too or by firing if not). self.anext_deferred = None callable_result = self.callable( result, *self.callable_args, **self.callable_kwargs ) d = self.waiting_deferreds.pop(0) if isinstance(callable_result, Deferred): callable_result.chainDeferred(d) else: d.callback(None) if self.waiting_deferreds: self._call_anext() def _errback(self, failure: Failure) -> None: # This gets called on any exceptions in aiterator.__anext__(). # It handles StopAsyncIteration by stopping the iteration and reraises all others. self.anext_deferred = None failure.trap(StopAsyncIteration) self.finished = True for d in self.waiting_deferreds: d.callback(None) def _call_anext(self) -> None: # This starts waiting for the next result from aiterator. # If aiterator is exhausted, _errback will be called. self.anext_deferred = deferred_from_coro(self.aiterator.__anext__()) self.anext_deferred.addCallbacks(self._callback, self._errback) def __next__(self) -> Deferred[Any]: # This puts a new Deferred into self.waiting_deferreds and returns it. # It also calls __anext__() if needed. 
if self.finished: raise StopIteration d: Deferred[Any] = Deferred() self.waiting_deferreds.append(d) if not self.anext_deferred: self._call_anext() return d def parallel_async( async_iterable: AsyncIterator[_T], count: int, callable: Callable[Concatenate[_T, _P], Deferred[Any] | None], # noqa: A002 *args: _P.args, **named: _P.kwargs, ) -> Deferred[list[tuple[bool, Iterator[Deferred[Any]]]]]: """Like ``parallel`` but for async iterators""" coop = Cooperator() work: Iterator[Deferred[Any]] = _AsyncCooperatorAdapter( async_iterable, callable, *args, **named ) dl: Deferred[list[tuple[bool, Iterator[Deferred[Any]]]]] = DeferredList( [coop.coiterate(work) for _ in range(count)] ) return dl def process_chain( callbacks: Iterable[Callable[Concatenate[_T, _P], _T]], input: _T, # noqa: A002 *a: _P.args, **kw: _P.kwargs, ) -> Deferred[_T]: """Return a Deferred built by chaining the given callbacks""" warnings.warn( "process_chain() is deprecated.", category=ScrapyDeprecationWarning, stacklevel=2, ) d: Deferred[_T] = Deferred() for x in callbacks: d.addCallback(x, *a, **kw) d.callback(input) return d def process_parallel( callbacks: Iterable[Callable[Concatenate[_T, _P], _T2]], input: _T, # noqa: A002 *a: _P.args, **kw: _P.kwargs, ) -> Deferred[list[_T2]]: # pragma: no cover """Return a Deferred with the output of all successful calls to the given callbacks """ warnings.warn( "process_parallel() is deprecated.", category=ScrapyDeprecationWarning, stacklevel=2, ) dfds = [succeed(input).addCallback(x, *a, **kw) for x in callbacks] d: Deferred[list[tuple[bool, _T2]]] = DeferredList( dfds, fireOnOneErrback=True, consumeErrors=True ) d2: Deferred[list[_T2]] = d.addCallback(lambda r: [x[1] for x in r]) def eb(failure: Failure) -> Failure: return failure.value.subFailure d2.addErrback(eb) return d2 def iter_errback( iterable: Iterable[_T], errback: Callable[Concatenate[Failure, _P], Any], *a: _P.args, **kw: _P.kwargs, ) -> Iterable[_T]: """Wrap an iterable calling an errback if an 
error is caught while iterating it. """ it = iter(iterable) while True: try: yield next(it) except StopIteration: break except Exception: errback(failure.Failure(), *a, **kw) async def aiter_errback( aiterable: AsyncIterator[_T], errback: Callable[Concatenate[Failure, _P], Any], *a: _P.args, **kw: _P.kwargs, ) -> AsyncIterator[_T]: """Wrap an async iterable calling an errback if an error is caught while iterating it. Similar to :func:`scrapy.utils.defer.iter_errback`. """ it = aiterable.__aiter__() while True: try: yield await it.__anext__() except StopAsyncIteration: break except Exception: errback(failure.Failure(), *a, **kw) @overload def deferred_from_coro(o: Awaitable[_T]) -> Deferred[_T]: ... @overload def deferred_from_coro(o: _T2) -> _T2: ... def deferred_from_coro(o: Awaitable[_T] | _T2) -> Deferred[_T] | _T2: """Convert a coroutine or other awaitable object into a Deferred, or return the object as is if it isn't a coroutine.""" if isinstance(o, Deferred): return o if inspect.isawaitable(o): if not is_asyncio_available(): # wrapping the coroutine directly into a Deferred, this doesn't work correctly with coroutines # that use asyncio, e.g. "await asyncio.sleep(1)" return Deferred.fromCoroutine(cast("Coroutine[Deferred[Any], Any, _T]", o)) # wrapping the coroutine into a Future and then into a Deferred, this requires AsyncioSelectorReactor return Deferred.fromFuture(asyncio.ensure_future(o)) return o def deferred_f_from_coro_f( coro_f: Callable[_P, Awaitable[_T]], ) -> Callable[_P, Deferred[_T]]: """Convert a coroutine function into a function that returns a Deferred. The coroutine function will be called at the time when the wrapper is called. Wrapper args will be passed to it. This is useful for callback chains, as callback functions are called with the previous callback result. 
""" @wraps(coro_f) def f(*coro_args: _P.args, **coro_kwargs: _P.kwargs) -> Deferred[_T]: return deferred_from_coro(coro_f(*coro_args, **coro_kwargs)) return f def maybeDeferred_coro( f: Callable[_P, Any], *args: _P.args, **kw: _P.kwargs ) -> Deferred[Any]: """Copy of defer.maybeDeferred that also converts coroutines to Deferreds.""" try: result = f(*args, **kw) except: # noqa: E722 # pylint: disable=bare-except return fail(failure.Failure(captureVars=Deferred.debug)) if isinstance(result, Deferred): warnings.warn( f"{global_object_name(f)} returned a Deferred, this is deprecated." f" Please refactor this function to return a coroutine.", ScrapyDeprecationWarning, stacklevel=2, ) return result if asyncio.isfuture(result) or inspect.isawaitable(result): return deferred_from_coro(result) if isinstance(result, failure.Failure): return fail(result) return succeed(result) def deferred_to_future(d: Deferred[_T]) -> Future[_T]: """Return an :class:`asyncio.Future` object that wraps *d*. This function requires :class:`~twisted.internet.asyncioreactor.AsyncioSelectorReactor` to be installed. When :ref:`using the asyncio reactor <install-asyncio>`, you cannot await on :class:`~twisted.internet.defer.Deferred` objects from :ref:`Scrapy callables defined as coroutines <coroutine-support>`, you can only await on ``Future`` objects. Wrapping ``Deferred`` objects into ``Future`` objects allows you to wait on them:: class MySpider(Spider): ... async def parse(self, response): additional_request = scrapy.Request('https://example.org/price') deferred = self.crawler.engine.download(additional_request) additional_response = await deferred_to_future(deferred) .. versionadded:: 2.6.0 .. versionchanged:: VERSION This function no longer installs an asyncio loop if called before the Twisted asyncio reactor is installed. A :exc:`RuntimeError` is raised in this case. 
""" if not is_asyncio_available(): raise RuntimeError("deferred_to_future() requires AsyncioSelectorReactor.") return d.asFuture(asyncio.get_event_loop()) def maybe_deferred_to_future(d: Deferred[_T]) -> Deferred[_T] | Future[_T]: """Return *d* as an object that can be awaited from a :ref:`Scrapy callable defined as a coroutine <coroutine-support>`. What you can await in Scrapy callables defined as coroutines depends on the value of :setting:`TWISTED_REACTOR`: - When :ref:`using the asyncio reactor <install-asyncio>`, you can only await on :class:`asyncio.Future` objects. - When not using the asyncio reactor, you can only await on :class:`~twisted.internet.defer.Deferred` objects. If you want to write code that uses ``Deferred`` objects but works with any reactor, use this function on all ``Deferred`` objects:: class MySpider(Spider): ... async def parse(self, response): additional_request = scrapy.Request('https://example.org/price') deferred = self.crawler.engine.download(additional_request) additional_response = await maybe_deferred_to_future(deferred) .. versionadded:: 2.6.0 """ if not is_asyncio_available(): return d return deferred_to_future(d) def _schedule_coro(coro: Coroutine[Any, Any, Any]) -> None: """Schedule the coroutine as a task or a Deferred. This doesn't store the reference to the task/Deferred, so a better alternative is calling :func:`scrapy.utils.defer.deferred_from_coro`, keeping the result, and adding proper exception handling (e.g. errbacks) to it. """ if not is_asyncio_available(): Deferred.fromCoroutine(coro) return loop = asyncio.get_event_loop() loop.create_task(coro) # noqa: RUF006 @overload def ensure_awaitable(o: Awaitable[_T], _warn: str | None = None) -> Awaitable[_T]: ... @overload def ensure_awaitable(o: _T, _warn: str | None = None) -> Awaitable[_T]: ... def ensure_awaitable(o: _T | Awaitable[_T], _warn: str | None = None) -> Awaitable[_T]: """Convert any value to an awaitable object. 
For a :class:`~twisted.internet.defer.Deferred` object, use :func:`maybe_deferred_to_future` to wrap it into a suitable object. For an awaitable object of a different type, return it as is. For any other value, return a coroutine that completes with that value. .. versionadded:: VERSION """ if isinstance(o, Deferred): if _warn: warnings.warn( f"{_warn} returned a Deferred, this is deprecated." f" Please refactor this function to return a coroutine.", ScrapyDeprecationWarning, stacklevel=2, ) return maybe_deferred_to_future(o) if inspect.isawaitable(o): return o async def coro() -> _T: return o return coro()
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
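`iter_errback` in the defer.py row above is the one helper that needs no reactor. A simplified, Twisted-free sketch of the same wrap-and-report pattern — it passes the raw exception to the errback instead of a `twisted.python.failure.Failure`, which is an adaptation for this sketch:

```python
def iter_errback_plain(iterable, errback, *a, **kw):
    """Yield items from *iterable*; route exceptions raised while
    advancing the iterator to *errback* instead of the consumer."""
    it = iter(iterable)
    while True:
        try:
            yield next(it)
        except StopIteration:
            break
        except Exception as exc:
            errback(exc, *a, **kw)


def flaky():
    yield 1
    raise ValueError("boom")
    yield 2  # unreachable: a generator is closed after it raises


seen_errors = []
items = list(iter_errback_plain(flaky(), seen_errors.append))
```

Note that after a generator raises, further `next()` calls give `StopIteration`, so iteration ends after the first error here; `items` holds only the values yielded before it.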
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/ossignal.py
scrapy/utils/ossignal.py
from __future__ import annotations

import signal
from collections.abc import Callable
from types import FrameType
from typing import Any, TypeAlias

# copy of _HANDLER from typeshed/stdlib/signal.pyi
SignalHandlerT: TypeAlias = (
    Callable[[int, FrameType | None], Any] | int | signal.Handlers | None
)

signal_names: dict[int, str] = {}
for signame in dir(signal):
    if signame.startswith("SIG") and not signame.startswith("SIG_"):
        signum = getattr(signal, signame)
        if isinstance(signum, int):
            signal_names[signum] = signame


def install_shutdown_handlers(
    function: SignalHandlerT, override_sigint: bool = True
) -> None:
    """Install the given function as a signal handler for all common shutdown
    signals (such as SIGINT, SIGTERM, etc). If ``override_sigint`` is ``False``
    the SIGINT handler won't be installed if there is already a handler in
    place (e.g. Pdb)
    """
    signal.signal(signal.SIGTERM, function)
    if (
        signal.getsignal(signal.SIGINT)  # pylint: disable=comparison-with-callable
        == signal.default_int_handler
        or override_sigint
    ):
        signal.signal(signal.SIGINT, function)
    # Catch Ctrl-Break in windows
    if hasattr(signal, "SIGBREAK"):
        signal.signal(signal.SIGBREAK, function)
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
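The module-level loop in ossignal.py above builds a signum-to-name map by reflecting over the `signal` module while skipping `SIG_DFL`/`SIG_IGN`-style constants. It can be reproduced with the standard library alone to see what it yields:

```python
import signal

signal_names = {}
for signame in dir(signal):
    # keep real signal names, skip SIG_* handler constants
    if signame.startswith("SIG") and not signame.startswith("SIG_"):
        signum = getattr(signal, signame)
        if isinstance(signum, int):
            signal_names[signum] = signame
```

Because `dir()` is alphabetical, when two names alias the same number (e.g. `SIGABRT`/`SIGIOT` on Linux) the later name wins; unaliased signals like `SIGINT` map predictably.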
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/decorators.py
scrapy/utils/decorators.py
from __future__ import annotations

import inspect
import warnings
from functools import wraps
from typing import TYPE_CHECKING, Any, ParamSpec, TypeVar, overload

from twisted.internet.defer import Deferred, maybeDeferred
from twisted.internet.threads import deferToThread

from scrapy.exceptions import ScrapyDeprecationWarning

if TYPE_CHECKING:
    from collections.abc import AsyncGenerator, Callable, Coroutine

_T = TypeVar("_T")
_P = ParamSpec("_P")


def deprecated(
    use_instead: Any = None,
) -> Callable[[Callable[_P, _T]], Callable[_P, _T]]:
    """This is a decorator which can be used to mark functions as deprecated.
    It will result in a warning being emitted when the function is used."""

    def deco(func: Callable[_P, _T]) -> Callable[_P, _T]:
        @wraps(func)
        def wrapped(*args: _P.args, **kwargs: _P.kwargs) -> _T:
            message = f"Call to deprecated function {func.__name__}."
            if use_instead:
                message += f" Use {use_instead} instead."
            warnings.warn(message, category=ScrapyDeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)

        return wrapped

    if callable(use_instead):
        deco = deco(use_instead)
        use_instead = None
    return deco


def defers(func: Callable[_P, _T]) -> Callable[_P, Deferred[_T]]:  # pragma: no cover
    """Decorator to make sure a function always returns a deferred"""
    warnings.warn(
        "@defers is deprecated, you can use maybeDeferred() directly if needed.",
        category=ScrapyDeprecationWarning,
        stacklevel=2,
    )

    @wraps(func)
    def wrapped(*a: _P.args, **kw: _P.kwargs) -> Deferred[_T]:
        return maybeDeferred(func, *a, **kw)

    return wrapped


def inthread(func: Callable[_P, _T]) -> Callable[_P, Deferred[_T]]:
    """Decorator to call a function in a thread and return a deferred with the
    result
    """

    @wraps(func)
    def wrapped(*a: _P.args, **kw: _P.kwargs) -> Deferred[_T]:
        return deferToThread(func, *a, **kw)

    return wrapped


@overload
def _warn_spider_arg(
    func: Callable[_P, Coroutine[Any, Any, _T]],
) -> Callable[_P, Coroutine[Any, Any, _T]]: ...


@overload
def _warn_spider_arg(
    func: Callable[_P, AsyncGenerator[_T]],
) -> Callable[_P, AsyncGenerator[_T]]: ...


@overload
def _warn_spider_arg(func: Callable[_P, _T]) -> Callable[_P, _T]: ...


def _warn_spider_arg(
    func: Callable[_P, _T],
) -> (
    Callable[_P, _T]
    | Callable[_P, Coroutine[Any, Any, _T]]
    | Callable[_P, AsyncGenerator[_T]]
):
    """Decorator to warn if a ``spider`` argument is passed to a function."""
    sig = inspect.signature(func)

    def check_args(*args: _P.args, **kwargs: _P.kwargs) -> None:
        bound = sig.bind(*args, **kwargs)
        if "spider" in bound.arguments:
            warnings.warn(
                f"Passing a 'spider' argument to {func.__qualname__}() is deprecated and "
                "the argument will be removed in a future Scrapy version.",
                category=ScrapyDeprecationWarning,
                stacklevel=3,
            )

    if inspect.iscoroutinefunction(func):

        @wraps(func)
        async def async_inner(*args: _P.args, **kwargs: _P.kwargs) -> _T:
            check_args(*args, **kwargs)
            return await func(*args, **kwargs)

        return async_inner

    if inspect.isasyncgenfunction(func):

        @wraps(func)
        async def asyncgen_inner(
            *args: _P.args, **kwargs: _P.kwargs
        ) -> AsyncGenerator[_T]:
            check_args(*args, **kwargs)
            async for item in func(*args, **kwargs):
                yield item

        return asyncgen_inner

    @wraps(func)
    def sync_inner(*args: _P.args, **kwargs: _P.kwargs) -> _T:
        check_args(*args, **kwargs)
        return func(*args, **kwargs)

    return sync_inner
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
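The `deprecated` decorator above supports both `@deprecated` and `@deprecated('replacement')` through the trailing `callable(use_instead)` check: in the bare form the decorated function itself arrives as `use_instead` and is immediately re-dispatched. A stdlib-only sketch of that dual-form trick, using `DeprecationWarning` in place of Scrapy's `ScrapyDeprecationWarning`:

```python
import warnings
from functools import wraps


def deprecated(use_instead=None):
    def deco(func):
        @wraps(func)
        def wrapped(*args, **kwargs):
            message = f"Call to deprecated function {func.__name__}."
            if use_instead:
                message += f" Use {use_instead} instead."
            warnings.warn(message, category=DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)

        return wrapped

    # bare @deprecated: the function itself was passed in as use_instead
    if callable(use_instead):
        deco = deco(use_instead)
        use_instead = None  # the closure now reads None at call time
    return deco


@deprecated("new_api()")
def old_api():
    return 42


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()
```

Resetting `use_instead = None` after building the wrapper matters: `wrapped` closes over the variable, so without the reset the bare form would try to embed the function object in its own warning message.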
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/versions.py
scrapy/utils/versions.py
from __future__ import annotations

import platform
import sys
from importlib.metadata import version
from warnings import warn

import lxml.etree

from scrapy.exceptions import ScrapyDeprecationWarning
from scrapy.settings.default_settings import LOG_VERSIONS
from scrapy.utils.ssl import get_openssl_version

_DEFAULT_SOFTWARE = ["Scrapy", *LOG_VERSIONS]


def _version(item):
    lowercase_item = item.lower()
    if lowercase_item == "libxml2":
        return ".".join(map(str, lxml.etree.LIBXML_VERSION))
    if lowercase_item == "platform":
        return platform.platform()
    if lowercase_item == "pyopenssl":
        return get_openssl_version()
    if lowercase_item == "python":
        return sys.version.replace("\n", "- ")
    return version(item)


def get_versions(
    software: list | None = None,
) -> list[tuple[str, str]]:
    software = software or _DEFAULT_SOFTWARE
    return [(item, _version(item)) for item in software]


def scrapy_components_versions() -> list[tuple[str, str]]:
    warn(
        (
            "scrapy.utils.versions.scrapy_components_versions() is deprecated, "
            "use scrapy.utils.versions.get_versions() instead."
        ),
        ScrapyDeprecationWarning,
        stacklevel=2,
    )
    return get_versions()
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/httpobj.py
scrapy/utils/httpobj.py
"""Helper functions for scrapy.http objects (Request, Response)""" from __future__ import annotations from typing import TYPE_CHECKING from urllib.parse import ParseResult, urlparse from weakref import WeakKeyDictionary if TYPE_CHECKING: from scrapy.http import Request, Response _urlparse_cache: WeakKeyDictionary[Request | Response, ParseResult] = ( WeakKeyDictionary() ) def urlparse_cached(request_or_response: Request | Response) -> ParseResult: """Return urlparse.urlparse caching the result, where the argument can be a Request or Response object """ if request_or_response not in _urlparse_cache: _urlparse_cache[request_or_response] = urlparse(request_or_response.url) return _urlparse_cache[request_or_response]
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
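`urlparse_cached` above keys its cache on the request object itself via `WeakKeyDictionary`, so each entry disappears when its request is garbage-collected. The same pattern with a hypothetical `FakeRequest` stand-in (since `scrapy.http.Request` is not importable here):

```python
import gc
from urllib.parse import urlparse
from weakref import WeakKeyDictionary


class FakeRequest:
    """Hypothetical stand-in for scrapy.http.Request: just carries a URL."""

    def __init__(self, url):
        self.url = url


_urlparse_cache = WeakKeyDictionary()


def urlparse_cached(request):
    # parse once per object; the weak key lets the entry be collected
    # together with the request
    if request not in _urlparse_cache:
        _urlparse_cache[request] = urlparse(request.url)
    return _urlparse_cache[request]


req = FakeRequest("https://example.com/path?q=1")
first = urlparse_cached(req)
second = urlparse_cached(req)

before = len(_urlparse_cache)  # entry present while req is alive
del req
gc.collect()
after = len(_urlparse_cache)  # entry gone with its key
```

Caching on the object rather than the URL string avoids keeping parse results alive for requests that no longer exist, which matters when a crawl creates millions of short-lived requests.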
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/console.py
scrapy/utils/console.py
from __future__ import annotations

import code
from collections.abc import Callable
from functools import wraps
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    from collections.abc import Iterable

EmbedFuncT = Callable[..., None]
KnownShellsT = dict[str, Callable[..., EmbedFuncT]]


def _embed_ipython_shell(
    namespace: dict[str, Any] = {}, banner: str = ""
) -> EmbedFuncT:
    """Start an IPython Shell"""
    try:
        from IPython.terminal.embed import InteractiveShellEmbed  # noqa: T100,PLC0415
        from IPython.terminal.ipapp import load_default_config  # noqa: PLC0415
    except ImportError:
        from IPython.frontend.terminal.embed import (  # type: ignore[no-redef]  # noqa: T100,PLC0415
            InteractiveShellEmbed,
        )
        from IPython.frontend.terminal.ipapp import (  # type: ignore[no-redef]  # noqa: PLC0415
            load_default_config,
        )

    @wraps(_embed_ipython_shell)
    def wrapper(namespace: dict[str, Any] = namespace, banner: str = "") -> None:
        config = load_default_config()
        # Always use .instance() to ensure _instance propagation to all parents
        # this is needed for <TAB> completion works well for new imports
        # and clear the instance to always have the fresh env
        # on repeated breaks like with inspect_response()
        InteractiveShellEmbed.clear_instance()
        shell = InteractiveShellEmbed.instance(
            banner1=banner, user_ns=namespace, config=config
        )
        shell()

    return wrapper


def _embed_bpython_shell(
    namespace: dict[str, Any] = {}, banner: str = ""
) -> EmbedFuncT:
    """Start a bpython shell"""
    import bpython  # noqa: PLC0415

    @wraps(_embed_bpython_shell)
    def wrapper(namespace: dict[str, Any] = namespace, banner: str = "") -> None:
        bpython.embed(locals_=namespace, banner=banner)

    return wrapper


def _embed_ptpython_shell(
    namespace: dict[str, Any] = {}, banner: str = ""
) -> EmbedFuncT:
    """Start a ptpython shell"""
    import ptpython.repl  # noqa: PLC0415  # pylint: disable=import-error

    @wraps(_embed_ptpython_shell)
    def wrapper(namespace: dict[str, Any] = namespace, banner: str = "") -> None:
        print(banner)
        ptpython.repl.embed(locals=namespace)

    return wrapper


def _embed_standard_shell(
    namespace: dict[str, Any] = {}, banner: str = ""
) -> EmbedFuncT:
    """Start a standard python shell"""
    try:
        # readline module is only available on unix systems
        import readline  # noqa: PLC0415
    except ImportError:
        pass
    else:
        import rlcompleter  # noqa: F401,PLC0415

        readline.parse_and_bind("tab:complete")  # type: ignore[attr-defined]

    @wraps(_embed_standard_shell)
    def wrapper(namespace: dict[str, Any] = namespace, banner: str = "") -> None:
        code.interact(banner=banner, local=namespace)

    return wrapper


DEFAULT_PYTHON_SHELLS: KnownShellsT = {
    "ptpython": _embed_ptpython_shell,
    "ipython": _embed_ipython_shell,
    "bpython": _embed_bpython_shell,
    "python": _embed_standard_shell,
}


def get_shell_embed_func(
    shells: Iterable[str] | None = None, known_shells: KnownShellsT | None = None
) -> EmbedFuncT | None:
    """Return the first acceptable shell-embed function from a given list of
    shell names.
    """
    if shells is None:
        # list, preference order of shells
        shells = DEFAULT_PYTHON_SHELLS.keys()
    if known_shells is None:
        # available embeddable shells
        known_shells = DEFAULT_PYTHON_SHELLS.copy()
    for shell in shells:
        if shell in known_shells:
            try:
                # function test: run all setup code (imports),
                # but don't fall into the shell
                return known_shells[shell]()
            except ImportError:
                continue
    return None


def start_python_console(
    namespace: dict[str, Any] | None = None,
    banner: str = "",
    shells: Iterable[str] | None = None,
) -> None:
    """Start Python console bound to the given namespace.
    Readline support and tab completion will be used on Unix, if available.
    """
    if namespace is None:
        namespace = {}

    try:
        shell = get_shell_embed_func(shells)
        if shell is not None:
            shell(namespace=namespace, banner=banner)
    except SystemExit:
        # raised when using exit() in python code.interact
        pass
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
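`get_shell_embed_func` above walks a preference-ordered list of shell factories, running each factory's setup (its imports) and swallowing `ImportError` from missing packages. A condensed sketch of that fallback with hypothetical factories:

```python
def get_first_available(shells, known_shells):
    """Return the first factory from *shells* that sets up without ImportError."""
    for shell in shells:
        if shell in known_shells:
            try:
                # run the factory's setup (imports) without entering the shell
                return known_shells[shell]()
            except ImportError:
                continue
    return None


def _fancy_shell():
    raise ImportError("fancy shell not installed")  # simulate a missing package


def _plain_shell():
    return lambda: "plain"


known = {"fancy": _fancy_shell, "plain": _plain_shell}
embed = get_first_available(["fancy", "plain"], known)
```

Calling the factory eagerly is the point: it proves the shell's imports succeed before anything is handed back, so the caller never discovers a missing dependency mid-session.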
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/request.py
scrapy/utils/request.py
""" This module provides some useful functions for working with scrapy.Request objects """ from __future__ import annotations import hashlib import json from typing import TYPE_CHECKING, Any, Protocol from urllib.parse import urlunparse from weakref import WeakKeyDictionary from w3lib.url import canonicalize_url from scrapy import Request, Spider from scrapy.utils.httpobj import urlparse_cached from scrapy.utils.misc import load_object from scrapy.utils.python import to_bytes, to_unicode if TYPE_CHECKING: from collections.abc import Iterable # typing.Self requires Python 3.11 from typing_extensions import Self from scrapy.crawler import Crawler _fingerprint_cache: WeakKeyDictionary[ Request, dict[tuple[tuple[bytes, ...] | None, bool], bytes] ] = WeakKeyDictionary() def fingerprint( request: Request, *, include_headers: Iterable[bytes | str] | None = None, keep_fragments: bool = False, ) -> bytes: """ Return the request fingerprint. The request fingerprint is a hash that uniquely identifies the resource the request points to. For example, take the following two urls: ``http://www.example.com/query?id=111&cat=222``, ``http://www.example.com/query?cat=222&id=111``. Even though those are two different URLs both point to the same resource and are equivalent (i.e. they should return the same response). Another example are cookies used to store session ids. Suppose the following page is only accessible to authenticated users: ``http://www.example.com/members/offers.html``. Lots of sites use a cookie to store the session id, which adds a random component to the HTTP Request and thus should be ignored when calculating the fingerprint. For this reason, request headers are ignored by default when calculating the fingerprint. If you want to include specific headers use the include_headers argument, which is a list of Request headers to include. 
Also, servers usually ignore fragments in urls when handling requests, so they are also ignored by default when calculating the fingerprint. If you want to include them, set the keep_fragments argument to True (for instance when handling requests with a headless browser). """ processed_include_headers: tuple[bytes, ...] | None = None if include_headers: processed_include_headers = tuple( to_bytes(h.lower()) for h in sorted(include_headers) ) cache = _fingerprint_cache.setdefault(request, {}) cache_key = (processed_include_headers, keep_fragments) if cache_key not in cache: # To decode bytes reliably (JSON does not support bytes), regardless of # character encoding, we use bytes.hex() headers: dict[str, list[str]] = {} if processed_include_headers: for header in processed_include_headers: if header in request.headers: headers[header.hex()] = [ header_value.hex() for header_value in request.headers.getlist(header) ] fingerprint_data = { "method": to_unicode(request.method), "url": canonicalize_url(request.url, keep_fragments=keep_fragments), "body": (request.body or b"").hex(), "headers": headers, } fingerprint_json = json.dumps(fingerprint_data, sort_keys=True) cache[cache_key] = hashlib.sha1( # noqa: S324 fingerprint_json.encode() ).digest() return cache[cache_key] class RequestFingerprinterProtocol(Protocol): def fingerprint(self, request: Request) -> bytes: ... class RequestFingerprinter: """Default fingerprinter. It takes into account a canonical version (:func:`w3lib.url.canonicalize_url`) of :attr:`request.url <scrapy.Request.url>` and the values of :attr:`request.method <scrapy.Request.method>` and :attr:`request.body <scrapy.Request.body>`. It then generates an `SHA1 <https://en.wikipedia.org/wiki/SHA-1>`_ hash. 
""" @classmethod def from_crawler(cls, crawler: Crawler) -> Self: return cls(crawler) def __init__(self, crawler: Crawler | None = None): self._fingerprint = fingerprint def fingerprint(self, request: Request) -> bytes: return self._fingerprint(request) def request_httprepr(request: Request) -> bytes: """Return the raw HTTP representation (as bytes) of the given request. This is provided only for reference since it's not the actual stream of bytes that will be send when performing the request (that's controlled by Twisted). """ parsed = urlparse_cached(request) path = urlunparse(("", "", parsed.path or "/", parsed.params, parsed.query, "")) s = to_bytes(request.method) + b" " + to_bytes(path) + b" HTTP/1.1\r\n" s += b"Host: " + to_bytes(parsed.hostname or b"") + b"\r\n" if request.headers: s += request.headers.to_string() + b"\r\n" s += b"\r\n" s += request.body return s def referer_str(request: Request) -> str | None: """Return Referer HTTP header suitable for logging.""" referrer = request.headers.get("Referer") if referrer is None: return referrer return to_unicode(referrer, errors="replace") def request_from_dict(d: dict[str, Any], *, spider: Spider | None = None) -> Request: """Create a :class:`~scrapy.Request` object from a dict. If a spider is given, it will try to resolve the callbacks looking at the spider for methods with the same name. 
""" request_cls: type[Request] = load_object(d["_class"]) if "_class" in d else Request kwargs = {key: value for key, value in d.items() if key in request_cls.attributes} if d.get("callback") and spider: kwargs["callback"] = _get_method(spider, d["callback"]) if d.get("errback") and spider: kwargs["errback"] = _get_method(spider, d["errback"]) return request_cls(**kwargs) def _get_method(obj: Any, name: Any) -> Any: """Helper function for request_from_dict""" name = str(name) try: return getattr(obj, name) except AttributeError: raise ValueError(f"Method {name!r} not found in: {obj}") def request_to_curl(request: Request) -> str: """ Converts a :class:`~scrapy.Request` object to a curl command. :param :class:`~scrapy.Request`: Request object to be converted :return: string containing the curl command """ method = request.method data = f"--data-raw '{request.body.decode('utf-8')}'" if request.body else "" headers = " ".join( f"-H '{k.decode()}: {v[0].decode()}'" for k, v in request.headers.items() ) url = request.url cookies = "" if request.cookies: if isinstance(request.cookies, dict): cookie = "; ".join(f"{k}={v}" for k, v in request.cookies.items()) cookies = f"--cookie '{cookie}'" elif isinstance(request.cookies, list): cookie = "; ".join( f"{next(iter(c.keys()))}={next(iter(c.values()))}" for c in request.cookies ) cookies = f"--cookie '{cookie}'" curl_cmd = f"curl -X {method} {url} {data} {headers} {cookies}".strip() return " ".join(curl_cmd.split())
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/log.py
scrapy/utils/log.py
from __future__ import annotations import logging import pprint import sys from collections.abc import MutableMapping from logging.config import dictConfig from typing import TYPE_CHECKING, Any, cast from twisted.internet import asyncioreactor from twisted.python import log as twisted_log from twisted.python.failure import Failure import scrapy from scrapy.settings import Settings, _SettingsKey from scrapy.utils.versions import get_versions if TYPE_CHECKING: from types import TracebackType from scrapy.crawler import Crawler from scrapy.logformatter import LogFormatterResult logger = logging.getLogger(__name__) def failure_to_exc_info( failure: Failure, ) -> tuple[type[BaseException], BaseException, TracebackType | None] | None: """Extract exc_info from Failure instances""" if isinstance(failure, Failure): assert failure.type assert failure.value return ( failure.type, failure.value, cast("TracebackType | None", failure.getTracebackObject()), ) return None class TopLevelFormatter(logging.Filter): """Keep only top level loggers' name (direct children from root) from records. This filter will replace Scrapy loggers' names with 'scrapy'. This mimics the old Scrapy log behaviour and helps shortening long names. Since it can't be set for just one logger (it won't propagate for its children), it's going to be set in the root handler, with a parametrized ``loggers`` list where it should act. 
""" def __init__(self, loggers: list[str] | None = None): super().__init__() self.loggers: list[str] = loggers or [] def filter(self, record: logging.LogRecord) -> bool: if any(record.name.startswith(logger + ".") for logger in self.loggers): record.name = record.name.split(".", 1)[0] return True DEFAULT_LOGGING = { "version": 1, "disable_existing_loggers": False, "loggers": { "filelock": { "level": "ERROR", }, "hpack": { "level": "ERROR", }, "scrapy": { "level": "DEBUG", }, "twisted": { "level": "ERROR", }, }, } def configure_logging( settings: Settings | dict[_SettingsKey, Any] | None = None, install_root_handler: bool = True, ) -> None: """ Initialize logging defaults for Scrapy. :param settings: settings used to create and configure a handler for the root logger (default: None). :type settings: dict, :class:`~scrapy.settings.Settings` object or ``None`` :param install_root_handler: whether to install root logging handler (default: True) :type install_root_handler: bool This function does: - Route warnings and twisted logging through Python standard logging - Assign DEBUG and ERROR level to Scrapy and Twisted loggers respectively - Route stdout to log if LOG_STDOUT setting is True When ``install_root_handler`` is True (default), this function also creates a handler for the root logger according to given settings (see :ref:`topics-logging-settings`). You can override default options using ``settings`` argument. When ``settings`` is empty or None, defaults are used. 
""" if not sys.warnoptions: # Route warnings through python logging logging.captureWarnings(True) observer = twisted_log.PythonLoggingObserver("twisted") observer.start() dictConfig(DEFAULT_LOGGING) if isinstance(settings, dict) or settings is None: settings = Settings(settings) if settings.getbool("LOG_STDOUT"): sys.stdout = StreamLogger(logging.getLogger("stdout")) if install_root_handler: install_scrapy_root_handler(settings) _scrapy_root_handler: logging.Handler | None = None def install_scrapy_root_handler(settings: Settings) -> None: global _scrapy_root_handler # noqa: PLW0603 # pylint: disable=global-statement _uninstall_scrapy_root_handler() logging.root.setLevel(logging.NOTSET) _scrapy_root_handler = _get_handler(settings) logging.root.addHandler(_scrapy_root_handler) def _uninstall_scrapy_root_handler() -> None: global _scrapy_root_handler # noqa: PLW0603 # pylint: disable=global-statement if ( _scrapy_root_handler is not None and _scrapy_root_handler in logging.root.handlers ): logging.root.removeHandler(_scrapy_root_handler) _scrapy_root_handler = None def get_scrapy_root_handler() -> logging.Handler | None: return _scrapy_root_handler def _get_handler(settings: Settings) -> logging.Handler: """Return a log handler object according to settings""" filename = settings.get("LOG_FILE") handler: logging.Handler if filename: mode = "a" if settings.getbool("LOG_FILE_APPEND") else "w" encoding = settings.get("LOG_ENCODING") handler = logging.FileHandler(filename, mode=mode, encoding=encoding) elif settings.getbool("LOG_ENABLED"): handler = logging.StreamHandler() else: handler = logging.NullHandler() formatter = logging.Formatter( fmt=settings.get("LOG_FORMAT"), datefmt=settings.get("LOG_DATEFORMAT") ) handler.setFormatter(formatter) handler.setLevel(settings.get("LOG_LEVEL")) if settings.getbool("LOG_SHORT_NAMES"): handler.addFilter(TopLevelFormatter(["scrapy"])) return handler def log_scrapy_info(settings: Settings) -> None: logger.info( "Scrapy %(version)s 
started (bot: %(bot)s)", {"version": scrapy.__version__, "bot": settings["BOT_NAME"]}, ) software = settings.getlist("LOG_VERSIONS") if not software: return versions = pprint.pformat(dict(get_versions(software)), sort_dicts=False) logger.info(f"Versions:\n{versions}") def log_reactor_info() -> None: from twisted.internet import reactor logger.debug("Using reactor: %s.%s", reactor.__module__, reactor.__class__.__name__) if isinstance(reactor, asyncioreactor.AsyncioSelectorReactor): logger.debug( "Using asyncio event loop: %s.%s", reactor._asyncioEventloop.__module__, reactor._asyncioEventloop.__class__.__name__, ) class StreamLogger: """Fake file-like stream object that redirects writes to a logger instance Taken from: https://www.electricmonk.nl/log/2011/08/14/redirect-stdout-and-stderr-to-a-logger-in-python/ """ def __init__(self, logger: logging.Logger, log_level: int = logging.INFO): self.logger: logging.Logger = logger self.log_level: int = log_level self.linebuf: str = "" def write(self, buf: str) -> None: for line in buf.rstrip().splitlines(): self.logger.log(self.log_level, line.rstrip()) def flush(self) -> None: for h in self.logger.handlers: h.flush() class LogCounterHandler(logging.Handler): """Record log levels count into a crawler stats""" def __init__(self, crawler: Crawler, *args: Any, **kwargs: Any): super().__init__(*args, **kwargs) self.crawler: Crawler = crawler def emit(self, record: logging.LogRecord) -> None: sname = f"log_count/{record.levelname}" assert self.crawler.stats self.crawler.stats.inc_value(sname) def logformatter_adapter( logkws: LogFormatterResult, ) -> tuple[int, str, dict[str, Any] | tuple[Any, ...]]: """ Helper that takes the dictionary output from the methods in LogFormatter and adapts it into a tuple of positional arguments for logger.log calls, handling backward compatibility as well. 
""" level = logkws.get("level", logging.INFO) message = logkws.get("msg") or "" # NOTE: This also handles 'args' being an empty dict, that case doesn't # play well in logger.log calls args = cast("dict[str, Any]", logkws) if not logkws.get("args") else logkws["args"] return (level, message, args) class SpiderLoggerAdapter(logging.LoggerAdapter): def process( self, msg: str, kwargs: MutableMapping[str, Any] ) -> tuple[str, MutableMapping[str, Any]]: """Method that augments logging with additional 'extra' data""" if isinstance(kwargs.get("extra"), MutableMapping): kwargs["extra"].update(self.extra) else: kwargs["extra"] = self.extra return msg, kwargs
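`TopLevelFormatter` above needs nothing beyond stdlib `logging`, so its name-shortening behaviour is easy to see in isolation. Here is a self-contained copy of the filter applied directly to a hand-built `LogRecord` (the record construction is illustrative; in Scrapy the filter is attached to the root handler by `_get_handler()` when `LOG_SHORT_NAMES` is set):

```python
import logging

# Copy of TopLevelFormatter from the module above: any logger name under
# one of the listed prefixes is collapsed to its top-level package.
class TopLevelFormatter(logging.Filter):
    def __init__(self, loggers=None):
        super().__init__()
        self.loggers = loggers or []

    def filter(self, record):
        if any(record.name.startswith(lg + ".") for lg in self.loggers):
            record.name = record.name.split(".", 1)[0]
        return True  # never drops records, only renames them

record = logging.LogRecord(
    name="scrapy.core.engine", level=logging.INFO, pathname="", lineno=0,
    msg="spider opened", args=(), exc_info=None,
)
TopLevelFormatter(["scrapy"]).filter(record)
print(record.name)  # scrapy
```

Note the filter always returns `True`: it mutates `record.name` in place rather than suppressing anything, which is why it is safe to install on the root handler.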
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/deprecate.py
scrapy/utils/deprecate.py
"""Some helpers for deprecation messages""" from __future__ import annotations import inspect import warnings from typing import TYPE_CHECKING, Any, overload from scrapy.exceptions import ScrapyDeprecationWarning from scrapy.utils.python import get_func_args_dict if TYPE_CHECKING: from collections.abc import Callable def attribute(obj: Any, oldattr: str, newattr: str, version: str = "0.12") -> None: cname = obj.__class__.__name__ warnings.warn( f"{cname}.{oldattr} attribute is deprecated and will be no longer supported " f"in Scrapy {version}, use {cname}.{newattr} attribute instead", ScrapyDeprecationWarning, stacklevel=3, ) def create_deprecated_class( name: str, new_class: type, clsdict: dict[str, Any] | None = None, warn_category: type[Warning] = ScrapyDeprecationWarning, warn_once: bool = True, old_class_path: str | None = None, new_class_path: str | None = None, subclass_warn_message: str = "{cls} inherits from deprecated class {old}, please inherit from {new}.", instance_warn_message: str = "{cls} is deprecated, instantiate {new} instead.", ) -> type: """ Return a "deprecated" class that causes its subclasses to issue a warning. Subclasses of ``new_class`` are considered subclasses of this class. It also warns when the deprecated class is instantiated, but do not when its subclasses are instantiated. It can be used to rename a base class in a library. For example, if we have class OldName(SomeClass): # ... and we want to rename it to NewName, we can do the following:: class NewName(SomeClass): # ... OldName = create_deprecated_class('OldName', NewName) Then, if user class inherits from OldName, warning is issued. Also, if some code uses ``issubclass(sub, OldName)`` or ``isinstance(sub(), OldName)`` checks they'll still return True if sub is a subclass of NewName instead of OldName. 
""" # https://github.com/python/mypy/issues/4177 class DeprecatedClass(new_class.__class__): # type: ignore[misc, name-defined] # pylint: disable=no-self-argument deprecated_class: type | None = None warned_on_subclass: bool = False def __new__( # pylint: disable=bad-classmethod-argument metacls, name: str, bases: tuple[type, ...], clsdict_: dict[str, Any] ) -> type: cls = super().__new__(metacls, name, bases, clsdict_) if metacls.deprecated_class is None: metacls.deprecated_class = cls return cls def __init__(cls, name: str, bases: tuple[type, ...], clsdict_: dict[str, Any]): meta = cls.__class__ old = meta.deprecated_class if old in bases and not (warn_once and meta.warned_on_subclass): meta.warned_on_subclass = True msg = subclass_warn_message.format( cls=_clspath(cls), old=_clspath(old, old_class_path), new=_clspath(new_class, new_class_path), ) if warn_once: msg += " (warning only on first subclass, there may be others)" warnings.warn(msg, warn_category, stacklevel=2) super().__init__(name, bases, clsdict_) # see https://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass # and https://docs.python.org/reference/datamodel.html#customizing-instance-and-subclass-checks # for implementation details def __instancecheck__(cls, inst: Any) -> bool: return any(cls.__subclasscheck__(c) for c in (type(inst), inst.__class__)) def __subclasscheck__(cls, sub: type) -> bool: if cls is not DeprecatedClass.deprecated_class: # we should do the magic only if second `issubclass` argument # is the deprecated class itself - subclasses of the # deprecated class should not use custom `__subclasscheck__` # method. 
return super().__subclasscheck__(sub) if not inspect.isclass(sub): raise TypeError("issubclass() arg 1 must be a class") mro = getattr(sub, "__mro__", ()) return any(c in {cls, new_class} for c in mro) def __call__(cls, *args: Any, **kwargs: Any) -> Any: old = DeprecatedClass.deprecated_class if cls is old: msg = instance_warn_message.format( cls=_clspath(cls, old_class_path), new=_clspath(new_class, new_class_path), ) warnings.warn(msg, warn_category, stacklevel=2) return super().__call__(*args, **kwargs) deprecated_cls = DeprecatedClass(name, (new_class,), clsdict or {}) try: frm = inspect.stack()[1] parent_module = inspect.getmodule(frm[0]) if parent_module is not None: deprecated_cls.__module__ = parent_module.__name__ except Exception as e: # Sometimes inspect.stack() fails (e.g. when the first import of # deprecated class is in jinja2 template). __module__ attribute is not # important enough to raise an exception as users may be unable # to fix inspect.stack() errors. warnings.warn(f"Error detecting parent module: {e!r}") return deprecated_cls def _clspath(cls: type, forced: str | None = None) -> str: if forced is not None: return forced return f"{cls.__module__}.{cls.__name__}" DEPRECATION_RULES: list[tuple[str, str]] = [] @overload def update_classpath(path: str) -> str: ... @overload def update_classpath(path: Any) -> Any: ... def update_classpath(path: Any) -> Any: """Update a deprecated path from an object with its new location""" for prefix, replacement in DEPRECATION_RULES: if isinstance(path, str) and path.startswith(prefix): new_path = path.replace(prefix, replacement, 1) warnings.warn( f"`{path}` class is deprecated, use `{new_path}` instead", ScrapyDeprecationWarning, ) return new_path return path def method_is_overridden(subclass: type, base_class: type, method_name: str) -> bool: """ Return True if a method named ``method_name`` of a ``base_class`` is overridden in a ``subclass``. >>> class Base: ... def foo(self): ... 
pass >>> class Sub1(Base): ... pass >>> class Sub2(Base): ... def foo(self): ... pass >>> class Sub3(Sub1): ... def foo(self): ... pass >>> class Sub4(Sub2): ... pass >>> method_is_overridden(Base, Base, 'foo') False >>> method_is_overridden(Sub1, Base, 'foo') False >>> method_is_overridden(Sub2, Base, 'foo') True >>> method_is_overridden(Sub3, Base, 'foo') True >>> method_is_overridden(Sub4, Base, 'foo') True """ base_method = getattr(base_class, method_name) sub_method = getattr(subclass, method_name) return base_method.__code__ is not sub_method.__code__ def argument_is_required(func: Callable[..., Any], arg_name: str) -> bool: """ Check if a function argument is required (exists and doesn't have a default value). .. versionadded:: VERSION >>> def func(a, b=1, c=None): ... pass >>> argument_is_required(func, 'a') True >>> argument_is_required(func, 'b') False >>> argument_is_required(func, 'c') False >>> argument_is_required(func, 'd') False """ args = get_func_args_dict(func) param = args.get(arg_name) return param is not None and param.default is inspect.Parameter.empty def warn_on_deprecated_spider_attribute(attribute_name: str, setting_name: str) -> None: warnings.warn( f"The '{attribute_name}' spider attribute is deprecated. " "Use Spider.custom_settings or Spider.update_settings() instead. " f"The corresponding setting name is '{setting_name}'.", category=ScrapyDeprecationWarning, stacklevel=2, )
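`method_is_overridden()` above works by comparing code objects rather than attribute lookups, so it sees through inheritance: a subclass only counts as overriding when it defines its own body. A self-contained copy with a small spider-flavoured example (`Base`/`parse` are illustrative names, not Scrapy API):

```python
# Copy of method_is_overridden() from the module above: identical
# __code__ objects mean the method body was inherited unchanged.
def method_is_overridden(subclass, base_class, method_name):
    base_method = getattr(base_class, method_name)
    sub_method = getattr(subclass, method_name)
    return base_method.__code__ is not sub_method.__code__

class Base:
    def parse(self):
        pass

class PlainSub(Base):       # inherits parse() unchanged
    pass

class CustomSub(Base):      # provides its own parse() body
    def parse(self):
        pass

print(method_is_overridden(PlainSub, Base, "parse"))    # False
print(method_is_overridden(CustomSub, Base, "parse"))   # True
```

Comparing `__code__` identity is what makes the check survive an intermediate class in the MRO, as the `Sub3`/`Sub4` doctests above show.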
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/signal.py
scrapy/utils/signal.py
"""Helper functions for working with signals""" from __future__ import annotations import asyncio import logging import warnings from collections.abc import Awaitable, Callable, Generator, Sequence from typing import Any as TypingAny from pydispatch.dispatcher import ( Anonymous, Any, disconnect, getAllReceivers, liveReceivers, ) from pydispatch.robustapply import robustApply from twisted.internet.defer import Deferred, DeferredList, inlineCallbacks from twisted.python.failure import Failure from scrapy.exceptions import ScrapyDeprecationWarning, StopDownload from scrapy.utils.asyncio import is_asyncio_available from scrapy.utils.defer import ( ensure_awaitable, maybe_deferred_to_future, maybeDeferred_coro, ) from scrapy.utils.log import failure_to_exc_info from scrapy.utils.python import global_object_name logger = logging.getLogger(__name__) def send_catch_log( signal: TypingAny = Any, sender: TypingAny = Anonymous, *arguments: TypingAny, **named: TypingAny, ) -> list[tuple[TypingAny, TypingAny]]: """Like ``pydispatcher.robust.sendRobust()`` but it also logs errors and returns Failures instead of exceptions. 
""" dont_log = named.pop("dont_log", ()) dont_log = tuple(dont_log) if isinstance(dont_log, Sequence) else (dont_log,) dont_log += (StopDownload,) spider = named.get("spider") responses: list[tuple[TypingAny, TypingAny]] = [] for receiver in liveReceivers(getAllReceivers(sender, signal)): result: TypingAny try: response = robustApply( receiver, signal=signal, sender=sender, *arguments, **named ) if isinstance(response, Deferred): logger.error( "Cannot return deferreds from signal handler: %(receiver)s", {"receiver": receiver}, extra={"spider": spider}, ) except dont_log: result = Failure() except Exception: result = Failure() logger.error( "Error caught on signal handler: %(receiver)s", {"receiver": receiver}, exc_info=True, extra={"spider": spider}, ) else: result = response responses.append((receiver, result)) return responses def send_catch_log_deferred( signal: TypingAny = Any, sender: TypingAny = Anonymous, *arguments: TypingAny, **named: TypingAny, ) -> Deferred[list[tuple[TypingAny, TypingAny]]]: """Like :func:`send_catch_log` but supports :ref:`asynchronous signal handlers <signal-deferred>`. Returns a deferred that gets fired once all signal handlers have finished. 
""" warnings.warn( "send_catch_log_deferred() is deprecated, use send_catch_log_async() instead", ScrapyDeprecationWarning, stacklevel=2, ) return _send_catch_log_deferred(signal, sender, *arguments, **named) @inlineCallbacks def _send_catch_log_deferred( signal: TypingAny, sender: TypingAny, *arguments: TypingAny, **named: TypingAny, ) -> Generator[Deferred[TypingAny], TypingAny, list[tuple[TypingAny, TypingAny]]]: def logerror(failure: Failure, recv: TypingAny) -> Failure: if dont_log is None or not isinstance(failure.value, dont_log): logger.error( "Error caught on signal handler: %(receiver)s", {"receiver": recv}, exc_info=failure_to_exc_info(failure), extra={"spider": spider}, ) return failure dont_log = named.pop("dont_log", None) spider = named.get("spider") dfds: list[Deferred[tuple[TypingAny, TypingAny]]] = [] for receiver in liveReceivers(getAllReceivers(sender, signal)): d: Deferred[TypingAny] = maybeDeferred_coro( robustApply, receiver, signal=signal, sender=sender, *arguments, **named ) d.addErrback(logerror, receiver) # TODO https://pylint.readthedocs.io/en/latest/user_guide/messages/warning/cell-var-from-loop.html d2: Deferred[tuple[TypingAny, TypingAny]] = d.addBoth( lambda result: ( receiver, # pylint: disable=cell-var-from-loop # noqa: B023 result, ) ) dfds.append(d2) results = yield DeferredList(dfds) return [result[1] for result in results] async def send_catch_log_async( signal: TypingAny = Any, sender: TypingAny = Anonymous, *arguments: TypingAny, **named: TypingAny, ) -> list[tuple[TypingAny, TypingAny]]: """Like :func:`send_catch_log` but supports :ref:`asynchronous signal handlers <signal-deferred>`. Returns a coroutine that completes once all signal handlers have finished. .. 
versionadded:: VERSION """ # note that this returns exceptions instead of Failures in the second tuple member if is_asyncio_available(): return await _send_catch_log_asyncio(signal, sender, *arguments, **named) results = await maybe_deferred_to_future( _send_catch_log_deferred(signal, sender, *arguments, **named) ) return [ (receiver, result.value if isinstance(result, Failure) else result) for receiver, result in results ] async def _send_catch_log_asyncio( signal: TypingAny = Any, sender: TypingAny = Anonymous, *arguments: TypingAny, **named: TypingAny, ) -> list[tuple[TypingAny, TypingAny]]: """Like :func:`send_catch_log` but supports :ref:`asynchronous signal handlers <signal-deferred>`. Returns a coroutine that completes once all signal handlers have finished. This function requires :class:`~twisted.internet.asyncioreactor.AsyncioSelectorReactor` to be installed. .. versionadded:: VERSION """ dont_log = named.pop("dont_log", ()) dont_log = tuple(dont_log) if isinstance(dont_log, Sequence) else (dont_log,) spider = named.get("spider") handlers: list[Awaitable[TypingAny]] = [] for receiver in liveReceivers(getAllReceivers(sender, signal)): async def handler(receiver: Callable) -> TypingAny: result: TypingAny try: result = await ensure_awaitable( robustApply( receiver, signal=signal, sender=sender, *arguments, **named ), _warn=global_object_name(receiver), ) except dont_log as ex: # pylint: disable=catching-non-exception result = ex except Exception as ex: logger.error( "Error caught on signal handler: %(receiver)s", {"receiver": receiver}, exc_info=True, extra={"spider": spider}, ) result = ex return (receiver, result) handlers.append(handler(receiver)) return await asyncio.gather(*handlers, return_exceptions=True) def disconnect_all(signal: TypingAny = Any, sender: TypingAny = Any) -> None: """Disconnect all signal handlers. Useful for cleaning up after running tests. 
""" for receiver in liveReceivers(getAllReceivers(sender, signal)): disconnect(receiver, signal=signal, sender=sender)
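The core idea behind `send_catch_log()` above is that one failing signal handler must not prevent the others from running: each receiver is called in turn and failures are captured as values instead of propagating. The dependency-free sketch below illustrates only that error-isolation pattern; the names (`send_catch_errors`, `handlers`) are illustrative and this is not the pydispatch-based dispatch the real function performs:

```python
# Minimal sketch of the send_catch_log() pattern: call every handler,
# record an exception for the ones that fail, and return
# (receiver, result) pairs so the caller can inspect both.
def send_catch_errors(handlers, **named):
    responses = []
    for receiver in handlers:
        try:
            result = receiver(**named)
        except Exception as exc:   # mirrors the broad except in the original
            result = exc
        responses.append((receiver, result))
    return responses

def ok_handler(signal=None):
    return "handled"

def bad_handler(signal=None):
    raise RuntimeError("boom")

results = send_catch_errors([ok_handler, bad_handler], signal="item_scraped")
for receiver, result in results:
    print(receiver.__name__, "->", result)
# ok_handler -> handled
# bad_handler -> boom
```

The real implementation additionally wraps failures in Twisted `Failure` objects (or bare exceptions in the asyncio path), honours `dont_log`, and logs each error with the spider as context.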
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/_compression.py
scrapy/utils/_compression.py
import contextlib import zlib from io import BytesIO with contextlib.suppress(ImportError): try: import brotli except ImportError: import brotlicffi as brotli with contextlib.suppress(ImportError): import zstandard _CHUNK_SIZE = 65536 # 64 KiB class _DecompressionMaxSizeExceeded(ValueError): def __init__(self, decompressed_size: int, max_size: int) -> None: self.decompressed_size = decompressed_size self.max_size = max_size def __str__(self) -> str: return ( f"The number of bytes decompressed so far " f"({self.decompressed_size} B) exceeded the specified maximum " f"({self.max_size} B)." ) def _check_max_size(decompressed_size: int, max_size: int) -> None: if max_size and decompressed_size > max_size: raise _DecompressionMaxSizeExceeded(decompressed_size, max_size) def _inflate(data: bytes, *, max_size: int = 0) -> bytes: decompressor = zlib.decompressobj() try: first_chunk = decompressor.decompress(data, max_length=_CHUNK_SIZE) except zlib.error: # to work with raw deflate content that may be sent by microsoft servers. 
decompressor = zlib.decompressobj(wbits=-15) first_chunk = decompressor.decompress(data, max_length=_CHUNK_SIZE) decompressed_size = len(first_chunk) _check_max_size(decompressed_size, max_size) output_stream = BytesIO() output_stream.write(first_chunk) while decompressor.unconsumed_tail: output_chunk = decompressor.decompress( decompressor.unconsumed_tail, max_length=_CHUNK_SIZE ) decompressed_size += len(output_chunk) _check_max_size(decompressed_size, max_size) output_stream.write(output_chunk) if tail := decompressor.flush(): decompressed_size += len(tail) _check_max_size(decompressed_size, max_size) output_stream.write(tail) return output_stream.getvalue() def _unbrotli(data: bytes, *, max_size: int = 0) -> bytes: decompressor = brotli.Decompressor() first_chunk = decompressor.process(data, output_buffer_limit=_CHUNK_SIZE) decompressed_size = len(first_chunk) _check_max_size(decompressed_size, max_size) output_stream = BytesIO() output_stream.write(first_chunk) while not decompressor.is_finished(): output_chunk = decompressor.process(b"", output_buffer_limit=_CHUNK_SIZE) if not output_chunk: break decompressed_size += len(output_chunk) _check_max_size(decompressed_size, max_size) output_stream.write(output_chunk) return output_stream.getvalue() def _unzstd(data: bytes, *, max_size: int = 0) -> bytes: decompressor = zstandard.ZstdDecompressor() stream_reader = decompressor.stream_reader(BytesIO(data)) output_stream = BytesIO() output_chunk = b"." decompressed_size = 0 while output_chunk: output_chunk = stream_reader.read(_CHUNK_SIZE) decompressed_size += len(output_chunk) _check_max_size(decompressed_size, max_size) output_stream.write(output_chunk) return output_stream.getvalue()
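The chunked-decompression helpers above all follow the same shape: inflate at most `_CHUNK_SIZE` bytes at a time and abort as soon as the running total exceeds `max_size`, so a hostile "decompression bomb" never materialises fully in memory. A self-contained sketch of the zlib variant (simplified: it folds `_check_max_size` inline and omits the raw-deflate `wbits=-15` fallback):

```python
import zlib
from io import BytesIO

_CHUNK_SIZE = 65536  # 64 KiB, as in the module above

def bounded_inflate(data: bytes, max_size: int = 0) -> bytes:
    # Decompress in bounded chunks; unconsumed_tail holds the input
    # bytes not yet inflated because of the max_length cap.
    decompressor = zlib.decompressobj()
    out = BytesIO()
    chunk = decompressor.decompress(data, max_length=_CHUNK_SIZE)
    total = len(chunk)
    while True:
        if max_size and total > max_size:
            raise ValueError(f"decompressed {total} B exceeds limit {max_size} B")
        out.write(chunk)
        if not decompressor.unconsumed_tail:
            break
        chunk = decompressor.decompress(
            decompressor.unconsumed_tail, max_length=_CHUNK_SIZE
        )
        total += len(chunk)
    out.write(decompressor.flush())
    return out.getvalue()

payload = zlib.compress(b"x" * 200_000)  # a few hundred bytes compressed
print(len(bounded_inflate(payload)))     # 200000
try:
    bounded_inflate(payload, max_size=100_000)
except ValueError as e:
    print("rejected:", e)
```

The key API detail is `max_length` on `decompressobj().decompress()`, which caps the output per call and leaves the remaining input in `unconsumed_tail`; without it, a tiny payload could force an arbitrarily large allocation in one step.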
python
BSD-3-Clause
d1bd8eb49f7aba9289e4ff692006cead8bcd9080
2026-01-04T14:38:41.023839Z
false
scrapy/scrapy
https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/python.py
scrapy/utils/python.py
""" This module contains essential stuff that should've come with Python itself ;) """ from __future__ import annotations import gc import inspect import re import sys import weakref from collections.abc import AsyncIterator, Iterable, Mapping from functools import partial, wraps from itertools import chain from typing import TYPE_CHECKING, Any, Concatenate, ParamSpec, TypeVar, overload from scrapy.utils.asyncgen import as_async_generator if TYPE_CHECKING: from collections.abc import Callable, Iterator from re import Pattern # typing.Self requires Python 3.11 from typing_extensions import Self _T = TypeVar("_T") _KT = TypeVar("_KT") _VT = TypeVar("_VT") _P = ParamSpec("_P") def is_listlike(x: Any) -> bool: """ >>> is_listlike("foo") False >>> is_listlike(5) False >>> is_listlike(b"foo") False >>> is_listlike([b"foo"]) True >>> is_listlike((b"foo",)) True >>> is_listlike({}) True >>> is_listlike(set()) True >>> is_listlike((x for x in range(3))) True >>> is_listlike(range(5)) True """ return hasattr(x, "__iter__") and not isinstance(x, (str, bytes)) def unique(list_: Iterable[_T], key: Callable[[_T], Any] = lambda x: x) -> list[_T]: """efficient function to uniquify a list preserving item order""" seen = set() result: list[_T] = [] for item in list_: seenkey = key(item) if seenkey in seen: continue seen.add(seenkey) result.append(item) return result def to_unicode( text: str | bytes, encoding: str | None = None, errors: str = "strict" ) -> str: """Return the unicode representation of a bytes object ``text``. If ``text`` is already an unicode object, return it as-is.""" if isinstance(text, str): return text if not isinstance(text, (bytes, str)): raise TypeError( f"to_unicode must receive a bytes or str object, got {type(text).__name__}" ) if encoding is None: encoding = "utf-8" return text.decode(encoding, errors) def to_bytes( text: str | bytes, encoding: str | None = None, errors: str = "strict" ) -> bytes: """Return the binary representation of ``text``. 
If ``text`` is already a bytes object, return it as-is.""" if isinstance(text, bytes): return text if not isinstance(text, str): raise TypeError( f"to_bytes must receive a str or bytes object, got {type(text).__name__}" ) if encoding is None: encoding = "utf-8" return text.encode(encoding, errors) def re_rsearch( pattern: str | Pattern[str], text: str, chunk_size: int = 1024 ) -> tuple[int, int] | None: """ This function does a reverse search in a text using a regular expression given in the attribute 'pattern'. Since the re module does not provide this functionality, we have to find for the expression into chunks of text extracted from the end (for the sake of efficiency). At first, a chunk of 'chunk_size' kilobytes is extracted from the end, and searched for the pattern. If the pattern is not found, another chunk is extracted, and another search is performed. This process continues until a match is found, or until the whole file is read. In case the pattern wasn't found, None is returned, otherwise it returns a tuple containing the start position of the match, and the ending (regarding the entire text). 
""" def _chunk_iter() -> Iterable[tuple[str, int]]: offset = len(text) while True: offset -= chunk_size * 1024 if offset <= 0: break yield (text[offset:], offset) yield (text, 0) if isinstance(pattern, str): pattern = re.compile(pattern) for chunk, offset in _chunk_iter(): matches = list(pattern.finditer(chunk)) if matches: start, end = matches[-1].span() return offset + start, offset + end return None _SelfT = TypeVar("_SelfT") def memoizemethod_noargs( method: Callable[Concatenate[_SelfT, _P], _T], ) -> Callable[Concatenate[_SelfT, _P], _T]: """Decorator to cache the result of a method (without arguments) using a weak reference to its object """ cache: weakref.WeakKeyDictionary[_SelfT, _T] = weakref.WeakKeyDictionary() @wraps(method) def new_method(self: _SelfT, *args: _P.args, **kwargs: _P.kwargs) -> _T: if self not in cache: cache[self] = method(self, *args, **kwargs) return cache[self] return new_method _BINARYCHARS = { i for i in range(32) if to_bytes(chr(i)) not in {b"\0", b"\t", b"\n", b"\r"} } def binary_is_text(data: bytes) -> bool: """Returns ``True`` if the given ``data`` argument (a ``bytes`` object) does not contain unprintable control characters. """ if not isinstance(data, bytes): raise TypeError(f"data must be bytes, got '{type(data).__name__}'") return all(c not in _BINARYCHARS for c in data) def get_func_args_dict( func: Callable[..., Any], stripself: bool = False ) -> Mapping[str, inspect.Parameter]: """Return the argument dict of a callable object. .. 
versionadded:: VERSION """ if not callable(func): raise TypeError(f"func must be callable, got '{type(func).__name__}'") args: Mapping[str, inspect.Parameter] try: sig = inspect.signature(func) except ValueError: return {} if isinstance(func, partial): partial_args = func.args partial_kw = func.keywords args = {} for name, param in sig.parameters.items(): if name in partial_args: continue if partial_kw and name in partial_kw: continue args[name] = param else: args = sig.parameters if stripself and args and "self" in args: args = {k: v for k, v in args.items() if k != "self"} return args def get_func_args(func: Callable[..., Any], stripself: bool = False) -> list[str]: """Return the argument name list of a callable object""" return list(get_func_args_dict(func, stripself=stripself)) def get_spec(func: Callable[..., Any]) -> tuple[list[str], dict[str, Any]]: """Returns (args, kwargs) tuple for a function >>> import re >>> get_spec(re.match) (['pattern', 'string'], {'flags': 0}) >>> class Test: ... def __call__(self, val): ... pass ... def method(self, val, flags=0): ... pass >>> get_spec(Test) (['self', 'val'], {}) >>> get_spec(Test.method) (['self', 'val'], {'flags': 0}) >>> get_spec(Test().method) (['self', 'val'], {'flags': 0}) """ if inspect.isfunction(func) or inspect.ismethod(func): spec = inspect.getfullargspec(func) elif hasattr(func, "__call__"): # noqa: B004 spec = inspect.getfullargspec(func.__call__) else: raise TypeError(f"{type(func)} is not callable") defaults: tuple[Any, ...] = spec.defaults or () firstdefault = len(spec.args) - len(defaults) args = spec.args[:firstdefault] kwargs = dict(zip(spec.args[firstdefault:], defaults, strict=False)) return args, kwargs @overload def without_none_values(iterable: Mapping[_KT, _VT]) -> dict[_KT, _VT]: ... @overload def without_none_values(iterable: Iterable[_KT]) -> Iterable[_KT]: ... 
def without_none_values(
    iterable: Mapping[_KT, _VT] | Iterable[_KT],
) -> dict[_KT, _VT] | Iterable[_KT]:
    """Return a copy of ``iterable`` with all ``None`` entries removed.

    If ``iterable`` is a mapping, return a dictionary where all pairs that
    have value ``None`` have been removed.
    """
    if isinstance(iterable, Mapping):
        return {k: v for k, v in iterable.items() if v is not None}
    # the iterable __init__ must take another iterable
    return type(iterable)(v for v in iterable if v is not None)  # type: ignore[call-arg]


def global_object_name(obj: Any) -> str:
    """Return the full import path of the given object.

    >>> from scrapy import Request
    >>> global_object_name(Request)
    'scrapy.http.request.Request'
    >>> global_object_name(Request.replace)
    'scrapy.http.request.Request.replace'
    """
    return f"{obj.__module__}.{obj.__qualname__}"


if hasattr(sys, "pypy_version_info"):

    def garbage_collect() -> None:
        # Collecting weakreferences can take two collections on PyPy.
        gc.collect()
        gc.collect()

else:

    def garbage_collect() -> None:
        gc.collect()


class MutableChain(Iterable[_T]):
    """
    Thin wrapper around itertools.chain, allowing to add iterables "in-place"
    """

    def __init__(self, *args: Iterable[_T]):
        self.data: Iterator[_T] = chain.from_iterable(args)

    def extend(self, *iterables: Iterable[_T]) -> None:
        self.data = chain(self.data, chain.from_iterable(iterables))

    def __iter__(self) -> Iterator[_T]:
        return self

    def __next__(self) -> _T:
        return next(self.data)


async def _async_chain(
    *iterables: Iterable[_T] | AsyncIterator[_T],
) -> AsyncIterator[_T]:
    for it in iterables:
        async for o in as_async_generator(it):
            yield o


class MutableAsyncChain(AsyncIterator[_T]):
    """
    Similar to MutableChain but for async iterables
    """

    def __init__(self, *args: Iterable[_T] | AsyncIterator[_T]):
        self.data: AsyncIterator[_T] = _async_chain(*args)

    def extend(self, *iterables: Iterable[_T] | AsyncIterator[_T]) -> None:
        self.data = _async_chain(self.data, _async_chain(*iterables))

    def __aiter__(self) -> Self:
        return self

    async def __anext__(self) -> _T:
        return await self.data.__anext__()
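The point of `MutableChain` is that callers may append more iterables *while the chain is already being consumed*: re-wrapping the partially consumed iterator with `itertools.chain` preserves the remaining items. A minimal standalone sketch of that technique (class name `SimpleMutableChain` is illustrative, not part of Scrapy):

```python
from itertools import chain


class SimpleMutableChain:
    """Iterator over several iterables that can grow while being consumed."""

    def __init__(self, *iterables):
        self._data = chain.from_iterable(iterables)

    def extend(self, *iterables):
        # Re-chain whatever remains with the newly added iterables.
        self._data = chain(self._data, chain.from_iterable(iterables))

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._data)


mc = SimpleMutableChain([1, 2], [3])
first = next(mc)   # consume one item...
mc.extend([4, 5])  # ...then extend mid-iteration
rest = list(mc)    # remaining items, including the extension
```

Because `extend` rebuilds `self._data` from the live iterator, no items are lost or replayed; `first` is `1` and `rest` is `[2, 3, 4, 5]`.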
language: python
license: BSD-3-Clause
commit_sha: d1bd8eb49f7aba9289e4ff692006cead8bcd9080
retrieved_at: 2026-01-04T14:38:41.023839Z
truncated: false

repo: scrapy/scrapy
file_url: https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/curl.py
file_path: scrapy/utils/curl.py
from __future__ import annotations

import argparse
import warnings
from http.cookies import SimpleCookie
from shlex import split
from typing import TYPE_CHECKING, Any, NoReturn
from urllib.parse import urlparse

from w3lib.http import basic_auth_header

if TYPE_CHECKING:
    from collections.abc import Sequence


class DataAction(argparse.Action):
    def __call__(
        self,
        parser: argparse.ArgumentParser,
        namespace: argparse.Namespace,
        values: str | Sequence[Any] | None,
        option_string: str | None = None,
    ) -> None:
        value = str(values)
        value = value.removeprefix("$")
        setattr(namespace, self.dest, value)


class CurlParser(argparse.ArgumentParser):
    def error(self, message: str) -> NoReturn:
        error_msg = f"There was an error parsing the curl command: {message}"
        raise ValueError(error_msg)


curl_parser = CurlParser()
curl_parser.add_argument("url")
curl_parser.add_argument("-H", "--header", dest="headers", action="append")
curl_parser.add_argument("-X", "--request", dest="method")
curl_parser.add_argument("-b", "--cookie", dest="cookies", action="append")
curl_parser.add_argument("-d", "--data", "--data-raw", dest="data", action=DataAction)
curl_parser.add_argument("-u", "--user", dest="auth")


safe_to_ignore_arguments = [
    ["--compressed"],
    # `--compressed` argument is not safe to ignore, but it's included here
    # because the `HttpCompressionMiddleware` is enabled by default
    ["-s", "--silent"],
    ["-v", "--verbose"],
    ["-#", "--progress-bar"],
]
for argument in safe_to_ignore_arguments:
    curl_parser.add_argument(*argument, action="store_true")


def _parse_headers_and_cookies(
    parsed_args: argparse.Namespace,
) -> tuple[list[tuple[str, bytes]], dict[str, str]]:
    headers: list[tuple[str, bytes]] = []
    cookies: dict[str, str] = {}
    for header in parsed_args.headers or ():
        name, val = header.split(":", 1)
        name = name.strip()
        val = val.strip()
        if name.title() == "Cookie":
            for name, morsel in SimpleCookie(val).items():
                cookies[name] = morsel.value
        else:
            headers.append((name, val))

    for cookie_param in parsed_args.cookies or ():
        # curl can treat this parameter as either "key=value; key2=value2"
        # pairs, or a filename. Scrapy will only support key-value pairs.
        if "=" not in cookie_param:
            continue
        for name, morsel in SimpleCookie(cookie_param).items():
            cookies[name] = morsel.value

    if parsed_args.auth:
        user, password = parsed_args.auth.split(":", 1)
        headers.append(("Authorization", basic_auth_header(user, password)))

    return headers, cookies


def curl_to_request_kwargs(
    curl_command: str, ignore_unknown_options: bool = True
) -> dict[str, Any]:
    """Convert a cURL command syntax to Request kwargs.

    :param str curl_command: string containing the curl command
    :param bool ignore_unknown_options: If true, only a warning is emitted when
                                        cURL options are unknown. Otherwise
                                        raises an error. (default: True)
    :return: dictionary of Request kwargs
    """
    curl_args = split(curl_command)

    if curl_args[0] != "curl":
        raise ValueError('A curl command must start with "curl"')

    parsed_args, argv = curl_parser.parse_known_args(curl_args[1:])

    if argv:
        msg = f"Unrecognized options: {', '.join(argv)}"
        if ignore_unknown_options:
            warnings.warn(msg)
        else:
            raise ValueError(msg)

    url = parsed_args.url

    # curl automatically prepends 'http' if the scheme is missing, but Request
    # needs the scheme to work
    parsed_url = urlparse(url)
    if not parsed_url.scheme:
        url = "http://" + url

    method = parsed_args.method or "GET"

    result: dict[str, Any] = {"method": method.upper(), "url": url}

    headers, cookies = _parse_headers_and_cookies(parsed_args)

    if headers:
        result["headers"] = headers
    if cookies:
        result["cookies"] = cookies
    if parsed_args.data:
        result["body"] = parsed_args.data
        if not parsed_args.method:
            # if the "data" is specified but the "method" is not specified,
            # the default method is 'POST'
            result["method"] = "POST"

    return result
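The core trick in this module is reusing `argparse` as a parser for a foreign CLI: tokenize the curl command with `shlex.split`, then feed everything after the leading `curl` token to `parse_known_args`, which collects undeclared options instead of erroring. A self-contained sketch of that technique with a deliberately tiny option set (stdlib only, no Scrapy or w3lib):

```python
import argparse
from shlex import split

# A minimal stand-in for curl_parser: just -X, -H, and the positional URL.
parser = argparse.ArgumentParser()
parser.add_argument("url")
parser.add_argument("-H", "--header", dest="headers", action="append")
parser.add_argument("-X", "--request", dest="method")

cmd = "curl -X POST -H 'Accept: application/json' https://example.com/api"
tokens = split(cmd)  # shell-style tokenization handles the quoted header
assert tokens[0] == "curl"

# parse_known_args() returns unrecognized tokens rather than raising,
# which is what makes the ignore_unknown_options behavior possible.
args, unknown = parser.parse_known_args(tokens[1:])
```

Here `args.method` is `"POST"`, `args.url` is `"https://example.com/api"`, `args.headers` is `["Accept: application/json"]`, and `unknown` is empty; an option like `--compressed` would land in `unknown` instead of aborting the parse.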
repo: scrapy/scrapy
file_url: https://github.com/scrapy/scrapy/blob/d1bd8eb49f7aba9289e4ff692006cead8bcd9080/scrapy/utils/trackref.py
file_path: scrapy/utils/trackref.py
"""This module provides some functions and classes to record and report references to live object instances. If you want live objects for a particular class to be tracked, you only have to subclass from object_ref (instead of object). About performance: This library has a minimal performance impact when enabled, and no performance penalty at all when disabled (as object_ref becomes just an alias to object in that case). """ from __future__ import annotations from collections import defaultdict from operator import itemgetter from time import time from typing import TYPE_CHECKING, Any from weakref import WeakKeyDictionary if TYPE_CHECKING: from collections.abc import Iterable # typing.Self requires Python 3.11 from typing_extensions import Self NoneType = type(None) live_refs: defaultdict[type, WeakKeyDictionary] = defaultdict(WeakKeyDictionary) class object_ref: """Inherit from this class to a keep a record of live instances""" __slots__ = () def __new__(cls, *args: Any, **kwargs: Any) -> Self: obj = object.__new__(cls) live_refs[cls][obj] = time() return obj # using Any as it's hard to type type(None) def format_live_refs(ignore: Any = NoneType) -> str: """Return a tabular representation of tracked objects""" s = "Live References\n\n" now = time() for cls, wdict in sorted(live_refs.items(), key=lambda x: x[0].__name__): if not wdict: continue if issubclass(cls, ignore): continue oldest = min(wdict.values()) s += f"{cls.__name__:<30} {len(wdict):6} oldest: {int(now - oldest)}s ago\n" return s def print_live_refs(*a: Any, **kw: Any) -> None: """Print tracked objects""" print(format_live_refs(*a, **kw)) def get_oldest(class_name: str) -> Any: """Get the oldest object for a specific class name""" for cls, wdict in live_refs.items(): if cls.__name__ == class_name: if not wdict: break return min(wdict.items(), key=itemgetter(1))[0] return None def iter_all(class_name: str) -> Iterable[Any]: """Iterate over all objects of the same class by its class name""" for cls, 
wdict in live_refs.items(): if cls.__name__ == class_name: return wdict.keys() return []
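The tracking works because `__new__` records each instance's creation time in a `WeakKeyDictionary`: the registry never keeps an instance alive, so entries vanish as soon as the instance is garbage-collected. A minimal self-contained sketch of the same idea (class names `Tracked`/`Spiderish` are illustrative, not Scrapy's):

```python
import gc
from collections import defaultdict
from time import time
from weakref import WeakKeyDictionary

# class -> {instance (weak key): creation timestamp}
live = defaultdict(WeakKeyDictionary)


class Tracked:
    def __new__(cls, *args, **kwargs):
        obj = super().__new__(cls)
        live[cls][obj] = time()  # weak key: does not keep obj alive
        return obj


class Spiderish(Tracked):
    pass


a = Spiderish()
b = Spiderish()
assert len(live[Spiderish]) == 2

del a
gc.collect()  # on PyPy a second collection pass may be needed
assert len(live[Spiderish]) == 1  # only b is still tracked
```

Note that a class using this pattern must remain weak-referenceable, which is why `object_ref` declares empty `__slots__` rather than adding any state of its own.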