def register_type(klass, type_url=None):
    """Register a klass as the factory for a given type URL.

    :type klass: :class:`type`
    :param klass: class to be used as a factory for the given type

    :type type_url: str
    :param type_url: (Optional) URL naming the type. If not provided,
                     infers the URL from the type descriptor.

    :raises ValueError: if a registration already exists for the URL.
    """
    if type_url is None:
        type_url = _compute_type_url(klass)
    if type_url in _TYPE_URL_MAP:
        if _TYPE_URL_MAP[type_url] is not klass:
            raise ValueError("Conflict: %s" % (_TYPE_URL_MAP[type_url],))

    _TYPE_URL_MAP[type_url] = klass
def _from_any(any_pb):
    """Convert an ``Any`` protobuf into the actual class.

    Uses the type URL to do the conversion.

    .. note:: This assumes that the type URL is already registered.

    :type any_pb: :class:`google.protobuf.any_pb2.Any`
    :param any_pb: An any object to be converted.

    :rtype: object
    :returns: The instance (of the correct type) stored in the any
              instance.
    """
    klass = _TYPE_URL_MAP[any_pb.type_url]
    return klass.FromString(any_pb.value)
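For orientation, a minimal sketch of how ``register_type`` and ``_from_any`` compose, assuming ``_compute_type_url`` yields the same ``type.googleapis.com/<full name>`` URL that ``Any.Pack`` writes:

from google.protobuf import any_pb2, struct_pb2

# Register Struct; the type URL is inferred from the message descriptor.
register_type(struct_pb2.Struct)

# Pack a Struct into an Any, then recover the typed message.
original = struct_pb2.Struct(fields={"n": struct_pb2.Value(number_value=1)})
any_pb = any_pb2.Any()
any_pb.Pack(original)

recovered = _from_any(any_pb)  # a struct_pb2.Struct equal to `original`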
def from_pb(cls, operation_pb, client, **caller_metadata):
    """Factory: construct an instance from a protobuf.

    :type operation_pb:
        :class:`~google.longrunning.operations_pb2.Operation`
    :param operation_pb: Protobuf to be parsed.

    :type client: object
    :param client: The client used to poll for the status of the
                   operation; must provide an ``_operations_stub``
                   accessor.

    :type caller_metadata: dict
    :param caller_metadata: caller-assigned metadata about the operation

    :rtype: :class:`Operation`
    :returns: new instance, with attributes based on the protobuf.
    """
    result = cls(operation_pb.name, client, **caller_metadata)
    result._update_state(operation_pb)
    result._from_grpc = True
    return result
def from_dict(cls, operation, client, **caller_metadata):
    """Factory: construct an instance from a dictionary.

    :type operation: dict
    :param operation: Operation as a JSON object.

    :type client: :class:`~google.cloud.client.Client`
    :param client: The client used to poll for the status of the
                   operation.

    :type caller_metadata: dict
    :param caller_metadata: caller-assigned metadata about the operation

    :rtype: :class:`Operation`
    :returns: new instance, with attributes based on the protobuf.
    """
    operation_pb = json_format.ParseDict(operation, operations_pb2.Operation())
    result = cls(operation_pb.name, client, **caller_metadata)
    result._update_state(operation_pb)
    result._from_grpc = False
    return result
def _get_operation_rpc(self):
    """Polls the status of the current operation.

    Uses gRPC request to check.

    :rtype: :class:`~google.longrunning.operations_pb2.Operation`
    :returns: The latest status of the current operation.
    """
    request_pb = operations_pb2.GetOperationRequest(name=self.name)
    return self.client._operations_stub.GetOperation(request_pb)
def _get_operation_http(self):
    """Checks the status of the current operation.

    Uses HTTP request to check.

    :rtype: :class:`~google.longrunning.operations_pb2.Operation`
    :returns: The latest status of the current operation.
    """
    path = "operations/%s" % (self.name,)
    api_response = self.client._connection.api_request(method="GET", path=path)
    return json_format.ParseDict(api_response, operations_pb2.Operation())
def _update_state(self, operation_pb):
    """Update the state of the current object based on operation.

    :type operation_pb:
        :class:`~google.longrunning.operations_pb2.Operation`
    :param operation_pb: Protobuf to be parsed.
    """
    if operation_pb.done:
        self._complete = True

    if operation_pb.HasField("metadata"):
        self.metadata = _from_any(operation_pb.metadata)

    result_type = operation_pb.WhichOneof("result")
    if result_type == "error":
        self.error = operation_pb.error
    elif result_type == "response":
        self.response = _from_any(operation_pb.response)
def poll(self):
    """Check if the operation has finished.

    :rtype: bool
    :returns: A boolean indicating if the current operation has completed.
    :raises ValueError: if the operation has already completed.
    """
    if self.complete:
        raise ValueError("The operation has completed.")

    operation_pb = self._get_operation()
    self._update_state(operation_pb)

    return self.complete
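Taken together, a hedged usage sketch of the polling loop. ``operation`` stands in for a freshly built instance (via ``from_pb`` or ``from_dict``), and the ``error`` / ``response`` attributes are assumed to default to ``None`` until ``_update_state`` sets them:

import time

while not operation.poll():
    time.sleep(1)  # back off between status checks

if operation.error is not None:
    raise RuntimeError("operation failed: %s" % (operation.error,))
result = operation.response  # already unpacked by _from_any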
def _parse_rmw_row_response(row_response):
    """Parses the response to a ``ReadModifyWriteRow`` request.

    :type row_response: :class:`.data_v2_pb2.Row`
    :param row_response: The response row (with only modified cells) from a
                         ``ReadModifyWriteRow`` request.

    :rtype: dict
    :returns: The new contents of all modified cells. Returned as a
              dictionary of column families, each of which holds a
              dictionary of columns. Each column contains a list of cells
              modified. Each cell is represented with a two-tuple with the
              value (in bytes) and the timestamp for the cell.

              For example:

              .. code:: python

                  {
                      u'col-fam-id': {
                          b'col-name1': [
                              (b'cell-val', datetime.datetime(...)),
                              (b'cell-val-newer', datetime.datetime(...)),
                          ],
                          b'col-name2': [
                              (b'altcol-cell-val', datetime.datetime(...)),
                          ],
                      },
                      u'col-fam-id2': {
                          b'col-name3-but-other-fam': [
                              (b'foo', datetime.datetime(...)),
                          ],
                      },
                  }
    """
    result = {}
    for column_family in row_response.row.families:
        column_family_id, curr_family = _parse_family_pb(column_family)
        result[column_family_id] = curr_family
    return result
def _parse_family_pb(family_pb):
    """Parses a Family protobuf into a dictionary.

    :type family_pb: :class:`._generated.data_pb2.Family`
    :param family_pb: A protobuf

    :rtype: tuple
    :returns: A string and dictionary. The string is the name of the
              column family and the dictionary has column names (within the
              family) as keys and cell lists as values. Each cell is
              represented with a two-tuple with the value (in bytes) and the
              timestamp for the cell. For example:

              .. code:: python

                  {
                      b'col-name1': [
                          (b'cell-val', datetime.datetime(...)),
                          (b'cell-val-newer', datetime.datetime(...)),
                      ],
                      b'col-name2': [
                          (b'altcol-cell-val', datetime.datetime(...)),
                      ],
                  }
    """
    result = {}
    for column in family_pb.columns:
        result[column.qualifier] = cells = []
        for cell in column.cells:
            val_pair = (
                cell.value,
                _datetime_from_microseconds(cell.timestamp_micros),
            )
            cells.append(val_pair)

    return family_pb.name, result
def _set_cell(self, column_family_id, column, value, timestamp=None, state=None):
    """Helper for :meth:`set_cell`.

    Adds a mutation to set the value in a specific cell.

    ``state`` is unused by :class:`DirectRow` but is used by
    subclasses.

    :type column_family_id: str
    :param column_family_id: The column family that contains the column.
                             Must be of the form
                             ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``.

    :type column: bytes
    :param column: The column within the column family where the cell
                   is located.

    :type value: bytes or :class:`int`
    :param value: The value to set in the cell. If an integer is used,
                  will be interpreted as a 64-bit big-endian signed
                  integer (8 bytes).

    :type timestamp: :class:`datetime.datetime`
    :param timestamp: (Optional) The timestamp of the operation.

    :type state: bool
    :param state: (Optional) The state that is passed along to
                  :meth:`_get_mutations`.
    """
    column = _to_bytes(column)
    if isinstance(value, six.integer_types):
        value = _PACK_I64(value)
    value = _to_bytes(value)
    if timestamp is None:
        # Use -1 for current Bigtable server time.
        timestamp_micros = -1
    else:
        timestamp_micros = _microseconds_from_datetime(timestamp)
        # Truncate to millisecond granularity.
        timestamp_micros -= timestamp_micros % 1000

    mutation_val = data_v2_pb2.Mutation.SetCell(
        family_name=column_family_id,
        column_qualifier=column,
        timestamp_micros=timestamp_micros,
        value=value,
    )
    mutation_pb = data_v2_pb2.Mutation(set_cell=mutation_val)
    self._get_mutations(state).append(mutation_pb)
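``_PACK_I64`` is not shown above; it is assumed to be the usual big-endian 64-bit packer, which makes the integer-to-bytes handling in ``_set_cell`` concrete:

import struct

# Assumed definition: pack a Python int as an 8-byte big-endian signed value.
_PACK_I64 = struct.Struct(">q").pack

assert _PACK_I64(1) == b"\x00\x00\x00\x00\x00\x00\x00\x01"
assert _PACK_I64(-1) == b"\xff\xff\xff\xff\xff\xff\xff\xff"  # two's complement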
def _delete(self, state=None):
    """Helper for :meth:`delete`.

    Adds a delete mutation (for the entire row) to the accumulated
    mutations.

    ``state`` is unused by :class:`DirectRow` but is used by
    subclasses.

    :type state: bool
    :param state: (Optional) The state that is passed along to
                  :meth:`_get_mutations`.
    """
    mutation_val = data_v2_pb2.Mutation.DeleteFromRow()
    mutation_pb = data_v2_pb2.Mutation(delete_from_row=mutation_val)
    self._get_mutations(state).append(mutation_pb)
def _delete_cells(self, column_family_id, columns, time_range=None, state=None):
    """Helper for :meth:`delete_cell` and :meth:`delete_cells`.

    ``state`` is unused by :class:`DirectRow` but is used by
    subclasses.

    :type column_family_id: str
    :param column_family_id: The column family that contains the column
                             or columns with cells being deleted. Must be
                             of the form ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``.

    :type columns: :class:`list` of :class:`str` /
                   :func:`unicode <unicode>`, or :class:`object`
    :param columns: The columns within the column family that will have
                    cells deleted. If :attr:`ALL_COLUMNS` is used then
                    the entire column family will be deleted from the row.

    :type time_range: :class:`TimestampRange`
    :param time_range: (Optional) The range of time within which cells
                       should be deleted.

    :type state: bool
    :param state: (Optional) The state that is passed along to
                  :meth:`_get_mutations`.
    """
    mutations_list = self._get_mutations(state)
    if columns is self.ALL_COLUMNS:
        mutation_val = data_v2_pb2.Mutation.DeleteFromFamily(
            family_name=column_family_id
        )
        mutation_pb = data_v2_pb2.Mutation(delete_from_family=mutation_val)
        mutations_list.append(mutation_pb)
    else:
        delete_kwargs = {}
        if time_range is not None:
            delete_kwargs["time_range"] = time_range.to_pb()

        to_append = []
        for column in columns:
            column = _to_bytes(column)
            # time_range will never change if present, but the rest of
            # delete_kwargs will
            delete_kwargs.update(
                family_name=column_family_id, column_qualifier=column
            )
            mutation_val = data_v2_pb2.Mutation.DeleteFromColumn(**delete_kwargs)
            mutation_pb = data_v2_pb2.Mutation(delete_from_column=mutation_val)
            to_append.append(mutation_pb)

        # We don't add the mutations until all columns have been
        # processed without error.
        mutations_list.extend(to_append)
def get_mutations_size(self):
    """Gets the total mutations size for the current row.

    For example:

    .. literalinclude:: snippets_table.py
        :start-after: [START bigtable_row_get_mutations_size]
        :end-before: [END bigtable_row_get_mutations_size]

    :rtype: int
    :returns: Total size of the row's accumulated mutations, in bytes.
    """
    mutation_size = 0
    for mutation in self._get_mutations():
        mutation_size += mutation.ByteSize()

    return mutation_size
def set_cell(self, column_family_id, column, value, timestamp=None):
    """Sets a value in this row.

    The cell is determined by the ``row_key`` of this
    :class:`DirectRow` and the ``column``. The ``column`` must be in
    an existing :class:`.ColumnFamily` (as determined by
    ``column_family_id``).

    .. note::

        This method adds a mutation to the accumulated mutations on this
        row, but does not make an API request. To actually send an API
        request (with the mutations) to the Google Cloud Bigtable API,
        call :meth:`commit`.

    For example:

    .. literalinclude:: snippets_table.py
        :start-after: [START bigtable_row_set_cell]
        :end-before: [END bigtable_row_set_cell]

    :type column_family_id: str
    :param column_family_id: The column family that contains the column.
                             Must be of the form
                             ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``.

    :type column: bytes
    :param column: The column within the column family where the cell
                   is located.

    :type value: bytes or :class:`int`
    :param value: The value to set in the cell. If an integer is used,
                  will be interpreted as a 64-bit big-endian signed
                  integer (8 bytes).

    :type timestamp: :class:`datetime.datetime`
    :param timestamp: (Optional) The timestamp of the operation.
    """
    self._set_cell(column_family_id, column, value, timestamp=timestamp, state=None)
def delete_cell(self, column_family_id, column, time_range=None):
    """Deletes a cell in this row.

    .. note::

        This method adds a mutation to the accumulated mutations on this
        row, but does not make an API request. To actually send an API
        request (with the mutations) to the Google Cloud Bigtable API,
        call :meth:`commit`.

    For example:

    .. literalinclude:: snippets_table.py
        :start-after: [START bigtable_row_delete_cell]
        :end-before: [END bigtable_row_delete_cell]

    :type column_family_id: str
    :param column_family_id: The column family that contains the column
                             or columns with cells being deleted. Must be
                             of the form ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``.

    :type column: bytes
    :param column: The column within the column family that will have a
                   cell deleted.

    :type time_range: :class:`TimestampRange`
    :param time_range: (Optional) The range of time within which cells
                       should be deleted.
    """
    self._delete_cells(
        column_family_id, [column], time_range=time_range, state=None
    )
def delete_cells(self, column_family_id, columns, time_range=None):
    """Deletes cells in this row.

    .. note::

        This method adds a mutation to the accumulated mutations on this
        row, but does not make an API request. To actually send an API
        request (with the mutations) to the Google Cloud Bigtable API,
        call :meth:`commit`.

    For example:

    .. literalinclude:: snippets_table.py
        :start-after: [START bigtable_row_delete_cells]
        :end-before: [END bigtable_row_delete_cells]

    :type column_family_id: str
    :param column_family_id: The column family that contains the column
                             or columns with cells being deleted. Must be
                             of the form ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``.

    :type columns: :class:`list` of :class:`str` /
                   :func:`unicode <unicode>`, or :class:`object`
    :param columns: The columns within the column family that will have
                    cells deleted. If :attr:`ALL_COLUMNS` is used then
                    the entire column family will be deleted from the row.

    :type time_range: :class:`TimestampRange`
    :param time_range: (Optional) The range of time within which cells
                       should be deleted.
    """
    self._delete_cells(column_family_id, columns, time_range=time_range, state=None)
def commit(self):
    """Makes a ``CheckAndMutateRow`` API request.

    If no mutations have been created in the row, no request is made.

    The mutations will be applied conditionally, based on whether the
    filter matches any cells in the :class:`ConditionalRow` or not. (Each
    method which adds a mutation has a ``state`` parameter for this
    purpose.)

    Mutations are applied atomically and in order, meaning that earlier
    mutations can be masked / negated by later ones. Cells already present
    in the row are left unchanged unless explicitly changed by a mutation.

    After committing the accumulated mutations, resets the local
    mutations.

    For example:

    .. literalinclude:: snippets_table.py
        :start-after: [START bigtable_row_commit]
        :end-before: [END bigtable_row_commit]

    :rtype: bool
    :returns: Flag indicating if the filter was matched (which also
              indicates which set of mutations were applied by the server).
    :raises: :class:`ValueError <exceptions.ValueError>` if the number of
             mutations exceeds the :data:`MAX_MUTATIONS`.
    """
    true_mutations = self._get_mutations(state=True)
    false_mutations = self._get_mutations(state=False)
    num_true_mutations = len(true_mutations)
    num_false_mutations = len(false_mutations)
    if num_true_mutations == 0 and num_false_mutations == 0:
        return
    if num_true_mutations > MAX_MUTATIONS or num_false_mutations > MAX_MUTATIONS:
        raise ValueError(
            "Exceeded the maximum allowable mutations (%d). Had %d true "
            "mutations and %d false mutations."
            % (MAX_MUTATIONS, num_true_mutations, num_false_mutations)
        )

    data_client = self._table._instance._client.table_data_client
    resp = data_client.check_and_mutate_row(
        table_name=self._table.name,
        row_key=self._row_key,
        predicate_filter=self._filter.to_pb(),
        true_mutations=true_mutations,
        false_mutations=false_mutations,
    )
    self.clear()
    return resp.predicate_matched
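A hedged sketch of the conditional flow. The ``table.row(..., filter_=...)`` factory and the ``RowSampleFilter`` class are assumed from the Bigtable client surface, not shown in this section:

from google.cloud.bigtable import row_filters

# Stage one mutation for when the predicate filter matches and another for
# when it does not; both sets ride in a single CheckAndMutateRow request.
predicate = row_filters.RowSampleFilter(0.5)
cond_row = table.row(b"row-key", filter_=predicate)  # -> ConditionalRow

cond_row.set_cell("fam", b"col", b"matched", state=True)
cond_row.set_cell("fam", b"col", b"missed", state=False)

matched = cond_row.commit()  # True if the filter matched the row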
def append_cell_value(self, column_family_id, column, value):
    """Appends a value to an existing cell.

    .. note::

        This method adds a read-modify rule protobuf to the accumulated
        read-modify rules on this row, but does not make an API
        request. To actually send an API request (with the rules) to the
        Google Cloud Bigtable API, call :meth:`commit`.

    For example:

    .. literalinclude:: snippets_table.py
        :start-after: [START bigtable_row_append_cell_value]
        :end-before: [END bigtable_row_append_cell_value]

    :type column_family_id: str
    :param column_family_id: The column family that contains the column.
                             Must be of the form
                             ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``.

    :type column: bytes
    :param column: The column within the column family where the cell
                   is located.

    :type value: bytes
    :param value: The value to append to the existing value in the cell. If
                  the targeted cell is unset, it will be treated as
                  containing the empty string.
    """
    column = _to_bytes(column)
    value = _to_bytes(value)
    rule_pb = data_v2_pb2.ReadModifyWriteRule(
        family_name=column_family_id, column_qualifier=column, append_value=value
    )
    self._rule_pb_list.append(rule_pb)
def increment_cell_value(self, column_family_id, column, int_value):
    """Increments a value in an existing cell.

    Assumes the value in the cell is stored as a 64-bit integer
    serialized to bytes.

    .. note::

        This method adds a read-modify rule protobuf to the accumulated
        read-modify rules on this row, but does not make an API
        request. To actually send an API request (with the rules) to the
        Google Cloud Bigtable API, call :meth:`commit`.

    For example:

    .. literalinclude:: snippets_table.py
        :start-after: [START bigtable_row_increment_cell_value]
        :end-before: [END bigtable_row_increment_cell_value]

    :type column_family_id: str
    :param column_family_id: The column family that contains the column.
                             Must be of the form
                             ``[_a-zA-Z0-9][-_.a-zA-Z0-9]*``.

    :type column: bytes
    :param column: The column within the column family where the cell
                   is located.

    :type int_value: int
    :param int_value: The value to increment the existing value in the cell
                      by. If the targeted cell is unset, it will be treated
                      as containing a zero. Otherwise, the targeted cell
                      must contain an 8-byte value (interpreted as a 64-bit
                      big-endian signed integer), or the entire request
                      will fail.
    """
    column = _to_bytes(column)
    rule_pb = data_v2_pb2.ReadModifyWriteRule(
        family_name=column_family_id,
        column_qualifier=column,
        increment_amount=int_value,
    )
    self._rule_pb_list.append(rule_pb)
def commit(self):
    """Makes a ``ReadModifyWriteRow`` API request.

    This commits modifications made by :meth:`append_cell_value` and
    :meth:`increment_cell_value`. If no modifications were made, makes
    no API request and just returns ``{}``.

    Modifies a row atomically, reading the latest existing
    timestamp / value from the specified columns and writing a new value by
    appending / incrementing. The new cell created uses either the current
    server time or the highest timestamp of a cell in that column (if it
    exceeds the server time).

    After committing the accumulated mutations, resets the local mutations.

    For example:

    .. literalinclude:: snippets_table.py
        :start-after: [START bigtable_row_commit]
        :end-before: [END bigtable_row_commit]

    :rtype: dict
    :returns: The new contents of all modified cells. Returned as a
              dictionary of column families, each of which holds a
              dictionary of columns. Each column contains a list of cells
              modified. Each cell is represented with a two-tuple with the
              value (in bytes) and the timestamp for the cell.
    :raises: :class:`ValueError <exceptions.ValueError>` if the number of
             mutations exceeds the :data:`MAX_MUTATIONS`.
    """
    num_mutations = len(self._rule_pb_list)
    if num_mutations == 0:
        return {}
    if num_mutations > MAX_MUTATIONS:
        raise ValueError(
            "%d total append mutations exceed the maximum "
            "allowable %d." % (num_mutations, MAX_MUTATIONS)
        )

    data_client = self._table._instance._client.table_data_client
    row_response = data_client.read_modify_write_row(
        table_name=self._table.name, row_key=self._row_key, rules=self._rule_pb_list
    )

    # Reset modifications after commit-ing request.
    self.clear()

    # NOTE: We expect row_response.key == self._row_key but don't check.
    return _parse_rmw_row_response(row_response)
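And the read-modify-write counterpart, again as a sketch; the ``table.row(..., append=True)`` factory is assumed from the same client surface:

# Accumulate rules locally, then send one ReadModifyWriteRow request.
append_row = table.row(b"counter-row", append=True)  # -> AppendRow
append_row.increment_cell_value("stats", b"visits", 1)
append_row.append_cell_value("stats", b"log", b"|seen")

new_cells = append_row.commit()
# new_cells["stats"][b"visits"] -> [(b"\x00...\x01", datetime(...))], per the
# dict shape documented for _parse_rmw_row_response.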
def _retry_from_retry_config(retry_params, retry_codes):
    """Creates a Retry object given a gapic retry configuration.

    Args:
        retry_params (dict): The retry parameter values, for example::

            {
                "initial_retry_delay_millis": 1000,
                "retry_delay_multiplier": 2.5,
                "max_retry_delay_millis": 120000,
                "initial_rpc_timeout_millis": 120000,
                "rpc_timeout_multiplier": 1.0,
                "max_rpc_timeout_millis": 120000,
                "total_timeout_millis": 600000
            }

        retry_codes (sequence[str]): The list of retryable gRPC error code
            names.

    Returns:
        google.api_core.retry.Retry: The default retry object for the method.
    """
    exception_classes = [
        _exception_class_for_grpc_status_name(code) for code in retry_codes
    ]
    return retry.Retry(
        retry.if_exception_type(*exception_classes),
        initial=(retry_params["initial_retry_delay_millis"] / _MILLIS_PER_SECOND),
        maximum=(retry_params["max_retry_delay_millis"] / _MILLIS_PER_SECOND),
        multiplier=retry_params["retry_delay_multiplier"],
        deadline=retry_params["total_timeout_millis"] / _MILLIS_PER_SECOND,
    )
def _timeout_from_retry_config(retry_params):
    """Creates an ExponentialTimeout object given a gapic retry configuration.

    Args:
        retry_params (dict): The retry parameter values, for example::

            {
                "initial_retry_delay_millis": 1000,
                "retry_delay_multiplier": 2.5,
                "max_retry_delay_millis": 120000,
                "initial_rpc_timeout_millis": 120000,
                "rpc_timeout_multiplier": 1.0,
                "max_rpc_timeout_millis": 120000,
                "total_timeout_millis": 600000
            }

    Returns:
        google.api_core.timeout.ExponentialTimeout: The default timeout
            object for the method.
    """
    return timeout.ExponentialTimeout(
        initial=(retry_params["initial_rpc_timeout_millis"] / _MILLIS_PER_SECOND),
        maximum=(retry_params["max_rpc_timeout_millis"] / _MILLIS_PER_SECOND),
        multiplier=retry_params["rpc_timeout_multiplier"],
        deadline=(retry_params["total_timeout_millis"] / _MILLIS_PER_SECOND),
    )
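Feeding the sample parameters from the docstrings through these two helpers makes the unit conversion visible: milliseconds in the config become seconds on the resulting objects (1 s initial retry delay, 2.5x multiplier, 120 s cap, 600 s overall deadline). The retry code names below are illustrative:

retry_params = {
    "initial_retry_delay_millis": 1000,
    "retry_delay_multiplier": 2.5,
    "max_retry_delay_millis": 120000,
    "initial_rpc_timeout_millis": 120000,
    "rpc_timeout_multiplier": 1.0,
    "max_rpc_timeout_millis": 120000,
    "total_timeout_millis": 600000,
}
default_retry = _retry_from_retry_config(
    retry_params, ["DEADLINE_EXCEEDED", "UNAVAILABLE"]
)
default_timeout = _timeout_from_retry_config(retry_params)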
def parse_method_configs(interface_config):
    """Creates default retry and timeout objects for each method in a gapic
    interface config.

    Args:
        interface_config (Mapping): The interface config section of the full
            gapic library config. For example, if the full configuration has
            an interface named ``google.example.v1.ExampleService`` you would
            pass in just that interface's configuration, for example
            ``gapic_config['interfaces']['google.example.v1.ExampleService']``.

    Returns:
        Mapping[str, MethodConfig]: A mapping of RPC method names to their
            configuration.
    """
    # Grab all the retry codes
    retry_codes_map = {
        name: retry_codes
        for name, retry_codes in six.iteritems(interface_config.get("retry_codes", {}))
    }

    # Grab all of the retry params
    retry_params_map = {
        name: retry_params
        for name, retry_params in six.iteritems(
            interface_config.get("retry_params", {})
        )
    }

    # Iterate through all the API methods and create a flat MethodConfig
    # instance for each one.
    method_configs = {}

    for method_name, method_params in six.iteritems(
        interface_config.get("methods", {})
    ):
        retry_params_name = method_params.get("retry_params_name")

        if retry_params_name is not None:
            retry_params = retry_params_map[retry_params_name]
            retry_ = _retry_from_retry_config(
                retry_params, retry_codes_map[method_params["retry_codes_name"]]
            )
            timeout_ = _timeout_from_retry_config(retry_params)

        # No retry config, so this is a non-retryable method.
        else:
            retry_ = None
            timeout_ = timeout.ConstantTimeout(
                method_params["timeout_millis"] / _MILLIS_PER_SECOND
            )

        method_configs[method_name] = MethodConfig(retry=retry_, timeout=timeout_)

    return method_configs
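Putting the three helpers together, a hypothetical interface config in the shape ``parse_method_configs`` expects; the keys mirror the GAPIC client-config JSON format, and the method names here are made up:

interface_config = {
    "retry_codes": {"idempotent": ["DEADLINE_EXCEEDED", "UNAVAILABLE"]},
    "retry_params": {"default": retry_params},  # dict from the earlier sketch
    "methods": {
        "GetThing": {  # hypothetical retryable method
            "retry_codes_name": "idempotent",
            "retry_params_name": "default",
            "timeout_millis": 60000,
        },
        "MutateThing": {"timeout_millis": 30000},  # non-retryable
    },
}
configs = parse_method_configs(interface_config)
configs["GetThing"].retry     # a Retry built by _retry_from_retry_config
configs["MutateThing"].retry  # None; takes the ConstantTimeout branch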
def done(self):
    """Return True if the future is done, False otherwise.

    This still returns True in failure cases; checking :meth:`result` or
    :meth:`exception` is the canonical way to assess success or failure.
    """
    return self._exception != self._SENTINEL or self._result != self._SENTINEL
def exception(self, timeout=None):
    """Return the exception raised by the call, if any.

    This blocks until the future is done (the message was published or
    publishing failed) and returns the exception; if the call succeeded,
    it returns None.

    Args:
        timeout (Union[int, float]): The number of seconds before this call
            times out and raises TimeoutError.

    Raises:
        TimeoutError: If the request times out.

    Returns:
        Exception: The exception raised by the call, if any.
    """
    # Wait until the future is done.
    if not self._completed.wait(timeout=timeout):
        raise exceptions.TimeoutError("Timed out waiting for result.")

    # If the batch completed successfully, this should return None.
    if self._result != self._SENTINEL:
        return None

    # Okay, this batch had an error; this should return it.
    return self._exception
def add_done_callback(self, fn):
    """Attach the provided callable to the future.

    The provided function is called, with this future as its only argument,
    when the future finishes running.
    """
    if self.done():
        return fn(self)

    self._callbacks.append(fn)
def set_result(self, result):
    """Set the result of the future to the provided result.

    Args:
        result (Any): The result
    """
    # Sanity check: A future can only complete once.
    if self.done():
        raise RuntimeError("set_result can only be called once.")

    # Set the result and trigger the future.
    self._result = result
    self._trigger()
def set_exception(self, exception):
    """Set the result of the future to the given exception.

    Args:
        exception (:exc:`Exception`): The exception raised.
    """
    # Sanity check: A future can only complete once.
    if self.done():
        raise RuntimeError("set_exception can only be called once.")

    # Set the exception and trigger the future.
    self._exception = exception
    self._trigger()
def _trigger(self):
    """Trigger all callbacks registered to this Future.

    This method is called internally by the batch once the batch
    completes. It takes no arguments.
    """
    self._completed.set()
    for callback in self._callbacks:
        callback(self)
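A small lifecycle sketch tying these pieces together. ``future`` stands in for an instance of this class, and ``exception()`` is the accessor shown earlier:

def on_done(fut):
    err = fut.exception()  # non-blocking here: the future is already done
    if err is None:
        print("publish succeeded")
    else:
        print("publish failed:", err)

future.add_done_callback(on_done)    # queued; runs via _trigger()
future.set_result("message-id-123")  # completes the future, fires on_done once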
def send(
    self, record, message, resource=None, labels=None, trace=None, span_id=None
):
    """Overrides transport.send().

    :type record: :class:`logging.LogRecord`
    :param record: Python log record that the handler was called with.

    :type message: str
    :param message: The message from the ``LogRecord`` after being
                    formatted by the associated log formatters.

    :type resource: :class:`~google.cloud.logging.resource.Resource`
    :param resource: (Optional) Monitored resource of the entry.

    :type labels: dict
    :param labels: (Optional) Mapping of labels for the entry.

    :type trace: str
    :param trace: (Optional) Trace ID of the request associated with the
                  entry.

    :type span_id: str
    :param span_id: (Optional) Span ID within the trace for the entry.
    """
    info = {"message": message, "python_logger": record.name}
    self.logger.log_struct(
        info,
        severity=record.levelname,
        resource=resource,
        labels=labels,
        trace=trace,
        span_id=span_id,
    )
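A hedged illustration of how a logging handler might call ``send``; ``transport`` is an instance of this class, and the record fields are arbitrary:

import logging

record = logging.LogRecord(
    "my_logger", logging.INFO, __file__, 0, "hello %s", ("world",), None
)
transport.send(record, "hello world", labels={"env": "dev"})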
def create_read_session(
    self,
    table_reference,
    parent,
    table_modifiers=None,
    requested_streams=None,
    read_options=None,
    format_=None,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    """
    Creates a new read session. A read session divides the contents of a
    BigQuery table into one or more streams, which can then be used to read
    data from the table. The read session also specifies properties of the
    data to be read, such as a list of columns or a push-down filter describing
    the rows to be returned.

    A particular row can be read by at most one stream. When the caller has
    reached the end of each stream in the session, then all the data in the
    table has been read.

    Read sessions automatically expire 24 hours after they are created and do
    not require manual clean-up by the caller.

    Example:
        >>> from google.cloud import bigquery_storage_v1beta1
        >>>
        >>> client = bigquery_storage_v1beta1.BigQueryStorageClient()
        >>>
        >>> # TODO: Initialize `table_reference`:
        >>> table_reference = {}
        >>>
        >>> # TODO: Initialize `parent`:
        >>> parent = ''
        >>>
        >>> response = client.create_read_session(table_reference, parent)

    Args:
        table_reference (Union[dict, ~google.cloud.bigquery_storage_v1beta1.types.TableReference]): Required.
            Reference to the table to read.

            If a dict is provided, it must be of the same form as the protobuf
            message :class:`~google.cloud.bigquery_storage_v1beta1.types.TableReference`
        parent (str): Required. String of the form ``projects/{project_id}``
            indicating the project this ReadSession is associated with. This is the
            project that will be billed for usage.
        table_modifiers (Union[dict, ~google.cloud.bigquery_storage_v1beta1.types.TableModifiers]): Optional.
            Any modifiers to the Table (e.g. snapshot timestamp).

            If a dict is provided, it must be of the same form as the protobuf
            message :class:`~google.cloud.bigquery_storage_v1beta1.types.TableModifiers`
        requested_streams (int): Optional. Initial number of streams. If unset
            or 0, we will provide a value of streams so as to produce reasonable
            throughput. Must be non-negative. The number of streams may be lower
            than the requested number, depending on the amount of parallelism
            that is reasonable for the table and the maximum amount of
            parallelism allowed by the system.

            Streams must be read starting from offset 0.
        read_options (Union[dict, ~google.cloud.bigquery_storage_v1beta1.types.TableReadOptions]): Optional.
            Read options for this session (e.g. column selection, filters).

            If a dict is provided, it must be of the same form as the protobuf
            message :class:`~google.cloud.bigquery_storage_v1beta1.types.TableReadOptions`
        format_ (~google.cloud.bigquery_storage_v1beta1.types.DataFormat): Data
            output format. Currently defaults to Avro.
        retry (Optional[google.api_core.retry.Retry]): A retry object used
            to retry requests. If ``None`` is specified, requests will not
            be retried.
        timeout (Optional[float]): The amount of time, in seconds, to wait
            for the request to complete. Note that if ``retry`` is
            specified, the timeout applies to each individual attempt.
        metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
            that is provided to the method.

    Returns:
        A :class:`~google.cloud.bigquery_storage_v1beta1.types.ReadSession` instance.

    Raises:
        google.api_core.exceptions.GoogleAPICallError: If the request
            failed for any reason.
        google.api_core.exceptions.RetryError: If the request failed due
            to a retryable error and retry attempts failed.
        ValueError: If the parameters are invalid.
    """
    # Wrap the transport method to add retry and timeout logic.
    if "create_read_session" not in self._inner_api_calls:
        self._inner_api_calls[
            "create_read_session"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.create_read_session,
            default_retry=self._method_configs["CreateReadSession"].retry,
            default_timeout=self._method_configs["CreateReadSession"].timeout,
            client_info=self._client_info,
        )

    request = storage_pb2.CreateReadSessionRequest(
        table_reference=table_reference,
        parent=parent,
        table_modifiers=table_modifiers,
        requested_streams=requested_streams,
        read_options=read_options,
        format=format_,
    )
    if metadata is None:
        metadata = []
    metadata = list(metadata)
    try:
        routing_header = [
            ("table_reference.project_id", table_reference.project_id),
            ("table_reference.dataset_id", table_reference.dataset_id),
        ]
    except AttributeError:
        pass
    else:
        routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(  # pragma: no cover
            routing_header
        )
        metadata.append(routing_metadata)  # pragma: no cover

    return self._inner_api_calls["create_read_session"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
def read_rows(
    self,
    read_position,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    """
    Reads rows from the table in the format prescribed by the read
    session. Each response contains one or more table rows, up to a
    maximum of 10 MiB per response; read requests which attempt to read
    individual rows larger than this will fail.

    Each request also returns a set of stream statistics reflecting the
    estimated total number of rows in the read stream. This number is
    computed based on the total table size and the number of active
    streams in the read session, and may change as other streams continue
    to read data.

    Example:
        >>> from google.cloud import bigquery_storage_v1beta1
        >>>
        >>> client = bigquery_storage_v1beta1.BigQueryStorageClient()
        >>>
        >>> # TODO: Initialize `read_position`:
        >>> read_position = {}
        >>>
        >>> for element in client.read_rows(read_position):
        ...     # process element
        ...     pass

    Args:
        read_position (Union[dict, ~google.cloud.bigquery_storage_v1beta1.types.StreamPosition]): Required.
            Identifier of the position in the stream to start reading from.
            The offset requested must be less than the last row read from
            ReadRows. Requesting a larger offset is undefined.

            If a dict is provided, it must be of the same form as the protobuf
            message :class:`~google.cloud.bigquery_storage_v1beta1.types.StreamPosition`
        retry (Optional[google.api_core.retry.Retry]): A retry object used
            to retry requests. If ``None`` is specified, requests will not
            be retried.
        timeout (Optional[float]): The amount of time, in seconds, to wait
            for the request to complete. Note that if ``retry`` is
            specified, the timeout applies to each individual attempt.
        metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
            that is provided to the method.

    Returns:
        Iterable[~google.cloud.bigquery_storage_v1beta1.types.ReadRowsResponse].

    Raises:
        google.api_core.exceptions.GoogleAPICallError: If the request
            failed for any reason.
        google.api_core.exceptions.RetryError: If the request failed due
            to a retryable error and retry attempts failed.
        ValueError: If the parameters are invalid.
    """
    # Wrap the transport method to add retry and timeout logic.
    if "read_rows" not in self._inner_api_calls:
        self._inner_api_calls[
            "read_rows"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.read_rows,
            default_retry=self._method_configs["ReadRows"].retry,
            default_timeout=self._method_configs["ReadRows"].timeout,
            client_info=self._client_info,
        )

    request = storage_pb2.ReadRowsRequest(read_position=read_position)
    if metadata is None:
        metadata = []
    metadata = list(metadata)
    try:
        routing_header = [("read_position.stream.name", read_position.stream.name)]
    except AttributeError:
        pass
    else:
        routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(  # pragma: no cover
            routing_header
        )
        metadata.append(routing_metadata)  # pragma: no cover

    return self._inner_api_calls["read_rows"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
def batch_create_read_session_streams(
    self,
    session,
    requested_streams,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    """
    Creates additional streams for a ReadSession. This API can be used to
    dynamically adjust the parallelism of a batch processing task upwards by
    adding additional workers.

    Example:
        >>> from google.cloud import bigquery_storage_v1beta1
        >>>
        >>> client = bigquery_storage_v1beta1.BigQueryStorageClient()
        >>>
        >>> # TODO: Initialize `session`:
        >>> session = {}
        >>>
        >>> # TODO: Initialize `requested_streams`:
        >>> requested_streams = 0
        >>>
        >>> response = client.batch_create_read_session_streams(session, requested_streams)

    Args:
        session (Union[dict, ~google.cloud.bigquery_storage_v1beta1.types.ReadSession]): Required.
            Must be a non-expired session obtained from a call to
            CreateReadSession. Only the name field needs to be set.

            If a dict is provided, it must be of the same form as the protobuf
            message :class:`~google.cloud.bigquery_storage_v1beta1.types.ReadSession`
        requested_streams (int): Required. Number of new streams requested.
            Must be positive. Number of added streams may be less than this,
            see CreateReadSessionRequest for more information.
        retry (Optional[google.api_core.retry.Retry]): A retry object used
            to retry requests. If ``None`` is specified, requests will not
            be retried.
        timeout (Optional[float]): The amount of time, in seconds, to wait
            for the request to complete. Note that if ``retry`` is
            specified, the timeout applies to each individual attempt.
        metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
            that is provided to the method.

    Returns:
        A :class:`~google.cloud.bigquery_storage_v1beta1.types.BatchCreateReadSessionStreamsResponse` instance.

    Raises:
        google.api_core.exceptions.GoogleAPICallError: If the request
            failed for any reason.
        google.api_core.exceptions.RetryError: If the request failed due
            to a retryable error and retry attempts failed.
        ValueError: If the parameters are invalid.
    """
    # Wrap the transport method to add retry and timeout logic.
    if "batch_create_read_session_streams" not in self._inner_api_calls:
        self._inner_api_calls[
            "batch_create_read_session_streams"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.batch_create_read_session_streams,
            default_retry=self._method_configs[
                "BatchCreateReadSessionStreams"
            ].retry,
            default_timeout=self._method_configs[
                "BatchCreateReadSessionStreams"
            ].timeout,
            client_info=self._client_info,
        )

    request = storage_pb2.BatchCreateReadSessionStreamsRequest(
        session=session, requested_streams=requested_streams
    )
    if metadata is None:
        metadata = []
    metadata = list(metadata)
    try:
        routing_header = [("session.name", session.name)]
    except AttributeError:
        pass
    else:
        routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(  # pragma: no cover
            routing_header
        )
        metadata.append(routing_metadata)  # pragma: no cover

    return self._inner_api_calls["batch_create_read_session_streams"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
Union[str, bytes]: The qualifier encoded in binary. The type is ``str`` (Python 2.x) or ``bytes`` (Python 3.x). The module will handle base64 encoding for you. See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions.%28key%29.bigtableOptions.columnFamilies.columns.qualifierEncoded https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration.bigtableOptions.columnFamilies.columns.qualifierEncoded def qualifier_encoded(self): """Union[str, bytes]: The qualifier encoded in binary. The type is ``str`` (Python 2.x) or ``bytes`` (Python 3.x). The module will handle base64 encoding for you. See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions.%28key%29.bigtableOptions.columnFamilies.columns.qualifierEncoded https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration.bigtableOptions.columnFamilies.columns.qualifierEncoded """ prop = self._properties.get("qualifierEncoded") if prop is None: return None return base64.standard_b64decode(_to_bytes(prop))
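As an illustration of the decoding step performed by this property (values below are made up): the API stores the qualifier base64-encoded, and the getter returns the raw bytes.

import base64

encoded = base64.standard_b64encode(b"greeting")  # b'Z3JlZXRpbmc='
assert base64.standard_b64decode(encoded) == b"greeting"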
List[:class:`~.external_config.BigtableColumn`]: List of columns that should be exposed as individual fields.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions.(key).bigtableOptions.columnFamilies.columns
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration.bigtableOptions.columnFamilies.columns

def columns(self):
    """List[:class:`~.external_config.BigtableColumn`]: List of columns
    that should be exposed as individual fields.

    See
    https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions.(key).bigtableOptions.columnFamilies.columns
    https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration.bigtableOptions.columnFamilies.columns
    """
    prop = self._properties.get("columns", [])
    return [BigtableColumn.from_api_repr(col) for col in prop]
List[:class:`~.external_config.BigtableColumnFamily`]: List of column families to expose in the table schema along with their types. See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions.(key).bigtableOptions.columnFamilies https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration.bigtableOptions.columnFamilies def column_families(self): """List[:class:`~.external_config.BigtableColumnFamily`]: List of column families to expose in the table schema along with their types. See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions.(key).bigtableOptions.columnFamilies https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration.bigtableOptions.columnFamilies """ prop = self._properties.get("columnFamilies", []) return [BigtableColumnFamily.from_api_repr(cf) for cf in prop]
List[:class:`~google.cloud.bigquery.schema.SchemaField`]: The schema for the data. See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions.(key).schema https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration.schema def schema(self): """List[:class:`~google.cloud.bigquery.schema.SchemaField`]: The schema for the data. See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions.(key).schema https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration.schema """ prop = self._properties.get("schema", {}) return [SchemaField.from_api_repr(field) for field in prop.get("fields", [])]
Build an API representation of this object. Returns: Dict[str, Any]: A dictionary in the format used by the BigQuery API. def to_api_repr(self): """Build an API representation of this object. Returns: Dict[str, Any]: A dictionary in the format used by the BigQuery API. """ config = copy.deepcopy(self._properties) if self.options is not None: r = self.options.to_api_repr() if r != {}: config[self.options._RESOURCE_NAME] = r return config
Factory: construct an :class:`~.external_config.ExternalConfig` instance given its API representation. Args: resource (Dict[str, Any]): Definition of an :class:`~.external_config.ExternalConfig` instance in the same representation as is returned from the API. Returns: :class:`~.external_config.ExternalConfig`: Configuration parsed from ``resource``. def from_api_repr(cls, resource): """Factory: construct an :class:`~.external_config.ExternalConfig` instance given its API representation. Args: resource (Dict[str, Any]): Definition of an :class:`~.external_config.ExternalConfig` instance in the same representation as is returned from the API. Returns: :class:`~.external_config.ExternalConfig`: Configuration parsed from ``resource``. """ config = cls(resource["sourceFormat"]) for optcls in _OPTION_CLASSES: opts = resource.get(optcls._RESOURCE_NAME) if opts is not None: config._options = optcls.from_api_repr(opts) break config._properties = copy.deepcopy(resource) return config
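A short sketch of constructing a configuration from a hypothetical CSV resource; the field values and URI are illustrative only.

from google.cloud import bigquery

resource = {
    "sourceFormat": "CSV",
    "sourceUris": ["gs://my-bucket/data.csv"],  # illustrative URI
    "csvOptions": {"fieldDelimiter": ","},
}
config = bigquery.ExternalConfig.from_api_repr(resource)
assert config.source_format == "CSV"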
Run linters. Returns a failure if the linters find linting errors or sufficiently serious code quality issues. def lint(session): """Run linters. Returns a failure if the linters find linting errors or sufficiently serious code quality issues. """ session.install('flake8', *LOCAL_DEPS) session.install('-e', '.') session.run( 'flake8', os.path.join('google', 'cloud', 'bigquery_storage_v1beta1')) session.run('flake8', 'tests')
Run the system test suite. def system(session): """Run the system test suite.""" # Sanity check: Only run system tests if the environment variable is set. if not os.environ.get('GOOGLE_APPLICATION_CREDENTIALS', ''): session.skip('Credentials must be set via environment variable.') # Install all test dependencies, then install this package into the # virtualenv's dist-packages. session.install('pytest') session.install('-e', os.path.join('..', 'test_utils')) for local_dep in LOCAL_DEPS: session.install('-e', local_dep) session.install('-e', '.[pandas,fastavro]') # Run py.test against the system tests. session.run('py.test', '--quiet', 'tests/system/')
Build the docs. def docs(session): """Build the docs.""" session.install('sphinx', 'sphinx_rtd_theme') session.install('-e', '.[pandas,fastavro]') shutil.rmtree(os.path.join('docs', '_build'), ignore_errors=True) session.run( 'sphinx-build', '-W', # warnings as errors '-T', # show full traceback on exception '-N', # no colors '-b', 'html', '-d', os.path.join('docs', '_build', 'doctrees', ''), os.path.join('docs', ''), os.path.join('docs', '_build', 'html', ''), )
Convert a string representation of a binary operator to an enum. These enums come from the protobuf message definition ``StructuredQuery.FieldFilter.Operator``. Args: op_string (str): A comparison operation in the form of a string. Acceptable values are ``<``, ``<=``, ``==``, ``>=`` and ``>``. Returns: int: The enum corresponding to ``op_string``. Raises: ValueError: If ``op_string`` is not a valid operator. def _enum_from_op_string(op_string): """Convert a string representation of a binary operator to an enum. These enums come from the protobuf message definition ``StructuredQuery.FieldFilter.Operator``. Args: op_string (str): A comparison operation in the form of a string. Acceptable values are ``<``, ``<=``, ``==``, ``>=`` and ``>``. Returns: int: The enum corresponding to ``op_string``. Raises: ValueError: If ``op_string`` is not a valid operator. """ try: return _COMPARISON_OPERATORS[op_string] except KeyError: choices = ", ".join(sorted(_COMPARISON_OPERATORS.keys())) msg = _BAD_OP_STRING.format(op_string, choices) raise ValueError(msg)
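Illustrative behavior of the lookup (the exact enum values come from the generated protobuf classes):

_enum_from_op_string("<")   # returns the LESS_THAN operator enum
_enum_from_op_string("!=")  # raises ValueError, listing the valid choices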
Convert a string representation of a direction to an enum. Args: direction (str): A direction to order by. Must be one of :attr:`~.firestore.Query.ASCENDING` or :attr:`~.firestore.Query.DESCENDING`. Returns: int: The enum corresponding to ``direction``. Raises: ValueError: If ``direction`` is not a valid direction. def _enum_from_direction(direction): """Convert a string representation of a direction to an enum. Args: direction (str): A direction to order by. Must be one of :attr:`~.firestore.Query.ASCENDING` or :attr:`~.firestore.Query.DESCENDING`. Returns: int: The enum corresponding to ``direction``. Raises: ValueError: If ``direction`` is not a valid direction. """ if isinstance(direction, int): return direction if direction == Query.ASCENDING: return enums.StructuredQuery.Direction.ASCENDING elif direction == Query.DESCENDING: return enums.StructuredQuery.Direction.DESCENDING else: msg = _BAD_DIR_STRING.format(direction, Query.ASCENDING, Query.DESCENDING) raise ValueError(msg)
Convert a specific protobuf filter to the generic filter type.

Args:
    field_or_unary (Union[google.cloud.proto.firestore.v1beta1.\
        query_pb2.StructuredQuery.FieldFilter, google.cloud.proto.\
        firestore.v1beta1.query_pb2.StructuredQuery.UnaryFilter]): A
        field or unary filter to convert to a generic filter.

Returns:
    google.cloud.firestore_v1beta1.types.\
    StructuredQuery.Filter: A "generic" filter.

Raises:
    ValueError: If ``field_or_unary`` is not a field or unary filter.

def _filter_pb(field_or_unary):
    """Convert a specific protobuf filter to the generic filter type.

    Args:
        field_or_unary (Union[google.cloud.proto.firestore.v1beta1.\
            query_pb2.StructuredQuery.FieldFilter, google.cloud.proto.\
            firestore.v1beta1.query_pb2.StructuredQuery.UnaryFilter]): A
            field or unary filter to convert to a generic filter.

    Returns:
        google.cloud.firestore_v1beta1.types.\
        StructuredQuery.Filter: A "generic" filter.

    Raises:
        ValueError: If ``field_or_unary`` is not a field or unary filter.
    """
    if isinstance(field_or_unary, query_pb2.StructuredQuery.FieldFilter):
        return query_pb2.StructuredQuery.Filter(field_filter=field_or_unary)
    elif isinstance(field_or_unary, query_pb2.StructuredQuery.UnaryFilter):
        return query_pb2.StructuredQuery.Filter(unary_filter=field_or_unary)
    else:
        raise ValueError(
            "Unexpected filter type", type(field_or_unary), field_or_unary
        )
Convert a cursor pair to a protobuf. If ``cursor_pair`` is :data:`None`, just returns :data:`None`. Args: cursor_pair (Optional[Tuple[list, bool]]): Two-tuple of * a list of field values. * a ``before`` flag Returns: Optional[google.cloud.firestore_v1beta1.types.Cursor]: A protobuf cursor corresponding to the values. def _cursor_pb(cursor_pair): """Convert a cursor pair to a protobuf. If ``cursor_pair`` is :data:`None`, just returns :data:`None`. Args: cursor_pair (Optional[Tuple[list, bool]]): Two-tuple of * a list of field values. * a ``before`` flag Returns: Optional[google.cloud.firestore_v1beta1.types.Cursor]: A protobuf cursor corresponding to the values. """ if cursor_pair is not None: data, before = cursor_pair value_pbs = [_helpers.encode_value(value) for value in data] return query_pb2.Cursor(values=value_pbs, before=before)
Parse a query response protobuf to a document snapshot.

Args:
    response_pb (google.cloud.proto.firestore.v1beta1.\
        firestore_pb2.RunQueryResponse): A response protobuf from a
        ``RunQuery`` stream.
    collection (~.firestore_v1beta1.collection.CollectionReference):
        A reference to the collection that initiated the query.
    expected_prefix (str): The expected prefix for fully-qualified
        document names returned in the query results. This can be computed
        directly from ``collection`` via :meth:`_parent_info`.

Returns:
    Optional[~.firestore.document.DocumentSnapshot]: A
    snapshot of the data returned in the query. If ``response_pb.document``
    is not set, the snapshot will be :data:`None`.

def _query_response_to_snapshot(response_pb, collection, expected_prefix):
    """Parse a query response protobuf to a document snapshot.

    Args:
        response_pb (google.cloud.proto.firestore.v1beta1.\
            firestore_pb2.RunQueryResponse): A response protobuf from a
            ``RunQuery`` stream.
        collection (~.firestore_v1beta1.collection.CollectionReference):
            A reference to the collection that initiated the query.
        expected_prefix (str): The expected prefix for fully-qualified
            document names returned in the query results. This can be
            computed directly from ``collection`` via :meth:`_parent_info`.

    Returns:
        Optional[~.firestore.document.DocumentSnapshot]: A
        snapshot of the data returned in the query. If
        ``response_pb.document`` is not set, the snapshot will be
        :data:`None`.
    """
    if not response_pb.HasField("document"):
        return None

    document_id = _helpers.get_doc_id(response_pb.document, expected_prefix)
    reference = collection.document(document_id)
    data = _helpers.decode_dict(response_pb.document.fields, collection._client)
    snapshot = document.DocumentSnapshot(
        reference,
        data,
        exists=True,
        read_time=response_pb.read_time,
        create_time=response_pb.document.create_time,
        update_time=response_pb.document.update_time,
    )
    return snapshot
Project documents matching query to a limited set of fields. See :meth:`~.firestore_v1beta1.client.Client.field_path` for more information on **field paths**. If the current query already has a projection set (i.e. has already called :meth:`~.firestore_v1beta1.query.Query.select`), this will overwrite it. Args: field_paths (Iterable[str, ...]): An iterable of field paths (``.``-delimited list of field names) to use as a projection of document fields in the query results. Returns: ~.firestore_v1beta1.query.Query: A "projected" query. Acts as a copy of the current query, modified with the newly added projection. Raises: ValueError: If any ``field_path`` is invalid. def select(self, field_paths): """Project documents matching query to a limited set of fields. See :meth:`~.firestore_v1beta1.client.Client.field_path` for more information on **field paths**. If the current query already has a projection set (i.e. has already called :meth:`~.firestore_v1beta1.query.Query.select`), this will overwrite it. Args: field_paths (Iterable[str, ...]): An iterable of field paths (``.``-delimited list of field names) to use as a projection of document fields in the query results. Returns: ~.firestore_v1beta1.query.Query: A "projected" query. Acts as a copy of the current query, modified with the newly added projection. Raises: ValueError: If any ``field_path`` is invalid. """ field_paths = list(field_paths) for field_path in field_paths: field_path_module.split_field_path(field_path) # raises new_projection = query_pb2.StructuredQuery.Projection( fields=[ query_pb2.StructuredQuery.FieldReference(field_path=field_path) for field_path in field_paths ] ) return self.__class__( self._parent, projection=new_projection, field_filters=self._field_filters, orders=self._orders, limit=self._limit, offset=self._offset, start_at=self._start_at, end_at=self._end_at, )
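A short usage sketch; ``collection_ref`` is assumed to be an existing ``CollectionReference``, and the field names are illustrative.

# Only `first_name` and the nested `stats.points` field will be returned.
query = collection_ref.select(["first_name", "stats.points"])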
Filter the query on a field. See :meth:`~.firestore_v1beta1.client.Client.field_path` for more information on **field paths**. Returns a new :class:`~.firestore_v1beta1.query.Query` that filters on a specific field path, according to an operation (e.g. ``==`` or "equals") and a particular value to be paired with that operation. Args: field_path (str): A field path (``.``-delimited list of field names) for the field to filter on. op_string (str): A comparison operation in the form of a string. Acceptable values are ``<``, ``<=``, ``==``, ``>=`` and ``>``. value (Any): The value to compare the field against in the filter. If ``value`` is :data:`None` or a NaN, then ``==`` is the only allowed operation. Returns: ~.firestore_v1beta1.query.Query: A filtered query. Acts as a copy of the current query, modified with the newly added filter. Raises: ValueError: If ``field_path`` is invalid. ValueError: If ``value`` is a NaN or :data:`None` and ``op_string`` is not ``==``. def where(self, field_path, op_string, value): """Filter the query on a field. See :meth:`~.firestore_v1beta1.client.Client.field_path` for more information on **field paths**. Returns a new :class:`~.firestore_v1beta1.query.Query` that filters on a specific field path, according to an operation (e.g. ``==`` or "equals") and a particular value to be paired with that operation. Args: field_path (str): A field path (``.``-delimited list of field names) for the field to filter on. op_string (str): A comparison operation in the form of a string. Acceptable values are ``<``, ``<=``, ``==``, ``>=`` and ``>``. value (Any): The value to compare the field against in the filter. If ``value`` is :data:`None` or a NaN, then ``==`` is the only allowed operation. Returns: ~.firestore_v1beta1.query.Query: A filtered query. Acts as a copy of the current query, modified with the newly added filter. Raises: ValueError: If ``field_path`` is invalid. ValueError: If ``value`` is a NaN or :data:`None` and ``op_string`` is not ``==``. """ field_path_module.split_field_path(field_path) # raises if value is None: if op_string != _EQ_OP: raise ValueError(_BAD_OP_NAN_NULL) filter_pb = query_pb2.StructuredQuery.UnaryFilter( field=query_pb2.StructuredQuery.FieldReference(field_path=field_path), op=enums.StructuredQuery.UnaryFilter.Operator.IS_NULL, ) elif _isnan(value): if op_string != _EQ_OP: raise ValueError(_BAD_OP_NAN_NULL) filter_pb = query_pb2.StructuredQuery.UnaryFilter( field=query_pb2.StructuredQuery.FieldReference(field_path=field_path), op=enums.StructuredQuery.UnaryFilter.Operator.IS_NAN, ) elif isinstance(value, (transforms.Sentinel, transforms._ValueList)): raise ValueError(_INVALID_WHERE_TRANSFORM) else: filter_pb = query_pb2.StructuredQuery.FieldFilter( field=query_pb2.StructuredQuery.FieldReference(field_path=field_path), op=_enum_from_op_string(op_string), value=_helpers.encode_value(value), ) new_filters = self._field_filters + (filter_pb,) return self.__class__( self._parent, projection=self._projection, field_filters=new_filters, orders=self._orders, limit=self._limit, offset=self._offset, start_at=self._start_at, end_at=self._end_at, )
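Because each call returns a new query, filters compose by chaining (again assuming an existing ``collection_ref``; field names illustrative):

query = collection_ref.where("state", "==", "CA")
query = query.where("population", ">", 1000000)  # both filters are ANDed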
Helper for :meth:`order_by`. def _make_order(field_path, direction): """Helper for :meth:`order_by`.""" return query_pb2.StructuredQuery.Order( field=query_pb2.StructuredQuery.FieldReference(field_path=field_path), direction=_enum_from_direction(direction), )
Modify the query to add an order clause on a specific field. See :meth:`~.firestore_v1beta1.client.Client.field_path` for more information on **field paths**. Successive :meth:`~.firestore_v1beta1.query.Query.order_by` calls will further refine the ordering of results returned by the query (i.e. the new "order by" fields will be added to existing ones). Args: field_path (str): A field path (``.``-delimited list of field names) on which to order the query results. direction (Optional[str]): The direction to order by. Must be one of :attr:`ASCENDING` or :attr:`DESCENDING`, defaults to :attr:`ASCENDING`. Returns: ~.firestore_v1beta1.query.Query: An ordered query. Acts as a copy of the current query, modified with the newly added "order by" constraint. Raises: ValueError: If ``field_path`` is invalid. ValueError: If ``direction`` is not one of :attr:`ASCENDING` or :attr:`DESCENDING`. def order_by(self, field_path, direction=ASCENDING): """Modify the query to add an order clause on a specific field. See :meth:`~.firestore_v1beta1.client.Client.field_path` for more information on **field paths**. Successive :meth:`~.firestore_v1beta1.query.Query.order_by` calls will further refine the ordering of results returned by the query (i.e. the new "order by" fields will be added to existing ones). Args: field_path (str): A field path (``.``-delimited list of field names) on which to order the query results. direction (Optional[str]): The direction to order by. Must be one of :attr:`ASCENDING` or :attr:`DESCENDING`, defaults to :attr:`ASCENDING`. Returns: ~.firestore_v1beta1.query.Query: An ordered query. Acts as a copy of the current query, modified with the newly added "order by" constraint. Raises: ValueError: If ``field_path`` is invalid. ValueError: If ``direction`` is not one of :attr:`ASCENDING` or :attr:`DESCENDING`. """ field_path_module.split_field_path(field_path) # raises order_pb = self._make_order(field_path, direction) new_orders = self._orders + (order_pb,) return self.__class__( self._parent, projection=self._projection, field_filters=self._field_filters, orders=new_orders, limit=self._limit, offset=self._offset, start_at=self._start_at, end_at=self._end_at, )
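For example, ties on the first order-by field are broken by the second (field names assumed):

from google.cloud.firestore_v1beta1 import Query

query = collection_ref.order_by("state").order_by(
    "population", direction=Query.DESCENDING
)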
Limit a query to return a fixed number of results. If the current query already has a limit set, this will overwrite it. Args: count (int): Maximum number of documents to return that match the query. Returns: ~.firestore_v1beta1.query.Query: A limited query. Acts as a copy of the current query, modified with the newly added "limit" filter. def limit(self, count): """Limit a query to return a fixed number of results. If the current query already has a limit set, this will overwrite it. Args: count (int): Maximum number of documents to return that match the query. Returns: ~.firestore_v1beta1.query.Query: A limited query. Acts as a copy of the current query, modified with the newly added "limit" filter. """ return self.__class__( self._parent, projection=self._projection, field_filters=self._field_filters, orders=self._orders, limit=count, offset=self._offset, start_at=self._start_at, end_at=self._end_at, )
Skip to an offset in a query. If the current query already has specified an offset, this will overwrite it. Args: num_to_skip (int): The number of results to skip at the beginning of query results. (Must be non-negative.) Returns: ~.firestore_v1beta1.query.Query: An offset query. Acts as a copy of the current query, modified with the newly added "offset" field. def offset(self, num_to_skip): """Skip to an offset in a query. If the current query already has specified an offset, this will overwrite it. Args: num_to_skip (int): The number of results to skip at the beginning of query results. (Must be non-negative.) Returns: ~.firestore_v1beta1.query.Query: An offset query. Acts as a copy of the current query, modified with the newly added "offset" field. """ return self.__class__( self._parent, projection=self._projection, field_filters=self._field_filters, orders=self._orders, limit=self._limit, offset=num_to_skip, start_at=self._start_at, end_at=self._end_at, )
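A paging sketch combining :meth:`offset` with :meth:`limit`; for large result sets, the cursor methods below are generally preferred over offsets.

# Skip the first 20 ordered matches, then return at most 10.
page = collection_ref.order_by("name").offset(20).limit(10)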
Set values to be used for a ``start_at`` or ``end_at`` cursor. The values will later be used in a query protobuf. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. before (bool): Flag indicating if the document in ``document_fields`` should (:data:`False`) or shouldn't (:data:`True`) be included in the result set. start (Optional[bool]): determines if the cursor is a ``start_at`` cursor (:data:`True`) or an ``end_at`` cursor (:data:`False`). Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "start at" cursor. def _cursor_helper(self, document_fields, before, start): """Set values to be used for a ``start_at`` or ``end_at`` cursor. The values will later be used in a query protobuf. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. before (bool): Flag indicating if the document in ``document_fields`` should (:data:`False`) or shouldn't (:data:`True`) be included in the result set. start (Optional[bool]): determines if the cursor is a ``start_at`` cursor (:data:`True`) or an ``end_at`` cursor (:data:`False`). Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "start at" cursor. """ if isinstance(document_fields, tuple): document_fields = list(document_fields) elif isinstance(document_fields, document.DocumentSnapshot): if document_fields.reference._path[:-1] != self._parent._path: raise ValueError( "Cannot use snapshot from another collection as a cursor." ) else: # NOTE: We copy so that the caller can't modify after calling. document_fields = copy.deepcopy(document_fields) cursor_pair = document_fields, before query_kwargs = { "projection": self._projection, "field_filters": self._field_filters, "orders": self._orders, "limit": self._limit, "offset": self._offset, } if start: query_kwargs["start_at"] = cursor_pair query_kwargs["end_at"] = self._end_at else: query_kwargs["start_at"] = self._start_at query_kwargs["end_at"] = cursor_pair return self.__class__(self._parent, **query_kwargs)
Start query results at a particular document value. The result set will **include** the document specified by ``document_fields``. If the current query already has specified a start cursor -- either via this method or :meth:`~.firestore_v1beta1.query.Query.start_after` -- this will overwrite it. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "start at" cursor. def start_at(self, document_fields): """Start query results at a particular document value. The result set will **include** the document specified by ``document_fields``. If the current query already has specified a start cursor -- either via this method or :meth:`~.firestore_v1beta1.query.Query.start_after` -- this will overwrite it. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "start at" cursor. """ return self._cursor_helper(document_fields, before=True, start=True)
Start query results after a particular document value. The result set will **exclude** the document specified by ``document_fields``. If the current query already has specified a start cursor -- either via this method or :meth:`~.firestore_v1beta1.query.Query.start_at` -- this will overwrite it. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "start after" cursor. def start_after(self, document_fields): """Start query results after a particular document value. The result set will **exclude** the document specified by ``document_fields``. If the current query already has specified a start cursor -- either via this method or :meth:`~.firestore_v1beta1.query.Query.start_at` -- this will overwrite it. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "start after" cursor. """ return self._cursor_helper(document_fields, before=False, start=True)
End query results before a particular document value. The result set will **exclude** the document specified by ``document_fields``. If the current query already has specified an end cursor -- either via this method or :meth:`~.firestore_v1beta1.query.Query.end_at` -- this will overwrite it. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "end before" cursor. def end_before(self, document_fields): """End query results before a particular document value. The result set will **exclude** the document specified by ``document_fields``. If the current query already has specified an end cursor -- either via this method or :meth:`~.firestore_v1beta1.query.Query.end_at` -- this will overwrite it. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "end before" cursor. """ return self._cursor_helper(document_fields, before=True, start=False)
End query results at a particular document value. The result set will **include** the document specified by ``document_fields``. If the current query already has specified an end cursor -- either via this method or :meth:`~.firestore_v1beta1.query.Query.end_before` -- this will overwrite it. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "end at" cursor. def end_at(self, document_fields): """End query results at a particular document value. The result set will **include** the document specified by ``document_fields``. If the current query already has specified an end cursor -- either via this method or :meth:`~.firestore_v1beta1.query.Query.end_before` -- this will overwrite it. When the query is sent to the server, the ``document_fields`` will be used in the order given by fields set by :meth:`~.firestore_v1beta1.query.Query.order_by`. Args: document_fields (Union[~.firestore_v1beta1.\ document.DocumentSnapshot, dict, list, tuple]): a document snapshot or a dictionary/list/tuple of fields representing a query results cursor. A cursor is a collection of values that represent a position in a query result set. Returns: ~.firestore_v1beta1.query.Query: A query with cursor. Acts as a copy of the current query, modified with the newly added "end at" cursor. """ return self._cursor_helper(document_fields, before=False, start=False)
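Putting the cursor methods in context, a range-query sketch: the cursor dict keys line up with the ``order_by`` fields, and the values are illustrative.

query = (
    collection_ref.order_by("population")
    .start_at({"population": 1000000})    # inclusive lower bound
    .end_before({"population": 5000000})  # exclusive upper bound
)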
Convert all the filters into a single generic Filter protobuf. This may be a lone field filter or unary filter, may be a composite filter or may be :data:`None`. Returns: google.cloud.firestore_v1beta1.types.\ StructuredQuery.Filter: A "generic" filter representing the current query's filters. def _filters_pb(self): """Convert all the filters into a single generic Filter protobuf. This may be a lone field filter or unary filter, may be a composite filter or may be :data:`None`. Returns: google.cloud.firestore_v1beta1.types.\ StructuredQuery.Filter: A "generic" filter representing the current query's filters. """ num_filters = len(self._field_filters) if num_filters == 0: return None elif num_filters == 1: return _filter_pb(self._field_filters[0]) else: composite_filter = query_pb2.StructuredQuery.CompositeFilter( op=enums.StructuredQuery.CompositeFilter.Operator.AND, filters=[_filter_pb(filter_) for filter_ in self._field_filters], ) return query_pb2.StructuredQuery.Filter(composite_filter=composite_filter)
Helper: convert field paths to message. def _normalize_projection(projection): """Helper: convert field paths to message.""" if projection is not None: fields = list(projection.fields) if not fields: field_ref = query_pb2.StructuredQuery.FieldReference( field_path="__name__" ) return query_pb2.StructuredQuery.Projection(fields=[field_ref]) return projection
Helper: adjust orders based on cursors, where clauses. def _normalize_orders(self): """Helper: adjust orders based on cursors, where clauses.""" orders = list(self._orders) _has_snapshot_cursor = False if self._start_at: if isinstance(self._start_at[0], document.DocumentSnapshot): _has_snapshot_cursor = True if self._end_at: if isinstance(self._end_at[0], document.DocumentSnapshot): _has_snapshot_cursor = True if _has_snapshot_cursor: should_order = [ _enum_from_op_string(key) for key in _COMPARISON_OPERATORS if key not in (_EQ_OP, "array_contains") ] order_keys = [order.field.field_path for order in orders] for filter_ in self._field_filters: field = filter_.field.field_path if filter_.op in should_order and field not in order_keys: orders.append(self._make_order(field, "ASCENDING")) if not orders: orders.append(self._make_order("__name__", "ASCENDING")) else: order_keys = [order.field.field_path for order in orders] if "__name__" not in order_keys: direction = orders[-1].direction # enum? orders.append(self._make_order("__name__", direction)) return orders
Helper: convert cursor to a list of values based on orders. def _normalize_cursor(self, cursor, orders): """Helper: convert cursor to a list of values based on orders.""" if cursor is None: return if not orders: raise ValueError(_NO_ORDERS_FOR_CURSOR) document_fields, before = cursor order_keys = [order.field.field_path for order in orders] if isinstance(document_fields, document.DocumentSnapshot): snapshot = document_fields document_fields = snapshot.to_dict() document_fields["__name__"] = snapshot.reference if isinstance(document_fields, dict): # Transform to list using orders values = [] data = document_fields for order_key in order_keys: try: values.append(field_path_module.get_nested_value(order_key, data)) except KeyError: msg = _MISSING_ORDER_BY.format(order_key, data) raise ValueError(msg) document_fields = values if len(document_fields) != len(orders): msg = _MISMATCH_CURSOR_W_ORDER_BY.format(document_fields, order_keys) raise ValueError(msg) _transform_bases = (transforms.Sentinel, transforms._ValueList) for index, key_field in enumerate(zip(order_keys, document_fields)): key, field = key_field if isinstance(field, _transform_bases): msg = _INVALID_CURSOR_TRANSFORM raise ValueError(msg) if key == "__name__" and isinstance(field, six.string_types): document_fields[index] = self._parent.document(field) return document_fields, before
Convert the current query into the equivalent protobuf. Returns: google.cloud.firestore_v1beta1.types.StructuredQuery: The query protobuf. def _to_protobuf(self): """Convert the current query into the equivalent protobuf. Returns: google.cloud.firestore_v1beta1.types.StructuredQuery: The query protobuf. """ projection = self._normalize_projection(self._projection) orders = self._normalize_orders() start_at = self._normalize_cursor(self._start_at, orders) end_at = self._normalize_cursor(self._end_at, orders) query_kwargs = { "select": projection, "from": [ query_pb2.StructuredQuery.CollectionSelector( collection_id=self._parent.id ) ], "where": self._filters_pb(), "order_by": orders, "start_at": _cursor_pb(start_at), "end_at": _cursor_pb(end_at), } if self._offset is not None: query_kwargs["offset"] = self._offset if self._limit is not None: query_kwargs["limit"] = wrappers_pb2.Int32Value(value=self._limit) return query_pb2.StructuredQuery(**query_kwargs)
Deprecated alias for :meth:`stream`. def get(self, transaction=None): """Deprecated alias for :meth:`stream`.""" warnings.warn( "'Query.get' is deprecated: please use 'Query.stream' instead.", DeprecationWarning, stacklevel=2, ) return self.stream(transaction=transaction)
Read the documents in the collection that match this query. This sends a ``RunQuery`` RPC and then returns an iterator which consumes each document returned in the stream of ``RunQueryResponse`` messages. .. note:: The underlying stream of responses will time out after the ``max_rpc_timeout_millis`` value set in the GAPIC client configuration for the ``RunQuery`` API. Snapshots not consumed from the iterator before that point will be lost. If a ``transaction`` is used and it already has write operations added, this method cannot be used (i.e. read-after-write is not allowed). Args: transaction (Optional[~.firestore_v1beta1.transaction.\ Transaction]): An existing transaction that this query will run in. Yields: ~.firestore_v1beta1.document.DocumentSnapshot: The next document that fulfills the query. def stream(self, transaction=None): """Read the documents in the collection that match this query. This sends a ``RunQuery`` RPC and then returns an iterator which consumes each document returned in the stream of ``RunQueryResponse`` messages. .. note:: The underlying stream of responses will time out after the ``max_rpc_timeout_millis`` value set in the GAPIC client configuration for the ``RunQuery`` API. Snapshots not consumed from the iterator before that point will be lost. If a ``transaction`` is used and it already has write operations added, this method cannot be used (i.e. read-after-write is not allowed). Args: transaction (Optional[~.firestore_v1beta1.transaction.\ Transaction]): An existing transaction that this query will run in. Yields: ~.firestore_v1beta1.document.DocumentSnapshot: The next document that fulfills the query. """ parent_path, expected_prefix = self._parent._parent_info() response_iterator = self._client._firestore_api.run_query( parent_path, self._to_protobuf(), transaction=_helpers.get_transaction_id(transaction), metadata=self._client._rpc_metadata, ) for response in response_iterator: snapshot = _query_response_to_snapshot( response, self._parent, expected_prefix ) if snapshot is not None: yield snapshot
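Typical consumption of a query stream, assuming ``collection_ref`` as above:

for snapshot in collection_ref.where("state", "==", "CA").stream():
    print(snapshot.id, snapshot.to_dict())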
Monitor the documents in this collection that match this query.

This starts a watch on this query using a background thread. The
provided callback is run on the snapshot of the documents.

Args:
    callback (Callable): a callback to run when a change occurs; it is
        passed the document snapshots, the list of changes, and the
        read time, as in the example below.

Example:
    from google.cloud import firestore_v1beta1

    db = firestore_v1beta1.Client()
    query_ref = db.collection(u'users').where("user", "==", u'Ada')

    def on_snapshot(docs, changes, read_time):
        for doc in docs:
            print(u'{} => {}'.format(doc.id, doc.to_dict()))

    # Watch this query
    query_watch = query_ref.on_snapshot(on_snapshot)

    # Terminate this watch
    query_watch.unsubscribe()

def on_snapshot(self, callback):
    """Monitor the documents in this collection that match this query.

    This starts a watch on this query using a background thread. The
    provided callback is run on the snapshot of the documents.

    Args:
        callback (Callable): a callback to run when a change occurs; it
            is passed the document snapshots, the list of changes, and
            the read time, as in the example below.

    Example:
        from google.cloud import firestore_v1beta1

        db = firestore_v1beta1.Client()
        query_ref = db.collection(u'users').where("user", "==", u'Ada')

        def on_snapshot(docs, changes, read_time):
            for doc in docs:
                print(u'{} => {}'.format(doc.id, doc.to_dict()))

        # Watch this query
        query_watch = query_ref.on_snapshot(on_snapshot)

        # Terminate this watch
        query_watch.unsubscribe()
    """
    return Watch.for_query(
        self, callback, document.DocumentSnapshot, document.DocumentReference
    )
Constructs a ModelReference. Args: model_id (str): the ID of the model. Returns: google.cloud.bigquery.model.ModelReference: A ModelReference for a model in this dataset. def _get_model_reference(self, model_id): """Constructs a ModelReference. Args: model_id (str): the ID of the model. Returns: google.cloud.bigquery.model.ModelReference: A ModelReference for a model in this dataset. """ return ModelReference.from_api_repr( {"projectId": self.project, "datasetId": self.dataset_id, "modelId": model_id} )
Construct the API resource representation of this access entry Returns: Dict[str, object]: Access entry represented as an API resource def to_api_repr(self): """Construct the API resource representation of this access entry Returns: Dict[str, object]: Access entry represented as an API resource """ resource = {self.entity_type: self.entity_id} if self.role is not None: resource["role"] = self.role return resource
Factory: construct an access entry given its API representation Args: resource (Dict[str, object]): Access entry resource representation returned from the API Returns: google.cloud.bigquery.dataset.AccessEntry: Access entry parsed from ``resource``. Raises: ValueError: If the resource has more keys than ``role`` and one additional key. def from_api_repr(cls, resource): """Factory: construct an access entry given its API representation Args: resource (Dict[str, object]): Access entry resource representation returned from the API Returns: google.cloud.bigquery.dataset.AccessEntry: Access entry parsed from ``resource``. Raises: ValueError: If the resource has more keys than ``role`` and one additional key. """ entry = resource.copy() role = entry.pop("role", None) entity_type, entity_id = entry.popitem() if len(entry) != 0: raise ValueError("Entry has unexpected keys remaining.", entry) return cls(role, entity_type, entity_id)
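A sketch with a hypothetical resource; ``userByEmail`` is one of the entity types the API can return, and the email address is made up.

entry = AccessEntry.from_api_repr(
    {"role": "READER", "userByEmail": "person@example.com"}
)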
Factory: construct a dataset reference given its API representation Args: resource (Dict[str, str]): Dataset reference resource representation returned from the API Returns: google.cloud.bigquery.dataset.DatasetReference: Dataset reference parsed from ``resource``. def from_api_repr(cls, resource): """Factory: construct a dataset reference given its API representation Args: resource (Dict[str, str]): Dataset reference resource representation returned from the API Returns: google.cloud.bigquery.dataset.DatasetReference: Dataset reference parsed from ``resource``. """ project = resource["projectId"] dataset_id = resource["datasetId"] return cls(project, dataset_id)
Construct a dataset reference from dataset ID string.

Args:
    dataset_id (str):
        A dataset ID in standard SQL format. If ``default_project``
        is not specified, this must include both the project ID and
        the dataset ID, separated by ``.``.
    default_project (str):
        Optional. The project ID to use when ``dataset_id`` does not
        include a project ID.

Returns:
    DatasetReference: Dataset reference parsed from ``dataset_id``.

Examples:
    >>> DatasetReference.from_string('my-project-id.some_dataset')
    DatasetReference('my-project-id', 'some_dataset')

Raises:
    ValueError:
        If ``dataset_id`` is not a fully-qualified dataset ID in
        standard SQL format.

def from_string(cls, dataset_id, default_project=None):
    """Construct a dataset reference from dataset ID string.

    Args:
        dataset_id (str):
            A dataset ID in standard SQL format. If ``default_project``
            is not specified, this must include both the project ID and
            the dataset ID, separated by ``.``.
        default_project (str):
            Optional. The project ID to use when ``dataset_id`` does not
            include a project ID.

    Returns:
        DatasetReference: Dataset reference parsed from ``dataset_id``.

    Examples:
        >>> DatasetReference.from_string('my-project-id.some_dataset')
        DatasetReference('my-project-id', 'some_dataset')

    Raises:
        ValueError:
            If ``dataset_id`` is not a fully-qualified dataset ID in
            standard SQL format.
    """
    output_dataset_id = dataset_id
    output_project_id = default_project
    parts = dataset_id.split(".")

    if len(parts) == 1 and not default_project:
        raise ValueError(
            "When default_project is not set, dataset_id must be a "
            "fully-qualified dataset ID in standard SQL format. "
            'e.g. "project.dataset_id", got {}'.format(dataset_id)
        )
    elif len(parts) == 2:
        output_project_id, output_dataset_id = parts
    elif len(parts) > 2:
        raise ValueError(
            "Too many parts in dataset_id. Expected a fully-qualified "
            "dataset ID in standard SQL format. e.g. "
            '"project.dataset_id", got {}'.format(dataset_id)
        )

    return cls(output_project_id, output_dataset_id)
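When only the dataset ID is given, ``default_project`` supplies the project; for example:

    >>> DatasetReference.from_string('some_dataset', default_project='my-project-id')
    DatasetReference('my-project-id', 'some_dataset')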
List[google.cloud.bigquery.dataset.AccessEntry]: Dataset's access entries. ``role`` augments the entity type and must be present **unless** the entity type is ``view``. Raises: TypeError: If 'value' is not a sequence ValueError: If any item in the sequence is not an :class:`~google.cloud.bigquery.dataset.AccessEntry`. def access_entries(self): """List[google.cloud.bigquery.dataset.AccessEntry]: Dataset's access entries. ``role`` augments the entity type and must be present **unless** the entity type is ``view``. Raises: TypeError: If 'value' is not a sequence ValueError: If any item in the sequence is not an :class:`~google.cloud.bigquery.dataset.AccessEntry`. """ entries = self._properties.get("access", []) return [AccessEntry.from_api_repr(entry) for entry in entries]
Union[datetime.datetime, None]: Datetime at which the dataset was created (:data:`None` until set from the server). def created(self): """Union[datetime.datetime, None]: Datetime at which the dataset was created (:data:`None` until set from the server). """ creation_time = self._properties.get("creationTime") if creation_time is not None: # creation_time will be in milliseconds. return google.cloud._helpers._datetime_from_microseconds( 1000.0 * float(creation_time) )
Union[datetime.datetime, None]: Datetime at which the dataset was last modified (:data:`None` until set from the server). def modified(self): """Union[datetime.datetime, None]: Datetime at which the dataset was last modified (:data:`None` until set from the server). """ modified_time = self._properties.get("lastModifiedTime") if modified_time is not None: # modified_time will be in milliseconds. return google.cloud._helpers._datetime_from_microseconds( 1000.0 * float(modified_time) )
Factory: construct a dataset given its API representation

Args:
    resource (Dict[str, object]):
        Dataset resource representation returned from the API

Returns:
    google.cloud.bigquery.dataset.Dataset:
        Dataset parsed from ``resource``.

def from_api_repr(cls, resource):
    """Factory: construct a dataset given its API representation

    Args:
        resource (Dict[str, object]):
            Dataset resource representation returned from the API

    Returns:
        google.cloud.bigquery.dataset.Dataset:
            Dataset parsed from ``resource``.
    """
    if (
        "datasetReference" not in resource
        or "datasetId" not in resource["datasetReference"]
    ):
        raise KeyError(
            "Resource lacks required identity information:"
            '["datasetReference"]["datasetId"]'
        )
    project_id = resource["datasetReference"]["projectId"]
    dataset_id = resource["datasetReference"]["datasetId"]
    dataset = cls(DatasetReference(project_id, dataset_id))
    dataset._properties = copy.deepcopy(resource)
    return dataset
Grab prefixes after a :class:`~google.api_core.page_iterator.Page` started.

:type iterator: :class:`~google.api_core.page_iterator.Iterator`
:param iterator: The iterator that is currently in use.

:type page: :class:`~google.api_core.page_iterator.Page`
:param page: The page that was just created.

:type response: dict
:param response: The JSON API response for a page of blobs.

def _blobs_page_start(iterator, page, response):
    """Grab prefixes after a :class:`~google.api_core.page_iterator.Page` started.

    :type iterator: :class:`~google.api_core.page_iterator.Iterator`
    :param iterator: The iterator that is currently in use.

    :type page: :class:`~google.api_core.page_iterator.Page`
    :param page: The page that was just created.

    :type response: dict
    :param response: The JSON API response for a page of blobs.
    """
    page.prefixes = tuple(response.get("prefixes", ()))
    iterator.prefixes.update(page.prefixes)
Convert a JSON blob to the native object. .. note:: This assumes that the ``bucket`` attribute has been added to the iterator after being created. :type iterator: :class:`~google.api_core.page_iterator.Iterator` :param iterator: The iterator that has retrieved the item. :type item: dict :param item: An item to be converted to a blob. :rtype: :class:`.Blob` :returns: The next blob in the page. def _item_to_blob(iterator, item): """Convert a JSON blob to the native object. .. note:: This assumes that the ``bucket`` attribute has been added to the iterator after being created. :type iterator: :class:`~google.api_core.page_iterator.Iterator` :param iterator: The iterator that has retrieved the item. :type item: dict :param item: An item to be converted to a blob. :rtype: :class:`.Blob` :returns: The next blob in the page. """ name = item.get("name") blob = Blob(name, bucket=iterator.bucket) blob._set_properties(item) return blob
Factory: construct instance from resource.

:type resource: dict
:param resource: mapping as returned from API call.

:rtype: :class:`LifecycleRuleSetStorageClass`
:returns: Instance created from resource.

def from_api_repr(cls, resource):
    """Factory: construct instance from resource.

    :type resource: dict
    :param resource: mapping as returned from API call.

    :rtype: :class:`LifecycleRuleSetStorageClass`
    :returns: Instance created from resource.
    """
    action = resource["action"]
    instance = cls(action["storageClass"], _factory=True)
    instance.update(resource)
    return instance
Factory: construct instance from resource.

:type bucket: :class:`Bucket`
:param bucket: Bucket for which this instance is the policy.

:type resource: dict
:param resource: mapping as returned from API call.

:rtype: :class:`IAMConfiguration`
:returns: Instance created from resource.

def from_api_repr(cls, resource, bucket):
    """Factory: construct instance from resource.

    :type bucket: :class:`Bucket`
    :param bucket: Bucket for which this instance is the policy.

    :type resource: dict
    :param resource: mapping as returned from API call.

    :rtype: :class:`IAMConfiguration`
    :returns: Instance created from resource.
    """
    instance = cls(bucket)
    instance.update(resource)
    return instance
Deadline for changing :attr:`bucket_policy_only_enabled` from true to false.

If the bucket's :attr:`bucket_policy_only_enabled` is true, this property
is the time after which that setting becomes immutable.

If the bucket's :attr:`bucket_policy_only_enabled` is false, this property
is ``None``.

:rtype: Union[:class:`datetime.datetime`, None]
:returns: (readonly) Time after which :attr:`bucket_policy_only_enabled`
          will be frozen as true.

def bucket_policy_only_locked_time(self):
    """Deadline for changing :attr:`bucket_policy_only_enabled` from true to false.

    If the bucket's :attr:`bucket_policy_only_enabled` is true, this
    property is the time after which that setting becomes immutable.

    If the bucket's :attr:`bucket_policy_only_enabled` is false, this
    property is ``None``.

    :rtype: Union[:class:`datetime.datetime`, None]
    :returns: (readonly) Time after which :attr:`bucket_policy_only_enabled`
              will be frozen as true.
    """
    bpo = self.get("bucketPolicyOnly", {})
    stamp = bpo.get("lockedTime")
    if stamp is not None:
        stamp = _rfc3339_to_datetime(stamp)
    return stamp
Set the properties for the current object. :type value: dict or :class:`google.cloud.storage.batch._FutureDict` :param value: The properties to be set. def _set_properties(self, value): """Set the properties for the current object. :type value: dict or :class:`google.cloud.storage.batch._FutureDict` :param value: The properties to be set. """ self._label_removals.clear() return super(Bucket, self)._set_properties(value)
Factory constructor for blob object. .. note:: This will not make an HTTP request; it simply instantiates a blob object owned by this bucket. :type blob_name: str :param blob_name: The name of the blob to be instantiated. :type chunk_size: int :param chunk_size: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification. :type encryption_key: bytes :param encryption_key: Optional 32 byte encryption key for customer-supplied encryption. :type kms_key_name: str :param kms_key_name: Optional resource name of KMS key used to encrypt blob's content. :type generation: long :param generation: Optional. If present, selects a specific revision of this object. :rtype: :class:`google.cloud.storage.blob.Blob` :returns: The blob object created. def blob( self, blob_name, chunk_size=None, encryption_key=None, kms_key_name=None, generation=None, ): """Factory constructor for blob object. .. note:: This will not make an HTTP request; it simply instantiates a blob object owned by this bucket. :type blob_name: str :param blob_name: The name of the blob to be instantiated. :type chunk_size: int :param chunk_size: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification. :type encryption_key: bytes :param encryption_key: Optional 32 byte encryption key for customer-supplied encryption. :type kms_key_name: str :param kms_key_name: Optional resource name of KMS key used to encrypt blob's content. :type generation: long :param generation: Optional. If present, selects a specific revision of this object. :rtype: :class:`google.cloud.storage.blob.Blob` :returns: The blob object created. """ return Blob( name=blob_name, bucket=self, chunk_size=chunk_size, encryption_key=encryption_key, kms_key_name=kms_key_name, generation=generation, )
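A sketch distinguishing local construction from API traffic; ``bucket`` is assumed to be an existing :class:`Bucket`, and the object name is illustrative.

blob = bucket.blob("path/to/object.txt")  # no request is sent here
blob.upload_from_string("hello world")    # this call does hit the API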
Factory: create a notification resource for the bucket. See: :class:`.BucketNotification` for parameters. :rtype: :class:`.BucketNotification` def notification( self, topic_name, topic_project=None, custom_attributes=None, event_types=None, blob_name_prefix=None, payload_format=NONE_PAYLOAD_FORMAT, ): """Factory: create a notification resource for the bucket. See: :class:`.BucketNotification` for parameters. :rtype: :class:`.BucketNotification` """ return BucketNotification( self, topic_name, topic_project=topic_project, custom_attributes=custom_attributes, event_types=event_types, blob_name_prefix=blob_name_prefix, payload_format=payload_format, )
Creates current bucket. If the bucket already exists, will raise :class:`google.cloud.exceptions.Conflict`. This implements "storage.buckets.insert". If :attr:`user_project` is set, bills the API request to that project. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :type project: str :param project: Optional. The project under which the bucket is to be created. If not passed, uses the project set on the client. :raises ValueError: if :attr:`user_project` is set. :raises ValueError: if ``project`` is None and client's :attr:`project` is also None. :type location: str :param location: Optional. The location of the bucket. If not passed, the default location, US, will be used. See https://cloud.google.com/storage/docs/bucket-locations def create(self, client=None, project=None, location=None): """Creates current bucket. If the bucket already exists, will raise :class:`google.cloud.exceptions.Conflict`. This implements "storage.buckets.insert". If :attr:`user_project` is set, bills the API request to that project. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :type project: str :param project: Optional. The project under which the bucket is to be created. If not passed, uses the project set on the client. :raises ValueError: if :attr:`user_project` is set. :raises ValueError: if ``project`` is None and client's :attr:`project` is also None. :type location: str :param location: Optional. The location of the bucket. If not passed, the default location, US, will be used. See https://cloud.google.com/storage/docs/bucket-locations """ if self.user_project is not None: raise ValueError("Cannot create bucket with 'user_project' set.") client = self._require_client(client) if project is None: project = client.project if project is None: raise ValueError("Client project not set: pass an explicit project.") query_params = {"project": project} properties = {key: self._properties[key] for key in self._changes} properties["name"] = self.name if location is not None: properties["location"] = location api_response = client._connection.api_request( method="POST", path="/b", query_params=query_params, data=properties, _target_object=self, ) self._set_properties(api_response)
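A minimal creation sketch; the bucket name and location are illustrative.

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-new-bucket")
bucket.create(location="EU")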
Sends all changed properties in a PATCH request. Updates the ``_properties`` with the response from the backend. If :attr:`user_project` is set, bills the API request to that project. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: the client to use. If not passed, falls back to the ``client`` stored on the current object. def patch(self, client=None): """Sends all changed properties in a PATCH request. Updates the ``_properties`` with the response from the backend. If :attr:`user_project` is set, bills the API request to that project. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: the client to use. If not passed, falls back to the ``client`` stored on the current object. """ # Special case: For buckets, it is possible that labels are being # removed; this requires special handling. if self._label_removals: self._changes.add("labels") self._properties.setdefault("labels", {}) for removed_label in self._label_removals: self._properties["labels"][removed_label] = None # Call the superclass method. return super(Bucket, self).patch(client=client)
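To see why removed labels need this special handling, consider a sketch (names hypothetical): dropping a key from the labels dict locally only reaches the backend because ``patch()`` re-sends the removed key with a ``None`` value.

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

labels = bucket.labels      # a *copy* of the current labels
labels.pop("env", None)     # remove a key locally
bucket.labels = labels      # the setter records the removal
bucket.patch()              # sends {"labels": {..., "env": None}}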
Get a blob object by name. This will return None if the blob doesn't exist: .. literalinclude:: snippets.py :start-after: [START get_blob] :end-before: [END get_blob] If :attr:`user_project` is set, bills the API request to that project. :type blob_name: str :param blob_name: The name of the blob to retrieve. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :type encryption_key: bytes :param encryption_key: Optional 32 byte encryption key for customer-supplied encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied. :type generation: long :param generation: Optional. If present, selects a specific revision of this object. :type kwargs: dict :param kwargs: Keyword arguments to pass to the :class:`~google.cloud.storage.blob.Blob` constructor. :rtype: :class:`google.cloud.storage.blob.Blob` or None :returns: The blob object if it exists, otherwise None. def get_blob( self, blob_name, client=None, encryption_key=None, generation=None, **kwargs ): """Get a blob object by name. This will return None if the blob doesn't exist: .. literalinclude:: snippets.py :start-after: [START get_blob] :end-before: [END get_blob] If :attr:`user_project` is set, bills the API request to that project. :type blob_name: str :param blob_name: The name of the blob to retrieve. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :type encryption_key: bytes :param encryption_key: Optional 32 byte encryption key for customer-supplied encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied. :type generation: long :param generation: Optional. If present, selects a specific revision of this object. :type kwargs: dict :param kwargs: Keyword arguments to pass to the :class:`~google.cloud.storage.blob.Blob` constructor. :rtype: :class:`google.cloud.storage.blob.Blob` or None :returns: The blob object if it exists, otherwise None. """ blob = Blob( bucket=self, name=blob_name, encryption_key=encryption_key, generation=generation, **kwargs ) try: # NOTE: This will not fail immediately in a batch. However, when # Batch.finish() is called, the resulting `NotFound` will be # raised. blob.reload(client=client) except NotFound: return None else: return blob
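Usage sketch (the blob name is hypothetical); the ``None`` return corresponds to a backend 404:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

blob = bucket.get_blob("data/report.csv")  # GET for the blob metadata
if blob is None:
    print("blob does not exist")
else:
    print(blob.size, blob.updated)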
Return an iterator used to find blobs in the bucket. If :attr:`user_project` is set, bills the API request to that project. :type max_results: int :param max_results: (Optional) The maximum number of blobs in each page of results from this request. Non-positive values are ignored. Defaults to a sensible value set by the API. :type page_token: str :param page_token: (Optional) If present, return the next batch of blobs, using the value, which must correspond to the ``nextPageToken`` value returned in the previous response. Deprecated: use the ``pages`` property of the returned iterator instead of manually passing the token. :type prefix: str :param prefix: (Optional) prefix used to filter blobs. :type delimiter: str :param delimiter: (Optional) Delimiter, used with ``prefix`` to emulate hierarchy. :type versions: bool :param versions: (Optional) Whether object versions should be returned as separate blobs. :type projection: str :param projection: (Optional) If used, must be 'full' or 'noAcl'. Defaults to ``'noAcl'``. Specifies the set of properties to return. :type fields: str :param fields: (Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example to get a partial response with just the next page token and the language of each blob returned: ``'items/contentLanguage,nextPageToken'``. :type client: :class:`~google.cloud.storage.client.Client` :param client: (Optional) The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :rtype: :class:`~google.api_core.page_iterator.Iterator` :returns: Iterator of all :class:`~google.cloud.storage.blob.Blob` in this bucket matching the arguments. def list_blobs( self, max_results=None, page_token=None, prefix=None, delimiter=None, versions=None, projection="noAcl", fields=None, client=None, ): """Return an iterator used to find blobs in the bucket. If :attr:`user_project` is set, bills the API request to that project. :type max_results: int :param max_results: (Optional) The maximum number of blobs in each page of results from this request. Non-positive values are ignored. Defaults to a sensible value set by the API. :type page_token: str :param page_token: (Optional) If present, return the next batch of blobs, using the value, which must correspond to the ``nextPageToken`` value returned in the previous response. Deprecated: use the ``pages`` property of the returned iterator instead of manually passing the token. :type prefix: str :param prefix: (Optional) prefix used to filter blobs. :type delimiter: str :param delimiter: (Optional) Delimiter, used with ``prefix`` to emulate hierarchy. :type versions: bool :param versions: (Optional) Whether object versions should be returned as separate blobs. :type projection: str :param projection: (Optional) If used, must be 'full' or 'noAcl'. Defaults to ``'noAcl'``. Specifies the set of properties to return. :type fields: str :param fields: (Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example to get a partial response with just the next page token and the language of each blob returned: ``'items/contentLanguage,nextPageToken'``. :type client: :class:`~google.cloud.storage.client.Client` :param client: (Optional) The client to use. If not passed, falls back to the ``client`` stored on the current bucket. 
:rtype: :class:`~google.api_core.page_iterator.Iterator` :returns: Iterator of all :class:`~google.cloud.storage.blob.Blob` in this bucket matching the arguments. """ extra_params = {"projection": projection} if prefix is not None: extra_params["prefix"] = prefix if delimiter is not None: extra_params["delimiter"] = delimiter if versions is not None: extra_params["versions"] = versions if fields is not None: extra_params["fields"] = fields if self.user_project is not None: extra_params["userProject"] = self.user_project client = self._require_client(client) path = self.path + "/o" iterator = page_iterator.HTTPIterator( client=client, api_request=client._connection.api_request, path=path, item_to_value=_item_to_blob, page_token=page_token, max_results=max_results, extra_params=extra_params, page_start=_blobs_page_start, ) iterator.bucket = self iterator.prefixes = set() return iterator
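A sketch of hierarchy emulation with ``prefix`` and ``delimiter`` (names hypothetical); note that ``iterator.prefixes`` is only populated as pages are consumed:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

iterator = bucket.list_blobs(prefix="logs/", delimiter="/")
for blob in iterator:       # blobs directly under logs/
    print(blob.name)
print(iterator.prefixes)    # e.g. {'logs/2019/'}; filled in during iteration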
List Pub/Sub notifications for this bucket. See: https://cloud.google.com/storage/docs/json_api/v1/notifications/list If :attr:`user_project` is set, bills the API request to that project. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :rtype: :class:`~google.api_core.page_iterator.Iterator` :returns: iterator of :class:`.BucketNotification` instances def list_notifications(self, client=None): """List Pub/Sub notifications for this bucket. See: https://cloud.google.com/storage/docs/json_api/v1/notifications/list If :attr:`user_project` is set, bills the API request to that project. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :rtype: :class:`~google.api_core.page_iterator.Iterator` :returns: iterator of :class:`.BucketNotification` instances """ client = self._require_client(client) path = self.path + "/notificationConfigs" iterator = page_iterator.HTTPIterator( client=client, api_request=client._connection.api_request, path=path, item_to_value=_item_to_notification, ) iterator.bucket = self return iterator
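Consuming the returned iterator (names hypothetical):

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

for notification in bucket.list_notifications():
    print(notification.notification_id, notification.topic_name)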
Delete this bucket. The bucket **must** be empty in order to submit a delete request. If ``force=True`` is passed, this will first attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket). If the bucket doesn't exist, this will raise :class:`google.cloud.exceptions.NotFound`. If the bucket is not empty (and ``force=False``), will raise :class:`google.cloud.exceptions.Conflict`. If ``force=True`` and the bucket contains more than 256 objects / blobs this will cowardly refuse to delete the objects (or the bucket). This is to prevent accidental bucket deletion and to prevent extremely long runtime of this method. If :attr:`user_project` is set, bills the API request to that project. :type force: bool :param force: If True, empties the bucket's objects then deletes it. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :raises: :class:`ValueError` if ``force`` is ``True`` and the bucket contains more than 256 objects / blobs. def delete(self, force=False, client=None): """Delete this bucket. The bucket **must** be empty in order to submit a delete request. If ``force=True`` is passed, this will first attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket). If the bucket doesn't exist, this will raise :class:`google.cloud.exceptions.NotFound`. If the bucket is not empty (and ``force=False``), will raise :class:`google.cloud.exceptions.Conflict`. If ``force=True`` and the bucket contains more than 256 objects / blobs this will cowardly refuse to delete the objects (or the bucket). This is to prevent accidental bucket deletion and to prevent extremely long runtime of this method. If :attr:`user_project` is set, bills the API request to that project. :type force: bool :param force: If True, empties the bucket's objects then deletes it. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :raises: :class:`ValueError` if ``force`` is ``True`` and the bucket contains more than 256 objects / blobs. """ client = self._require_client(client) query_params = {} if self.user_project is not None: query_params["userProject"] = self.user_project if force: blobs = list( self.list_blobs( max_results=self._MAX_OBJECTS_FOR_ITERATION + 1, client=client ) ) if len(blobs) > self._MAX_OBJECTS_FOR_ITERATION: message = ( "Refusing to delete bucket with more than " "%d objects. If you actually want to delete " "this bucket, please delete the objects " "yourself before calling Bucket.delete()." ) % (self._MAX_OBJECTS_FOR_ITERATION,) raise ValueError(message) # Ignore 404 errors on delete. self.delete_blobs(blobs, on_error=lambda blob: None, client=client) # We intentionally pass `_target_object=None` since a DELETE # request has no response value (whether in a standard request or # in a batch request). client._connection.api_request( method="DELETE", path=self.path, query_params=query_params, _target_object=None, )
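A force-delete sketch (the bucket name is hypothetical), handling the 256-object guard explicitly:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-scratch-bucket")

try:
    bucket.delete(force=True)   # empties the bucket, then deletes it
except ValueError:
    # More than 256 objects: empty the bucket ourselves first.
    for blob in bucket.list_blobs():
        blob.delete()
    bucket.delete()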
Deletes a blob from the current bucket. If the blob isn't found (backend 404), raises a :class:`google.cloud.exceptions.NotFound`. For example: .. literalinclude:: snippets.py :start-after: [START delete_blob] :end-before: [END delete_blob] If :attr:`user_project` is set, bills the API request to that project. :type blob_name: str :param blob_name: A blob name to delete. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :type generation: long :param generation: Optional. If present, permanently deletes a specific revision of this object. :raises: :class:`google.cloud.exceptions.NotFound`. To suppress the exception, call :meth:`delete_blobs`, passing a no-op ``on_error`` callback, e.g.: .. literalinclude:: snippets.py :start-after: [START delete_blobs] :end-before: [END delete_blobs] def delete_blob(self, blob_name, client=None, generation=None): """Deletes a blob from the current bucket. If the blob isn't found (backend 404), raises a :class:`google.cloud.exceptions.NotFound`. For example: .. literalinclude:: snippets.py :start-after: [START delete_blob] :end-before: [END delete_blob] If :attr:`user_project` is set, bills the API request to that project. :type blob_name: str :param blob_name: A blob name to delete. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :type generation: long :param generation: Optional. If present, permanently deletes a specific revision of this object. :raises: :class:`google.cloud.exceptions.NotFound`. To suppress the exception, call :meth:`delete_blobs`, passing a no-op ``on_error`` callback, e.g.: .. literalinclude:: snippets.py :start-after: [START delete_blobs] :end-before: [END delete_blobs] """ client = self._require_client(client) blob = Blob(blob_name, bucket=self, generation=generation) # We intentionally pass `_target_object=None` since a DELETE # request has no response value (whether in a standard request or # in a batch request). client._connection.api_request( method="DELETE", path=blob.path, query_params=blob._query_params, _target_object=None, )
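Usage sketch (the blob name is hypothetical); callers must handle :class:`NotFound` themselves if the blob may already be gone:

from google.cloud import storage
from google.cloud.exceptions import NotFound

client = storage.Client()
bucket = client.get_bucket("my-bucket")

try:
    bucket.delete_blob("data/stale.csv")
except NotFound:
    pass  # already deleted; treat as success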
Deletes a list of blobs from the current bucket. Uses :meth:`delete_blob` to delete each individual blob. If :attr:`user_project` is set, bills the API request to that project. :type blobs: list :param blobs: A list of :class:`~google.cloud.storage.blob.Blob` instances or blob names to delete. :type on_error: callable :param on_error: (Optional) Takes a single argument: ``blob``. Called once for each blob raising :class:`~google.cloud.exceptions.NotFound`; otherwise, the exception is propagated. :type client: :class:`~google.cloud.storage.client.Client` :param client: (Optional) The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :raises: :class:`~google.cloud.exceptions.NotFound` (if `on_error` is not passed). def delete_blobs(self, blobs, on_error=None, client=None): """Deletes a list of blobs from the current bucket. Uses :meth:`delete_blob` to delete each individual blob. If :attr:`user_project` is set, bills the API request to that project. :type blobs: list :param blobs: A list of :class:`~google.cloud.storage.blob.Blob` instances or blob names to delete. :type on_error: callable :param on_error: (Optional) Takes a single argument: ``blob``. Called once for each blob raising :class:`~google.cloud.exceptions.NotFound`; otherwise, the exception is propagated. :type client: :class:`~google.cloud.storage.client.Client` :param client: (Optional) The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :raises: :class:`~google.cloud.exceptions.NotFound` (if `on_error` is not passed). """ for blob in blobs: try: blob_name = blob if not isinstance(blob_name, six.string_types): blob_name = blob.name self.delete_blob(blob_name, client=client) except NotFound: if on_error is not None: on_error(blob) else: raise
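A sketch of the ``on_error`` hook (names hypothetical): swallowing the 404 means one missing blob does not abort the rest of the batch.

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

names = ["tmp/a.txt", "tmp/b.txt", "tmp/missing.txt"]
# Log and continue instead of propagating NotFound.
bucket.delete_blobs(names, on_error=lambda blob: print("skipped:", blob))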
Copy the given blob to the given bucket, optionally with a new name. If :attr:`user_project` is set, bills the API request to that project. :type blob: :class:`google.cloud.storage.blob.Blob` :param blob: The blob to be copied. :type destination_bucket: :class:`google.cloud.storage.bucket.Bucket` :param destination_bucket: The bucket into which the blob should be copied. :type new_name: str :param new_name: (optional) the new name for the copied file. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :type preserve_acl: bool :param preserve_acl: Optional. Copies ACL from old blob to new blob. Default: True. :type source_generation: long :param source_generation: Optional. The generation of the blob to be copied. :rtype: :class:`google.cloud.storage.blob.Blob` :returns: The new Blob. def copy_blob( self, blob, destination_bucket, new_name=None, client=None, preserve_acl=True, source_generation=None, ): """Copy the given blob to the given bucket, optionally with a new name. If :attr:`user_project` is set, bills the API request to that project. :type blob: :class:`google.cloud.storage.blob.Blob` :param blob: The blob to be copied. :type destination_bucket: :class:`google.cloud.storage.bucket.Bucket` :param destination_bucket: The bucket into which the blob should be copied. :type new_name: str :param new_name: (optional) the new name for the copied file. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :type preserve_acl: bool :param preserve_acl: Optional. Copies ACL from old blob to new blob. Default: True. :type source_generation: long :param source_generation: Optional. The generation of the blob to be copied. :rtype: :class:`google.cloud.storage.blob.Blob` :returns: The new Blob. """ client = self._require_client(client) query_params = {} if self.user_project is not None: query_params["userProject"] = self.user_project if source_generation is not None: query_params["sourceGeneration"] = source_generation if new_name is None: new_name = blob.name new_blob = Blob(bucket=destination_bucket, name=new_name) api_path = blob.path + "/copyTo" + new_blob.path copy_result = client._connection.api_request( method="POST", path=api_path, query_params=query_params, _target_object=new_blob, ) if not preserve_acl: new_blob.acl.save(acl={}, client=client) new_blob._set_properties(copy_result) return new_blob
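A cross-bucket copy sketch (all names hypothetical); note that ``preserve_acl=False`` costs an extra request to reset the ACL:

from google.cloud import storage

client = storage.Client()
src = client.get_bucket("my-bucket")
dst = client.get_bucket("my-archive-bucket")

blob = src.blob("data/report.csv")  # must already exist server-side
copied = src.copy_blob(blob, dst, new_name="2019/report.csv")
print(copied.name, copied.bucket.name)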
Rename the given blob using copy and delete operations. If :attr:`user_project` is set, bills the API request to that project. Effectively, copies blob to the same bucket with a new name, then deletes the blob. .. warning:: This method will first duplicate the data and then delete the old blob. For very large objects, renaming can therefore be a (temporarily) costly and slow operation. :type blob: :class:`google.cloud.storage.blob.Blob` :param blob: The blob to be renamed. :type new_name: str :param new_name: The new name for this blob. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :rtype: :class:`Blob` :returns: The newly-renamed blob. def rename_blob(self, blob, new_name, client=None): """Rename the given blob using copy and delete operations. If :attr:`user_project` is set, bills the API request to that project. Effectively, copies blob to the same bucket with a new name, then deletes the blob. .. warning:: This method will first duplicate the data and then delete the old blob. For very large objects, renaming can therefore be a (temporarily) costly and slow operation. :type blob: :class:`google.cloud.storage.blob.Blob` :param blob: The blob to be renamed. :type new_name: str :param new_name: The new name for this blob. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :rtype: :class:`Blob` :returns: The newly-renamed blob. """ same_name = blob.name == new_name new_blob = self.copy_blob(blob, self, new_name, client=client) if not same_name: blob.delete(client=client) return new_blob
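A rename sketch (names hypothetical); keep in mind this is copy-then-delete, not an atomic move:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

blob = bucket.blob("drafts/report.csv")
renamed = bucket.rename_blob(blob, "final/report.csv")
print(renamed.name)  # 'final/report.csv'; the source object is deleted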
Set default KMS encryption key for objects in the bucket. :type value: str or None :param value: new KMS key name (None to clear any existing key). def default_kms_key_name(self, value): """Set default KMS encryption key for objects in the bucket. :type value: str or None :param value: new KMS key name (None to clear any existing key). """ encryption_config = self._properties.get("encryption", {}) encryption_config["defaultKmsKeyName"] = value self._patch_property("encryption", encryption_config)
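Setter sketch; the key name below is a hypothetical full KMS resource path, and the change only reaches the backend on a subsequent ``patch()``:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

bucket.default_kms_key_name = (
    "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
)
bucket.patch()  # persists the new encryption configuration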
Retrieve or set labels assigned to this bucket. See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels .. note:: The getter for this property returns a dict which is a *copy* of the bucket's labels. Mutating that dict has no effect unless you then re-assign the dict via the setter. E.g.: >>> labels = bucket.labels >>> labels['new_key'] = 'some-label' >>> del labels['old_key'] >>> bucket.labels = labels >>> bucket.update() :setter: Set labels for this bucket. :getter: Gets the labels for this bucket. :rtype: :class:`dict` :returns: Name-value pairs (string->string) labelling the bucket. def labels(self): """Retrieve or set labels assigned to this bucket. See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels .. note:: The getter for this property returns a dict which is a *copy* of the bucket's labels. Mutating that dict has no effect unless you then re-assign the dict via the setter. E.g.: >>> labels = bucket.labels >>> labels['new_key'] = 'some-label' >>> del labels['old_key'] >>> bucket.labels = labels >>> bucket.update() :setter: Set labels for this bucket. :getter: Gets the labels for this bucket. :rtype: :class:`dict` :returns: Name-value pairs (string->string) labelling the bucket. """ labels = self._properties.get("labels") if labels is None: return {} return copy.deepcopy(labels)
Set labels assigned to this bucket. See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels :type mapping: :class:`dict` :param mapping: Name-value pairs (string->string) labelling the bucket. def labels(self, mapping): """Set labels assigned to this bucket. See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels :type mapping: :class:`dict` :param mapping: Name-value pairs (string->string) labelling the bucket. """ # If any labels have been expressly removed, we need to track this # so that a future .patch() call can do the correct thing. existing = set([k for k in self.labels.keys()]) incoming = set([k for k in mapping.keys()]) self._label_removals = self._label_removals.union(existing.difference(incoming)) # Actually update the labels on the object. self._patch_property("labels", copy.deepcopy(mapping))
Retrieve IAM configuration for this bucket. :rtype: :class:`IAMConfiguration` :returns: an instance for managing the bucket's IAM configuration. def iam_configuration(self): """Retrieve IAM configuration for this bucket. :rtype: :class:`IAMConfiguration` :returns: an instance for managing the bucket's IAM configuration. """ info = self._properties.get("iamConfiguration", {}) return IAMConfiguration.from_api_repr(info, self)
Retrieve or set lifecycle rules configured for this bucket. See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets .. note:: The getter for this property is a generator which yields *copies* of the bucket's lifecycle rules mappings. Mutating a yielded mapping has no effect unless you collect the rules into a list, modify that, and re-assign it via the setter. E.g.: >>> rules = list(bucket.lifecycle_rules) >>> rules.append({'action': {'type': 'Delete'}, 'condition': {'age': 365}}) >>> del rules[0] >>> bucket.lifecycle_rules = rules >>> bucket.update() :setter: Set lifecycle rules for this bucket. :getter: Gets the lifecycle rules for this bucket. :rtype: generator(dict) :returns: A sequence of mappings describing each lifecycle rule. def lifecycle_rules(self): """Retrieve or set lifecycle rules configured for this bucket. See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets .. note:: The getter for this property is a generator which yields *copies* of the bucket's lifecycle rules mappings. Mutating a yielded mapping has no effect unless you collect the rules into a list, modify that, and re-assign it via the setter. E.g.: >>> rules = list(bucket.lifecycle_rules) >>> rules.append({'action': {'type': 'Delete'}, 'condition': {'age': 365}}) >>> del rules[0] >>> bucket.lifecycle_rules = rules >>> bucket.update() :setter: Set lifecycle rules for this bucket. :getter: Gets the lifecycle rules for this bucket. :rtype: generator(dict) :returns: A sequence of mappings describing each lifecycle rule. """ info = self._properties.get("lifecycle", {}) for rule in info.get("rule", ()): action_type = rule["action"]["type"] if action_type == "Delete": yield LifecycleRuleDelete.from_api_repr(rule) elif action_type == "SetStorageClass": yield LifecycleRuleSetStorageClass.from_api_repr(rule) else: raise ValueError("Unknown lifecycle rule: {}".format(rule))
Set lifecycle rules configured for this bucket. See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets :type rules: list of dictionaries :param rules: A sequence of mappings describing each lifecycle rule. def lifecycle_rules(self, rules): """Set lifecycle rules configured for this bucket. See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets :type rules: list of dictionaries :param rules: A sequence of mappings describing each lifecycle rule. """ rules = [dict(rule) for rule in rules] # Convert helpers if needed self._patch_property("lifecycle", {"rule": rules})
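Setter sketch using the JSON API's plain-dict rule form (a hedged example: delete objects older than 365 days):

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")

bucket.lifecycle_rules = [
    {"action": {"type": "Delete"}, "condition": {"age": 365}},
]
bucket.patch()  # persists the lifecycle configuration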